rapidsai_public_repos/cucim/examples/python/gds_whole_slide/README.md
Example scripts used for benchmarking reads of uncompressed TIFF images using existing CPU-based tools (openslide-python, tifffile) as well as reads accelerated using `kvikio` with GPUDirect Storage enabled or disabled.

### GPUDirect setup/install

GPUDirect Storage (GDS) is only supported on certain Linux systems (e.g., at the time of writing: Ubuntu 18.04, Ubuntu 20.04, RHEL 8.3, RHEL 8.4, DGX workstations).

Also, not all CUDA-capable hardware supports GDS. For example, gaming-focused GPUs are currently not GDS-enabled. GDS is supported on T4, V100, A100 and RTX A6000 GPUs; see, for example:
https://docs.nvidia.com/gpudirect-storage/troubleshooting-guide/index.html#api-error-2

GPUDirect must be installed as described in
https://docs.nvidia.com/gpudirect-storage/troubleshooting-guide/index.html

As part of this, the Mellanox OFED drivers (MLNX_OFED) must be installed:
https://docs.nvidia.com/gpudirect-storage/troubleshooting-guide/index.html#mofed-req-install
https://network.nvidia.com/support/mlnx-ofed-matrix/

### Obtaining test data

Due to the large data size needed for benchmarking, no data is included in this repository. Many suitable images are publicly available. Assume we download a single image named `TUPAC-TR-467.svs`, which is available in the training data from this challenge:
https://tupac.grand-challenge.org/Dataset/

which corresponds to the following publication:

Veta, Mitko, et al. "Predicting breast tumor proliferation from whole-slide images: the TUPAC16 challenge." Medical Image Analysis 54 (2019): 111-121.

Because the demo only supports uncompressed data, it is necessary to first convert the data to a raw (uncompressed) TIFF image. This can be done using cuCIM >= 2022.12.00 via the following command line call:

```sh
cucim convert --tile-size 512 --overlap 0 --num-workers 12 --output-filename resize.tiff --compression RAW TUPAC-TR-467.svs
```

The scripts have the filename `resize.tiff` hardcoded, so you will have to modify the file name near the top of the scripts if a different image is to be used.

### Summary of demo files

- **benchmark_read.py**: Benchmark reads of a full image at the specified resolution level from an uncompressed multi-resolution TIFF image.
- **benchmark_round_trip.py**: Read from an uncompressed TIFF while writing to an uncompressed Zarr file with a tile size matching the TIFF image.
- **benchmark_zarr_write.py**: Benchmark writing of a CuPy array to an uncompressed Zarr file with the specified chunk size.
- **benchmark_zarr_write_lz4_via_dask.py**: Use Dask and `kvikio.zarr.GDSStore` to write an LZ4 lossless compressed Zarr array with the specified storage level.
- **lz4_nvcomp.py**: An LZ4 compressor for use with `kvikio.zarr.GDSStore`.
- **demo_implementation.py**: Implementations of tiled read/write that are used by the benchmarking scripts described above.

### Some commands from the GPUDirect Storage guide for checking system status

Below are some useful diagnostic commands extracted from the [GPUDirect Storage docs](https://docs.nvidia.com/gpudirect-storage/index.html).

To check the MLNX_OFED version:

```sh
ofed_info -s
```

To check GDS status, run:

```sh
/usr/local/cuda/gds/tools/gdscheck -p
```

You want to see at least NVMe supported in the driver configuration, e.g.

```
=====================
DRIVER CONFIGURATION:
=====================
NVMe : Supported
```

and under the GPU INFO, the device should be listed as "supports GDS", e.g.

```
=========
GPU INFO:
=========
GPU index 0 NVIDIA RTX A6000 bar:1 bar size (MiB):256 supports GDS, IOMMU State: Disable
```

Check nvidia-fs:

```sh
cat /proc/driver/nvidia-fs/stats
```

e.g.

```
GDS Version: 1.4.0.29
NVFS statistics(ver: 4.0)
NVFS Driver(version: 2.13.5)
Mellanox PeerDirect Supported: False
IO stats: Disabled, peer IO stats: Disabled
Logging level: info
```
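### Running the benchmarks

A minimal sketch of how the scripts might be invoked once `resize.tiff` has been created (the scripts locate the data via the `WHOLE_SLIDE_DATA_DIR` environment variable, falling back to the current directory when it is unset; the path below is hypothetical):

```sh
export WHOLE_SLIDE_DATA_DIR=/path/to/data   # directory containing resize.tiff
python benchmark_read.py
python benchmark_zarr_write.py
python benchmark_round_trip.py
```

Each script prints the measured durations and also saves them to an auto-incrementing `.npz` file (e.g. `read_times.npz`, `write_times.npz`) in the working directory.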
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/benchmark_zarr_write_lz4_via_dask.py
""" TODO: Currently LZ4-compressed images are a tiny bit larger than uncompressed. Need to look into why this is! """ import math import os import cupy as cp import kvikio.defaults import numpy as np from cupyx.profiler import benchmark from demo_implementation import cupy_to_zarr, get_n_tiles, read_tiled from lz4_nvcomp import LZ4NVCOMP from tifffile import TiffFile data_dir = os.environ.get("WHOLE_SLIDE_DATA_DIR", os.path.dirname("__file__")) fname = os.path.join(data_dir, "resize.tiff") if not os.path.exists(fname): raise RuntimeError(f"Could not find data file: {fname}") # make sure we are not in compatibility mode to ensure cuFile is being used # (when compat_mode() is True, POSIX will be used instead of libcufile.so) kvikio.defaults.compat_mode_reset(False) assert not kvikio.defaults.compat_mode() # set the number of threads to use kvikio.defaults.num_threads_reset(16) print(f"\t{kvikio.defaults.compat_mode() = }") print(f"\t{kvikio.defaults.get_num_threads() = }") print(f"\tkvikio task size = {kvikio.defaults.task_size()/1024**2} MB") n_buffer = 128 max_duration = 8 level = 1 with TiffFile(fname) as tif: page = tif.pages[level] page_shape = page.shape tile_shape = (page.tilelength, page.tilewidth, page.samplesperpixel) total_tiles = math.prod(get_n_tiles(page)) print(f"Resolution level {level}\n") print(f"\tshape: {page_shape}") print(f"\tstored as {total_tiles} tiles of shape {tile_shape}") # read the uint8 TIFF kwargs = dict(levels=[level], backend="kvikio-pread", n_buffer=n_buffer) image_gpu = read_tiled(fname, **kwargs)[0] # benchmark writing these CuPy outputs to Zarr with various chunk sizes # Note: nvcomp only supports integer and unsigned dtypes. # https://github.com/rapidsai/kvikio/blob/b0c6cedf43d1bc240c3ef1b38ebb9d89574a08ee/python/kvikio/nvcomp.py#L12-L21 # noqa: E501 dtypes = ["uint16"] chunk_shapes = [ (512, 512, 3), (1024, 1024, 3), (2048, 2048, 3), (4096, 4096, 3), ] backend = "dask" compressors = [None, LZ4NVCOMP()] kvikio.defaults.num_threads_reset(16) write_time_means = np.zeros( ((len(dtypes), len(chunk_shapes), len(compressors), 2)), dtype=float ) write_time_stds = np.zeros_like(write_time_means) for i, dtype in enumerate(dtypes): dtype = np.dtype(dtype) if dtype == np.uint8: img = image_gpu assert img.dtype == cp.uint8 elif dtype == np.uint16: img = image_gpu.astype(dtype) else: raise NotImplementedError( "LZ4 compression can only be tested for uint8 and uint16" ) for j, chunk_shape in enumerate(chunk_shapes): for k, compressor in enumerate(compressors): kwargs = dict( output_path=f"./image-{dtype}-chunk{chunk_shape[0]}.zarr" if compressor is None else f"./image-{dtype}-chunk{chunk_shape[0]}-lz4.zarr", chunk_shape=chunk_shape, zarr_kwargs=dict(overwrite=True, compressor=compressor), n_buffer=64, backend=backend, ) for m, gds_enabled in enumerate([False, True]): kvikio.defaults.compat_mode_reset(not gds_enabled) perf_write_float32 = benchmark( cupy_to_zarr, (img,), kwargs=kwargs, n_warmup=1, n_repeat=7, max_duration=max_duration, ) t = perf_write_float32.gpu_times write_time_means[i, j, k, m] = t.mean() write_time_stds[i, j, k, m] = t.std() print( f"Duration ({cp.dtype(dtype).name} write, {chunk_shape=}, {compressor=}, {gds_enabled=}): " # noqa: E501 f"{t.mean()} s +/- {t.std()} s" ) out_name = "write_times_lz4.npz" # auto-increment filename to avoid overwriting old results cnt = 1 while os.path.exists(out_name): out_name = f"write_times_lz4{cnt}.npz" cnt += 1 np.savez( out_name, write_time_means=write_time_means, write_time_stds=write_time_stds ) """ 421M 
image-uint8-chunk2048.zarr/ 295M image-uint8-chunk2048-lz4.zarr/ Output on local system: kvikio.defaults.compat_mode() = False kvikio.defaults.get_num_threads() = 16 kvikio task size = 4.0 MB Resolution level 1 Duration (uint8 write, chunk_shape=(512, 512, 3), compressor=None, gds_enabled=False): 0.9078736816406249 s +/- 0.014021576325619447 s Duration (uint8 write, chunk_shape=(512, 512, 3), compressor=None, gds_enabled=True): 0.7154334309895835 s +/- 0.012911493047009337 s Duration (uint8 write, chunk_shape=(512, 512, 3), compressor=LZ4NVCOMP, gds_enabled=False): 4.5858623046875 s +/- 0.0 s Duration (uint8 write, chunk_shape=(512, 512, 3), compressor=LZ4NVCOMP, gds_enabled=True): 4.7737607421875 s +/- 0.0 s Duration (uint8 write, chunk_shape=(1024, 1024, 3), compressor=None, gds_enabled=False): 0.33250243268694196 s +/- 0.033499521893803265 s Duration (uint8 write, chunk_shape=(1024, 1024, 3), compressor=None, gds_enabled=True): 0.1679712175641741 s +/- 0.00943557539273183 s Duration (uint8 write, chunk_shape=(1024, 1024, 3), compressor=LZ4NVCOMP, gds_enabled=False): 1.308169403076172 s +/- 0.009489436283604222 s Duration (uint8 write, chunk_shape=(1024, 1024, 3), compressor=LZ4NVCOMP, gds_enabled=True): 1.3726735026041668 s +/- 0.0047038287720149695 s Duration (uint8 write, chunk_shape=(2048, 2048, 3), compressor=None, gds_enabled=False): 0.25327674865722655 s +/- 0.06120098715153658 s Duration (uint8 write, chunk_shape=(2048, 2048, 3), compressor=None, gds_enabled=True): 0.17847256687709265 s +/- 0.008324480608522849 s Duration (uint8 write, chunk_shape=(2048, 2048, 3), compressor=LZ4NVCOMP, gds_enabled=False): 0.5187601710728237 s +/- 0.007874048300727562 s Duration (uint8 write, chunk_shape=(2048, 2048, 3), compressor=LZ4NVCOMP, gds_enabled=True): 0.45903962925502234 s +/- 0.019780733967690416 s Duration (uint8 write, chunk_shape=(4096, 4096, 3), compressor=None, gds_enabled=False): 0.2989522748674665 s +/- 0.01275530846360682 s Duration (uint8 write, chunk_shape=(4096, 4096, 3), compressor=None, gds_enabled=True): 0.3445836966378349 s +/- 0.010775083201675684 s Duration (uint8 write, chunk_shape=(4096, 4096, 3), compressor=LZ4NVCOMP, gds_enabled=False): 0.3000637948172433 s +/- 0.04060511008994888 s Duration (uint8 write, chunk_shape=(4096, 4096, 3), compressor=LZ4NVCOMP, gds_enabled=True): 0.2680468902587891 s +/- 0.023300006446218775 s shape: (13210, 9960, 3) stored as 520 tiles of shape (512, 512, 3) Duration (uint16 write, chunk_shape=(512, 512, 3), compressor=None, gds_enabled=False): 1.0883130187988281 s +/- 0.029807589266119452 s Duration (uint16 write, chunk_shape=(512, 512, 3), compressor=None, gds_enabled=True): 0.7361140238444012 s +/- 0.02239347882169216 s Duration (uint16 write, chunk_shape=(512, 512, 3), compressor=LZ4NVCOMP, gds_enabled=False): 4.56396484375 s +/- 0.0 s Duration (uint16 write, chunk_shape=(512, 512, 3), compressor=LZ4NVCOMP, gds_enabled=True): 4.76813818359375 s +/- 0.0 s Duration (uint16 write, chunk_shape=(1024, 1024, 3), compressor=None, gds_enabled=False): 0.45565206037248884 s +/- 0.14820532739770373 s Duration (uint16 write, chunk_shape=(1024, 1024, 3), compressor=None, gds_enabled=True): 0.31185867309570314 s +/- 0.013444509106645106 s Duration (uint16 write, chunk_shape=(1024, 1024, 3), compressor=LZ4NVCOMP, gds_enabled=False): 1.5407437337239582 s +/- 0.12621846871213385 s Duration (uint16 write, chunk_shape=(1024, 1024, 3), compressor=LZ4NVCOMP, gds_enabled=True): 1.298984100341797 s +/- 0.025558385966707807 s Duration (uint16 write, 
chunk_shape=(2048, 2048, 3), compressor=None, gds_enabled=False): 0.5751916896275112 s +/- 0.1870959869706265 s Duration (uint16 write, chunk_shape=(2048, 2048, 3), compressor=None, gds_enabled=True): 0.36480389404296876 s +/- 0.008747247240530139 s Duration (uint16 write, chunk_shape=(2048, 2048, 3), compressor=LZ4NVCOMP, gds_enabled=False): 0.7505391642252603 s +/- 0.19527720554541175 s Duration (uint16 write, chunk_shape=(2048, 2048, 3), compressor=LZ4NVCOMP, gds_enabled=True): 0.5359197431291852 s +/- 0.08704735860133013 s Duration (uint16 write, chunk_shape=(4096, 4096, 3), compressor=None, gds_enabled=False): 1.5224884440104167 s +/- 0.16454276798775797 s Duration (uint16 write, chunk_shape=(4096, 4096, 3), compressor=None, gds_enabled=True): 1.2984992370605468 s +/- 0.028939276482940306 s Duration (uint16 write, chunk_shape=(4096, 4096, 3), compressor=LZ4NVCOMP, gds_enabled=False): 0.8166022583007813 s +/- 0.19712617258152443 s Duration (uint16 write, chunk_shape=(4096, 4096, 3), compressor=LZ4NVCOMP, gds_enabled=True): 0.7342889607747396 s +/- 0.025796103217999546 s """ # noqa: E501
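For reference, a minimal sketch of inspecting the timings saved by the script above (the index order follows the `np.savez` call; the name `write_times_lz4.npz` assumes a first run, before the auto-increment renames the output):

```python
import numpy as np

results = np.load("write_times_lz4.npz")
means = results["write_time_means"]  # shape: (n_dtypes, n_chunk_shapes, n_compressors, 2)
stds = results["write_time_stds"]    # last axis is gds_enabled: [False, True]

# e.g. uint16 data, (2048, 2048, 3) chunks, LZ4NVCOMP compressor, GDS enabled
print(f"{means[0, 2, 1, 1]:.3f} s +/- {stds[0, 2, 1, 1]:.3f} s")
```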
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/benchmark_round_trip.py
import os from time import time import cupy as cp import kvikio import kvikio.defaults import numpy as np from cupyx.profiler import benchmark from demo_implementation import cupy_to_zarr, read_tiled import cucim.skimage.filters from cucim.core.operations.color import image_to_absorbance data_dir = os.environ.get("WHOLE_SLIDE_DATA_DIR", os.path.dirname("__file__")) fname = os.path.join(data_dir, "resize.tiff") if not os.path.exists(fname): raise RuntimeError(f"Could not find data file: {fname}") level = 0 max_duration = 8 compressor = None # set the number of threads to use kvikio.defaults.num_threads_reset(16) # Go back to 4MB task size in pread case kvikio.defaults.task_size_reset(4 * 1024 * 1024) def round_trip( fname, level=0, n_buffer=64, kernel_func=None, kernel_func_kwargs={}, apply_kernel_tilewise=True, out_dtype=cp.uint8, zarr_chunk_shape=(2048, 2048, 3), output_path=None, zarr_kwargs=dict(overwrite=True, compressor=None), verbose_times=False, ): if output_path is None: output_path = f"./image-{cp.dtype(out_dtype).name}.zarr" if apply_kernel_tilewise: tile_func = kernel_func tile_func_kwargs = kernel_func_kwargs else: tile_func = None tile_func_kwargs = {} if verbose_times: tstart = time() data_gpu = read_tiled( fname, levels=[level], backend="kvikio-pread", n_buffer=n_buffer, tile_func=tile_func, tile_func_kwargs=tile_func_kwargs, out_dtype=out_dtype, )[0] if verbose_times: dur_read = time() - tstart if not apply_kernel_tilewise: dur_read = time() - tstart print(f"{dur_read=}") else: dur_read_and_comp = time() - tstart print(f"{dur_read_and_comp=}") if not apply_kernel_tilewise: if verbose_times: tstart = time() data_gpu = kernel_func(data_gpu, **kernel_func_kwargs) if verbose_times: dur_comp = time() - tstart print(f"{dur_comp=}") if verbose_times: tstart = time() cupy_to_zarr( data_gpu, backend="dask", # 'kvikio-pwrite', output_path=output_path, chunk_shape=zarr_chunk_shape, zarr_kwargs=zarr_kwargs, ) if verbose_times: dur_write = time() - tstart print(f"{dur_write=}") return output_path gds_enabled = True apply_kernel_tilewise = True times = [] labels = [] n_buffer = 32 for zarr_chunk_shape in [ (512, 512, 3), (1024, 1024, 3), (2048, 2048, 3), (4096, 4096, 3), ]: for computation in ["absorbance", "median", "gaussian", "sobel"]: if computation is None: kernel_func = None kernel_func_kwargs = {} out_dtype = cp.uint8 elif computation == "absorbance": kernel_func = image_to_absorbance kernel_func_kwargs = {} out_dtype = cp.float32 elif computation == "median": kernel_func = cucim.skimage.filters.median kernel_func_kwargs = dict(footprint=cp.ones((5, 5, 1), dtype=bool)) out_dtype = cp.uint8 elif computation == "gaussian": kernel_func = cucim.skimage.filters.gaussian kernel_func_kwargs = dict(sigma=2.5, channel_axis=-1) out_dtype = cp.uint8 elif computation == "sobel": kernel_func = cucim.skimage.filters.sobel kernel_func_kwargs = dict(axis=(0, 1)) out_dtype = cp.float32 for apply_kernel_tilewise in [False, True]: for gds_enabled in [False, True]: kvikio.defaults.compat_mode_reset(not gds_enabled) assert kvikio.defaults.compat_mode() == (not gds_enabled) kwargs = dict( level=0, n_buffer=n_buffer, kernel_func=kernel_func, kernel_func_kwargs=kernel_func_kwargs, out_dtype=out_dtype, apply_kernel_tilewise=apply_kernel_tilewise, zarr_chunk_shape=zarr_chunk_shape, zarr_kwargs=dict(overwrite=True, compressor=compressor), verbose_times=False, ) perf = benchmark( round_trip, (fname,), kwargs=kwargs, n_warmup=1, n_repeat=100, max_duration=max_duration, ) t = perf.gpu_times kernel_description 
= ( "tiled" if apply_kernel_tilewise else "global" ) gds_description = "with GDS" if gds_enabled else "without GDS" label = f"{computation=}, {kernel_description}, chunk_shape={zarr_chunk_shape}, {gds_description}" # noqa: E501 print(f"Duration ({label}): {t.mean()} s +/- {t.std()} s") times.append(t.mean()) labels.append(label) out_name = "round_trip_times.npz" # auto-increment filename to avoid overwriting old results cnt = 1 while os.path.exists(out_name): out_name = f"round_trip_times{cnt}.npz" cnt += 1 np.savez(out_name, times=np.asarray(times), labels=np.asarray(labels)) """ on dgx-02 (gds_demo) apollo@dgx-02:/mnt/nvme0/cucim/gds-cucim-demo$ python benchmark_round_trip.py Duration (computation='absorbance', global, chunk_shape=(512, 512, 3), without GDS): 3.1191054687500004 s +/- 0.225595815853941 s Duration (computation='absorbance', global, chunk_shape=(512, 512, 3), with GDS): 2.1042802734375003 s +/- 0.027930034537340144 s Duration (computation='absorbance', tiled, chunk_shape=(512, 512, 3), without GDS): 3.141532592773437 s +/- 0.04939690170558264 s Duration (computation='absorbance', tiled, chunk_shape=(512, 512, 3), with GDS): 2.231423193359375 s +/- 0.16606144818711904 s Duration (computation='median', global, chunk_shape=(512, 512, 3), without GDS): 2.0818441894531245 s +/- 0.021839879963544123 s Duration (computation='median', global, chunk_shape=(512, 512, 3), with GDS): 2.035376928710938 s +/- 0.029007739628238723 s Duration (computation='median', tiled, chunk_shape=(512, 512, 3), without GDS): 2.567677185058594 s +/- 0.1512259361762735 s Duration (computation='median', tiled, chunk_shape=(512, 512, 3), with GDS): 2.258326318359375 s +/- 0.03280750045711541 s Duration (computation='gaussian', global, chunk_shape=(512, 512, 3), without GDS): 3.1225366821289064 s +/- 0.03711702097752233 s Duration (computation='gaussian', global, chunk_shape=(512, 512, 3), with GDS): 2.2253964843750005 s +/- 0.2325274037827292 s Duration (computation='gaussian', tiled, chunk_shape=(512, 512, 3), without GDS): 2.4777058593750003 s +/- 0.009240477773701704 s Duration (computation='gaussian', tiled, chunk_shape=(512, 512, 3), with GDS): 2.3227379394531256 s +/- 0.009206045295060663 s Duration (computation='sobel', global, chunk_shape=(512, 512, 3), without GDS): 3.2487342529296876 s +/- 0.27136693727180555 s Duration (computation='sobel', global, chunk_shape=(512, 512, 3), with GDS): 2.10892294921875 s +/- 0.02097221308256493 s Duration (computation='sobel', tiled, chunk_shape=(512, 512, 3), without GDS): 4.1089834798177085 s +/- 0.01915263510603932 s Duration (computation='sobel', tiled, chunk_shape=(512, 512, 3), with GDS): 2.938178039550781 s +/- 0.014859342085037834 s Duration (computation='absorbance', global, chunk_shape=(1024, 1024, 3), without GDS): 2.7858702392578127 s +/- 0.019651934337578617 s Duration (computation='absorbance', global, chunk_shape=(1024, 1024, 3), with GDS): 1.9969252726236981 s +/- 0.027131826130877872 s Duration (computation='absorbance', tiled, chunk_shape=(1024, 1024, 3), without GDS): 2.805119934082031 s +/- 0.04126104227305295 s Duration (computation='absorbance', tiled, chunk_shape=(1024, 1024, 3), with GDS): 2.000840478515625 s +/- 0.018744582209408268 s Duration (computation='median', global, chunk_shape=(1024, 1024, 3), without GDS): 1.1235912679036462 s +/- 0.017105640884020744 s Duration (computation='median', global, chunk_shape=(1024, 1024, 3), with GDS): 0.797370131272536 s +/- 0.15744958452772662 s Duration (computation='median', tiled, 
chunk_shape=(1024, 1024, 3), without GDS): 1.437654331752232 s +/- 0.021581864835220975 s Duration (computation='median', tiled, chunk_shape=(1024, 1024, 3), with GDS): 0.9701279130415483 s +/- 0.02103963138394191 s Duration (computation='gaussian', global, chunk_shape=(1024, 1024, 3), without GDS): 2.8036607666015625 s +/- 0.012992702272646189 s Duration (computation='gaussian', global, chunk_shape=(1024, 1024, 3), with GDS): 2.0812043945312504 s +/- 0.11895250430711447 s Duration (computation='gaussian', tiled, chunk_shape=(1024, 1024, 3), without GDS): 1.5445246407645088 s +/- 0.015315103050140895 s Duration (computation='gaussian', tiled, chunk_shape=(1024, 1024, 3), with GDS): 1.0669870971679687 s +/- 0.02188820345886527 s Duration (computation='sobel', global, chunk_shape=(1024, 1024, 3), without GDS): 2.9324288330078128 s +/- 0.22276139967810918 s Duration (computation='sobel', global, chunk_shape=(1024, 1024, 3), with GDS): 2.0065390625000004 s +/- 0.013617705944694957 s Duration (computation='sobel', tiled, chunk_shape=(1024, 1024, 3), without GDS): 3.6309766438802087 s +/- 0.05874521968221261 s Duration (computation='sobel', tiled, chunk_shape=(1024, 1024, 3), with GDS): 2.984893737792969 s +/- 0.25415558271992245 s Duration (computation='absorbance', global, chunk_shape=(2048, 2048, 3), without GDS): 2.9885695800781247 s +/- 0.026795940572500634 s Duration (computation='absorbance', global, chunk_shape=(2048, 2048, 3), with GDS): 1.965831502278646 s +/- 0.0168025999020846 s Duration (computation='absorbance', tiled, chunk_shape=(2048, 2048, 3), without GDS): 3.1813931884765627 s +/- 0.2428613856732194 s Duration (computation='absorbance', tiled, chunk_shape=(2048, 2048, 3), with GDS): 1.9277428181966145 s +/- 0.04373333981541544 s Duration (computation='median', global, chunk_shape=(2048, 2048, 3), without GDS): 1.0377198669433594 s +/- 0.012093431782665117 s Duration (computation='median', global, chunk_shape=(2048, 2048, 3), with GDS): 0.707420703125 s +/- 0.01371743462230204 s Duration (computation='median', tiled, chunk_shape=(2048, 2048, 3), without GDS): 1.4118775634765626 s +/- 0.18244415183128557 s Duration (computation='median', tiled, chunk_shape=(2048, 2048, 3), with GDS): 0.9218176491477272 s +/- 0.01591385754055359 s Duration (computation='gaussian', global, chunk_shape=(2048, 2048, 3), without GDS): 2.9058739013671877 s +/- 0.016099121671779952 s Duration (computation='gaussian', global, chunk_shape=(2048, 2048, 3), with GDS): 2.0523116210937498 s +/- 0.18981050500282015 s Duration (computation='gaussian', tiled, chunk_shape=(2048, 2048, 3), without GDS): 1.4494330357142857 s +/- 0.014561240480290236 s Duration (computation='gaussian', tiled, chunk_shape=(2048, 2048, 3), with GDS): 1.0200827880859376 s +/- 0.019787969209851694 s Duration (computation='sobel', global, chunk_shape=(2048, 2048, 3), without GDS): 3.119711364746094 s +/- 0.22031551398713953 s Duration (computation='sobel', global, chunk_shape=(2048, 2048, 3), with GDS): 1.958296569824219 s +/- 0.02411710745257342 s Duration (computation='sobel', tiled, chunk_shape=(2048, 2048, 3), without GDS): 3.8941464843749998 s +/- 0.049486723671958784 s Duration (computation='sobel', tiled, chunk_shape=(2048, 2048, 3), with GDS): 2.9164300537109376 s +/- 0.1746352677421199 s Duration (computation='absorbance', global, chunk_shape=(4096, 4096, 3), without GDS): 4.78259814453125 s +/- 0.08003375317781046 s Duration (computation='absorbance', global, chunk_shape=(4096, 4096, 3), with GDS): 2.1372686035156248 s +/- 
0.043582537001700936 s Duration (computation='absorbance', tiled, chunk_shape=(4096, 4096, 3), without GDS): 4.702880371093751 s +/- 0.036472302464017906 s Duration (computation='absorbance', tiled, chunk_shape=(4096, 4096, 3), with GDS): 2.1466302734375 s +/- 0.04854747600437549 s Duration (computation='median', global, chunk_shape=(4096, 4096, 3), without GDS): 1.2019552951388888 s +/- 0.041718545478099486 s Duration (computation='median', global, chunk_shape=(4096, 4096, 3), with GDS): 0.7432061026436942 s +/- 0.024861430042890365 s Duration (computation='median', tiled, chunk_shape=(4096, 4096, 3), without GDS): 1.5174084472656253 s +/- 0.19087705973449015 s Duration (computation='median', tiled, chunk_shape=(4096, 4096, 3), with GDS): 0.9612706076882103 s +/- 0.047138270409090556 s Duration (computation='gaussian', global, chunk_shape=(4096, 4096, 3), without GDS): 4.438681803385417 s +/- 0.07821072441047229 s Duration (computation='gaussian', global, chunk_shape=(4096, 4096, 3), with GDS): 2.292842724609375 s +/- 0.17273283392695543 s Duration (computation='gaussian', tiled, chunk_shape=(4096, 4096, 3), without GDS): 1.5610232805524553 s +/- 0.04472800111712806 s Duration (computation='gaussian', tiled, chunk_shape=(4096, 4096, 3), with GDS): 1.053441522216797 s +/- 0.047010837833830116 s Duration (computation='sobel', global, chunk_shape=(4096, 4096, 3), without GDS): 4.608963867187501 s +/- 0.27347076419226174 s Duration (computation='sobel', global, chunk_shape=(4096, 4096, 3), with GDS): 2.2000548339843746 s +/- 0.07589894027639234 s Duration (computation='sobel', tiled, chunk_shape=(4096, 4096, 3), without GDS): 5.430656982421875 s +/- 0.01277270507812478 s Duration (computation='sobel', tiled, chunk_shape=(4096, 4096, 3), with GDS): 3.1940713500976563 s +/- 0.19852391299904837 s """ # noqa: E501
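When the definitions above are available (for example, when the script is run interactively), a single configuration can also be exercised directly instead of sweeping the whole benchmark grid; a minimal sketch reusing the `round_trip` helper and `image_to_absorbance` from above:

```python
# one read -> tile-wise absorbance -> Zarr write pass, printing per-stage timings
out_path = round_trip(
    fname,
    level=0,
    kernel_func=image_to_absorbance,
    apply_kernel_tilewise=True,
    out_dtype=cp.float32,
    zarr_chunk_shape=(2048, 2048, 3),
    verbose_times=True,
)
print(f"wrote {out_path}")
```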
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/demo_implementation.py
import os import warnings import cupy as cp import dask.array as da import kvikio import kvikio.defaults import numpy as np import openslide import tifffile from kvikio.cufile import IOFuture from kvikio.zarr import GDSStore from tifffile import TiffFile from zarr import DirectoryStore from zarr.creation import init_array from cucim.clara import filesystem """ Developed with Dask 2022.05.2 zarr >= 2.13.2 kvikio >= 2022.10.00 (but had to use a recent development branch on my system to properly find libcufile.so) """ # noqa: E501 def get_n_tiles(page): """Create a tuple containing the number of tiles along each axis Parameters ---------- page : tifffile.tifffile.TiffPage The TIFF page. """ tdepth, tlength, twidth = (page.tiledepth, page.tilelength, page.tilewidth) assert page.shaped[-1] == page.samplesperpixel imdepth, imlength, imwidth, samples = page.shaped[-4:] n_width = (imwidth + twidth - 1) // twidth n_length = (imlength + tlength - 1) // tlength n_depth = (imdepth + tdepth - 1) // tdepth return (n_depth, n_length, n_width) def _get_tile_multiindex(page, index, n_tiles): """Position offsets into the output array for the current tile. Parameters ---------- page : tifffile.tifffile.TiffPage The TIFF page. index : int The linear index in page.dataoffsets (or page.databytecounts) corresponding to the current tile. n_tiles : int The total number of tiles as returned by ``get_n_tiles(page)`` Returns ------- multi_index : tuple of int Starting index for the tile along each axis in the output array. """ d, h, w = n_tiles wh = w * h whd = wh * d multi_index = ( index // whd, (index // wh) % d * page.tiledepth, (index // w) % h * page.tilelength, index % w * page.tilewidth, 0, ) return multi_index def _decode(data, tile_shape, truncation_slices): """Reshape data from a 1d buffer to the destination tile's shape. Parameters ---------- data : buffer or ndarray Data array (1d raveled tile data). tile_shape : tuple of int The shape of the tile truncation_slices : tuple of slice Slice objects used to truncate the reshaped data. Needed to trim the last tile along a given dimension when the page size is not an even multiple of the tile width. """ if not hasattr(data, "__cuda_array_interface__"): data = np.frombuffer(data, np.uint8) data.shape = tile_shape # truncate any tiles that extend past the image boundary if truncation_slices: data = data[truncation_slices] return data def _truncation_slices(check_needed, page_shape, offsets, tile_shape, n_tiles): """Determine any necessary truncation of boundary tiles. This is essentially generating a tuple of slices for doing: tile = tile[ : page_shape[0] - offsets[0], : page_shape[1] - offsets[1], : page_shape[2] - offsets[2], ] but has additional logic to return None in cases where truncation is not needed. Parameters ---------- check_needed : 3-tuple of bool Any axis whose page size is not evenly divisible by the tile size will have a True entry in this tuple. page_shape : tuple of int The shape of the current TIFF page (depth, length, width[, channels]) offsets : 3-tuple of int Starting corner indices for the current tile (depth, length, width). tile_shape : tuple of int Shape of a single tile (depth, length, width[, channels]). n_tiles : 3-tuple of int The number of tiles along each axis (depth, length, width) Returns ------- tile_slice : 3-tuple of slice or None Slices needed to generate a truncated tile via ``tile = tile[tile_slice]``. Returns None of no truncation is needed at the current tile. 
""" any_truncated = False slices = [] for ax in range(len(offsets)): if ( check_needed[ax] # don't need to truncate unless this is the last tile along an axis and (offsets[ax] // tile_shape[ax] == n_tiles[ax] - 1) ): any_truncated = True slices.append(slice(0, page_shape[ax] - offsets[ax])) else: slices.append(slice(None)) if any_truncated: return tuple(slices) return None def read_openslide(fname, level, clear_cache=True): """CPU-based reader using openslide followed by cupy.array.""" if clear_cache: assert filesystem.discard_page_cache(fname) slide = openslide.OpenSlide(fname) out = slide.read_region( location=(0, 0), level=level, size=slide.level_dimensions[level] ) # convert from PIL image to NumPy array out = np.asarray(out) # transfer to GPU, omitting the alpha channel return cp.array(out[..., :3]) def read_tifffile(fname, level, clear_cache=True): """CPU-based reader using tifffile followed by cupy.array.""" if clear_cache: assert filesystem.discard_page_cache(fname) return cp.array(tifffile.imread(fname, level=level)) def _get_aligned_read_props(offsets, bytecounts, alignment=4096): """Adjust offsets and bytecounts to get reads of the desired alignment. Parameters ---------- offsets : sequence of int The bytes offsets of each tile in the TIFF page. (i.e. tifffile's ``page.dataoffsets``). bytecounts : sequence of int The size in bytes of each tile in the TIFF page. (i.e. tifffile's ``page.databytecounts``). alignment : int, optional The desired alignment for the read operation. For GPUDirect Storage, this should be 4096. Notes ----- For GPUDirect Storage, it is important that reads occur width 4096-byte alignment and have a size that is a multiple of 4096-bytes. So, in this function we offset the read from the tile's start position back to the previous byte-aligned position. We then round up the total bytes to be read so that it is a multiple of `alignment`. """ offsets = np.asarray(offsets, dtype=int) bytecounts = np.asarray(bytecounts, dtype=int) rounded_offsets = (offsets // alignment) * alignment buffer_offsets = offsets - rounded_offsets rounded_bytecounts = buffer_offsets + bytecounts rounded_bytecounts = ( np.ceil(rounded_bytecounts / alignment).astype(int) * alignment ) # truncate last bytecounts entry to avoid possibly exceeding file extent last = offsets[-1] + bytecounts[-1] rounded_last = rounded_offsets[-1] + rounded_bytecounts[-1] rounded_bytecounts[-1] += last - rounded_last return rounded_offsets, rounded_bytecounts, buffer_offsets def _bulk_read_is_possible(offsets, bytecounts): """Check that all of the page's tile data is contiguous in memory.""" contiguous_offsets = np.array(offsets)[:-1] + np.array(bytecounts)[:-1] return np.all(contiguous_offsets == np.array(offsets)[1:]) def _get_bulk_offset_and_size(offsets, bytecounts, align=4096): """Determine offsets and bytecounts for a single bulk read of the desired alignment. Notes ----- See documentation of `_get_aligned_read_props` for explanation of the alignment requirements of GPUDirect Storage. 
""" if not _bulk_read_is_possible(offsets, bytecounts): raise RuntimeError("Tiles are not stored contiguously!") total_size = offsets[-1] - offsets[0] + bytecounts[-1] # reduce offset to closest value that is aligned to 4096 bytes offset_aligned = (offsets[0] // align) * align padded_bytes = offsets[0] - offset_aligned # increase size to keep the same final byte total_size += padded_bytes return offset_aligned, total_size, padded_bytes def read_alltiles_bulk(fname, level, clear_cache=True): """Read all tiles from a page in a single, bulk read operation. Can be used to compare to the performance of tile-based reads. """ if clear_cache: assert filesystem.discard_page_cache(fname) with TiffFile(fname) as tif: fh = kvikio.CuFile(fname, "r") page = tif.pages[level] offset, total_size, padded_bytes = _get_bulk_offset_and_size( page.dataoffsets, page.databytecounts ) output = cp.empty(total_size, dtype=cp.uint8) size = fh.read(output, file_offset=offset) if size != total_size: raise ValueError("failed to read the expected number of bytes") return output[padded_bytes:] def get_tile_buffers(fname, level, n_buffer): with TiffFile(fname) as tif: page = tif.pages[level] ( rounded_offsets, rounded_bytecounts, buffer_offsets, ) = _get_aligned_read_props( offsets=page.dataoffsets, bytecounts=page.databytecounts, alignment=4096, ) # Allocate buffer based on size of the largest tile after rounding # up to the nearest multiple of 4096. # (in the uncompressed TIFF case, all tiles have an equal number of # bytes) buffer_bytecount = rounded_bytecounts.max() assert buffer_bytecount % 4096 == 0 # note: tile_buffer is C-contiguous so tile_buffer[i] is contiguous tile_buffers = tuple( cp.empty(buffer_bytecount, dtype=cp.uint8) for n in range(n_buffer) ) return tile_buffers def read_tiled( fname, levels=[0], backend="kvikio-pread", n_buffer=100, tile_func=None, tile_func_kwargs={}, out_dtype=None, clear_cache=True, preregister_memory_buffers=False, tile_buffers=None, ): """Read an uncompressed, tiled multiresolution TIFF image to GPU memory. Parameters ---------- fname : str, optional File name. levels : sequence of int or 'all', optional The resolution levels to read. If 'all' then all levels in the file will be read. Level 0 corresponds to the highest resolution in the file. backend : {'kvikio-pread', 'kvikio-read', 'kvikio-raw_read'}, optional The approach to use when reading the file. The kvikio options can make use of GPUDirect Storage. Best performance was observed with the default 'kvikio-pread', which does asynchronous, multithreaded tile reads. n_buffer : int, optional Scratch space equal to `n_buffer` TIFF tiles will be allocated. Providing scratch space for multiple tiles helps the performance in the recommended asynchronous 'kvikio-pread' mode. tile_func : function, optional A CuPy-based function to apply to each tile after it is read. Must take as input a single CuPy array and return a CuPy array of the same shape (with possibly different dtype). The default of None does not apply any processing to each tile. tile_func_kwargs : dict, optional Keyword arguments to pass into `tile_func`. out_dtype : cp.dtype, optional The output dtype of the output array. If unspecified, it will be equal to the dtype of the input TIFF data. clear_cache : bool, optional If True, clear the file system's page cache prior to starting any read operations. This is necessary to avoid caching so that one gets accurate benchmark results if running this function repeatedly on the same input data. 
preregister_memory_buffers : bool, optional If True, explicitly preregister the memory buffers with kvikio via `kvikio.memory_register`. tile_buffers : tuple or list of ndarray If provided, use this preallocated set of tile buffers instead of allocating tile buffers within this function. Will also override n_buffer, setting it to ``n_buffer = len(tile_buffers)``. If the provided buffers are too small, a warning will be printed and new buffers will be allocated instead. Returns ------- out : list of cupy.ndarray One CuPy array per requested level in `levels`. """ if not os.path.isfile(fname): raise ValueError(f"file not found: {fname}") if clear_cache: assert filesystem.discard_page_cache(fname) if tile_buffers is not None: if not ( isinstance(tile_buffers, (tuple, list)) and all(isinstance(b, cp.ndarray) for b in tile_buffers) ): raise ValueError("tile_buffers must be a list of ndarray") # override user-provided n_buffer if len(tile_buffers) != n_buffer: n_buffer = len(tile_buffers) with TiffFile(fname) as tif: if isinstance(levels, int): levels = (levels,) if levels == "all": pages = tuple(tif.pages[n] for n in range(len(tif.pages))) elif isinstance(levels, (tuple, list)): pages = tuple(tif.pages[n] for n in levels) else: raise ValueError( "pages must be a tuple or list of int or the " "string 'all'" ) # sanity check: identical tile size for all TIFF pages # todo: is this always true? assert len(set(p.tilelength for p in pages)) == 1 assert len(set(p.tilewidth for p in pages)) == 1 outputs = [] fh = kvikio.CuFile(fname, "r") page = pages[0] n_chan = page.shaped[-1] for page in pages: if out_dtype is None: out_dtype = page.dtype out_array = cp.ndarray(shape=page.shape, dtype=out_dtype) ( rounded_offsets, rounded_bytecounts, buffer_offsets, ) = _get_aligned_read_props( offsets=page.dataoffsets, bytecounts=page.databytecounts, alignment=4096, ) # Allocate buffer based on size of the largest tile after rounding # up to the nearest multiple of 4096. # (in the uncompressed TIFF case, all tiles have an equal number of # bytes) buffer_bytecount = rounded_bytecounts.max() assert buffer_bytecount % 4096 == 0 tile_shape = (page.tilelength, page.tilewidth, n_chan) # note: tile_buffer is C-contiguous so tile_buffer[i] is contiguous if tile_buffers is None: tile_buffers = tuple( cp.empty(buffer_bytecount, dtype=cp.uint8) for n in range(n_buffer) ) elif tile_buffers[0].size < buffer_bytecount: warnings.warn( "reallocating tile buffers to accommodate data size" ) tile_buffers = tuple( cp.empty(buffer_bytecount, dtype=cp.uint8) for n in range(n_buffer) ) else: buffer_bytecount = tile_buffers[0].size if preregister_memory_buffers: for n in range(n_buffer): kvikio.memory_register(tile_buffers[n]) # compute number of tiles up-front to make _decode more efficient n_tiles = get_n_tiles(page) tile_shape = ( page.tiledepth, page.tilelength, page.tilewidth, page.samplesperpixel, ) keyframe = page.keyframe truncation_check_needed = ( (keyframe.imagedepth % page.tiledepth != 0), (keyframe.imagelength % page.tilelength != 0), (keyframe.imagewidth % page.tilewidth != 0), ) any_truncated = any(truncation_check_needed) page_shape = page.shaped[ 1: ] # Any reason to prefer page.keyframe.imagedepth, etc. here as opposed to page.shape or page.shaped? 
# noqa: E501 if backend == "kvikio-raw_read": def read_tile_raw(fh, tile_buffer, bytecount, offset): """returns the # of bytes read""" size = fh.raw_read( tile_buffer[:bytecount], file_offset=offset ) if size != bytecount: raise ValueError( "failed to read the expected number of bytes" ) return size kv_read = read_tile_raw elif backend == "kvikio-read": def read_tile(fh, tile_buffer, bytecount, offset): """returns the # of bytes read""" size = fh.read(tile_buffer[:bytecount], file_offset=offset) if size != bytecount: raise ValueError( "failed to read the expected number of bytes" ) return size kv_read = read_tile elif backend == "kvikio-pread": def read_tile_async(fh, tile_buffer, bytecount, offset): """returns a future""" future = fh.pread( tile_buffer[:bytecount], file_offset=offset ) # future.get() return future kv_read = read_tile_async else: raise ValueError(f"unrecognized backend: {backend}") # note: page.databytecounts contains the size of all tiles in # bytes. It will only vary in the case of compressed data for index, ( offset, tile_bytecount, rounded_bytecount, buffer_offset, ) in enumerate( zip( rounded_offsets, page.databytecounts, rounded_bytecounts, buffer_offsets, ) ): index_mod = index % n_buffer if index == 0: # initialize lists for storage of future results all_futures = [] all_tiles = [] all_slices = [] elif index_mod == 0: # process the prior group of n_buffer futures for tile, sl, future in zip( all_tiles, all_slices, all_futures ): if isinstance(future, IOFuture): size = future.get() if size != rounded_bytecount: raise ValueError( "failed to read the expected number of " "bytes" ) tile = tile[0] # omit depth axis if tile_func is None: out_array[sl] = tile else: out_array[sl] = tile_func(tile, **tile_func_kwargs) # reset the lists to prepare for the next n_buffer tiles all_futures = [] all_tiles = [] all_slices = [] # read a multiple of 4096 bytes into the current buffer at an # offset that is aligned to 4096 bytes read_output = kv_read( fh, tile_buffers[index_mod], rounded_bytecount, offset, ) # Determine offsets into `out_array` for the current tile # and determine slices to truncate the tile if needed. offset_indices = _get_tile_multiindex(page, index, n_tiles) (s, d, h, w, _) = offset_indices if any_truncated: trunc_sl = _truncation_slices( truncation_check_needed, page_shape, offset_indices[1:4], tile_shape, n_tiles, ) else: trunc_sl = None # Reads are aligned to 4096-byte boundaries, so buffer_offset # is needed to discard any initial bytes prior to the actual # tile start. 
buffer_start = buffer_offset buffer_end = buffer_start + tile_bytecount tile = _decode( tile_buffers[index_mod][buffer_start:buffer_end], tile_shape, trunc_sl, ) all_futures.append(read_output) all_tiles.append(tile) all_slices.append( (slice(h, h + tile_shape[1]), slice(w, w + tile_shape[2])) ) for tile, sl, future in zip(all_tiles, all_slices, all_futures): if isinstance(future, IOFuture): # make sure the buffer is filled with data future.get() tile = tile[0] # omit depth axis if tile_func is None: out_array[sl] = tile else: out_array[sl] = tile_func(tile, **tile_func_kwargs) outputs.append(out_array) if preregister_memory_buffers: for n in range(n_buffer): kvikio.memory_deregister(tile_buffers[n]) return outputs def _cupy_to_zarr_via_dask( image, output_path="./example-output.zarr", chunk_shape=(2048, 2048, 3), zarr_kwargs=dict(overwrite=False, compressor=None), ): """Write output to Zarr via GDSStore""" store = GDSStore(output_path) dask_image = da.from_array(image, chunks=chunk_shape) dask_image.to_zarr(store, meta_array=cp.empty(()), **zarr_kwargs) return dask_image def _cupy_to_zarr_kvikio_write_sync( image, output_path="./example-output.zarr", chunk_shape=(2048, 2048, 3), zarr_kwargs=dict(overwrite=False, compressor=None), backend="kvikio-raw_write", ): """Write output to Zarr via GDSStore""" # 1.) create a zarr store # 2.) call init_array to initialize Zarr .zarry metadata # this will also remove any existing files in output_path when # overwrite = True. output_path = os.path.realpath(output_path) store = DirectoryStore(output_path) init_array( store, shape=image.shape, chunks=chunk_shape, dtype=image.dtype, **zarr_kwargs, ) c0, c1, c2 = chunk_shape s0, s1, s2 = image.shape for i0, start0 in enumerate(range(0, s0, c0)): for i1, start1 in enumerate(range(0, s1, c1)): for i2, start2 in enumerate(range(0, s2, c2)): tile = image[ start0 : start0 + c0, start1 : start1 + c1, start2 : start2 + c2, ] if tile.shape == chunk_shape: # copy so the tile is contiguous in memory tile = tile.copy() else: pad_width = ( (0, c0 - tile.shape[0]), (0, c1 - tile.shape[1]), (0, c2 - tile.shape[2]), ) tile = cp.pad( tile, pad_width, mode="constant", constant_values=0 ) chunk_key = ".".join(map(str, (i0, i1, i2))) fname = os.path.join(output_path, chunk_key) with kvikio.CuFile(fname, "w") as fh: if backend == "kvikio-raw_write": size = fh.raw_write(tile) elif backend == "kvikio-write": size = fh.write(tile) else: raise ValueError(f"unknown backend {backend}") assert size == tile.nbytes return def cupy_to_zarr( image, output_path="./example-output.zarr", chunk_shape=(512, 512, 3), n_buffer=16, zarr_kwargs=dict(overwrite=False, compressor=None), backend="kvikio-pwrite", ): """Write output to Zarr via GDSStore""" if backend == "dask": return _cupy_to_zarr_via_dask( image, output_path=output_path, chunk_shape=chunk_shape, zarr_kwargs=zarr_kwargs, ) elif backend in ["kvikio-write", "kvikio-raw_write"]: return _cupy_to_zarr_kvikio_write_sync( image, output_path=output_path, chunk_shape=chunk_shape, zarr_kwargs=zarr_kwargs, backend=backend, ) elif backend != "kvikio-pwrite": raise ValueError(f"unrecognized backend: {backend}") # 1.) create a zarr store # 2.) call init_array to initialize Zarr .zarry metadata # this will also remove any existing files in output_path when # overwrite = True. 
output_path = os.path.realpath(output_path) store = DirectoryStore(output_path) init_array( store, shape=image.shape, chunks=chunk_shape, dtype=image.dtype, **zarr_kwargs, ) # asynchronous write using pwrite index = 0 c0, c1, c2 = chunk_shape s0, s1, s2 = image.shape tile_cache = cp.zeros((n_buffer,) + chunk_shape, dtype=image.dtype) for i0, start0 in enumerate(range(0, s0, c0)): for i1, start1 in enumerate(range(0, s1, c1)): for i2, start2 in enumerate(range(0, s2, c2)): index_mod = index % n_buffer if index == 0: # initialize lists for storage of future results all_handles = [] all_futures = [] elif index_mod == 0: for fh, future in zip(all_handles, all_futures): if isinstance(future, IOFuture): size = future.get() if size != tile_cache[0].nbytes: raise ValueError( "failed to write the expected number of " "bytes" ) fh.close() # reset the lists to prepare for the next n_buffer tiles all_futures = [] all_handles = [] tile = image[ start0 : start0 + c0, start1 : start1 + c1, start2 : start2 + c2, ] if tile.shape == chunk_shape: # copy so the tile is contiguous in memory tile_cache[index_mod] = tile else: pad_width = ( (0, c0 - tile.shape[0]), (0, c1 - tile.shape[1]), (0, c2 - tile.shape[2]), ) tile_cache[index_mod] = cp.pad( tile, pad_width, mode="constant", constant_values=0 ) chunk_key = ".".join(map(str, (i0, i1, i2))) fname = os.path.join(output_path, chunk_key) fh = kvikio.CuFile(fname, "w") future = fh.pwrite(tile_cache[index_mod]) all_futures.append(future) all_handles.append(fh) # assert written == a.nbytes index += 1 for fh, future in zip(all_handles, all_futures): if isinstance(future, IOFuture): size = future.get() if size != tile_cache[0].nbytes: raise ValueError("failed to write the expected number of bytes") fh.close() return
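A minimal sketch of how the two main helpers in this module, `read_tiled` and `cupy_to_zarr`, can be combined to read one resolution level into a CuPy array and write it back out as an uncompressed Zarr store (the file names are hypothetical; keyword arguments follow the signatures above):

```python
import cupy as cp

from demo_implementation import cupy_to_zarr, read_tiled

# read level 0 of an uncompressed, tiled TIFF into GPU memory via kvikio pread
image = read_tiled("resize.tiff", levels=[0], backend="kvikio-pread", n_buffer=64)[0]
assert isinstance(image, cp.ndarray)

# write the array to an uncompressed Zarr store using asynchronous pwrite
cupy_to_zarr(
    image,
    output_path="./example-output.zarr",
    chunk_shape=(2048, 2048, 3),
    zarr_kwargs=dict(overwrite=True, compressor=None),
    backend="kvikio-pwrite",
)
```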
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/lz4_nvcomp.py
import cupy as cp
import numpy as np
from kvikio.nvcomp import LZ4Manager
from numcodecs import registry
from numcodecs.abc import Codec


def ensure_ndarray(buf):
    if isinstance(buf, cp.ndarray):
        arr = buf
    elif hasattr(buf, "__cuda_array_interface__"):
        arr = cp.asarray(buf, copy=False)
    elif hasattr(buf, "__array_interface__"):
        arr = cp.asarray(np.asarray(buf))
    else:
        raise ValueError("expected a cupy.ndarray")
    return arr


def ensure_contiguous_ndarray(buf, max_buffer_size=None, flatten=True):
    """Convenience function to coerce `buf` to an ndarray-like array.

    Also ensures that the returned value exports fully contiguous memory,
    and supports the new-style buffer interface. If the optional
    max_buffer_size is provided, raise a ValueError if the number of bytes
    consumed by the returned array exceeds this value.

    Parameters
    ----------
    buf : ndarray-like, array-like, or bytes-like
        A numpy array like object such as numpy.ndarray, cupy.ndarray, or
        any object exporting a buffer interface.
    max_buffer_size : int
        If specified, the largest allowable value of arr.nbytes, where arr
        is the returned array.
    flatten : bool
        If True, the array is flattened.

    Returns
    -------
    arr : cupy.ndarray
        A cupy.ndarray, sharing memory with `buf`.

    Notes
    -----
    This function will not create a copy under any circumstances, it is
    guaranteed to return a view on memory exported by `buf`.
    """
    arr = ensure_ndarray(buf)

    # check for object arrays, these are just memory pointers, actual memory
    # holding item data is scattered elsewhere
    if arr.dtype == object:
        raise TypeError("object arrays are not supported")

    # check for datetime or timedelta ndarray, the buffer interface doesn't
    # support those
    if arr.dtype.kind in "Mm":
        arr = arr.view(np.int64)

    # check memory is contiguous, if so flatten
    if arr.flags.c_contiguous or arr.flags.f_contiguous:
        if flatten:
            # can flatten without copy
            arr = arr.reshape(-1, order="A")
    else:
        raise ValueError("an array with contiguous memory is required")

    if max_buffer_size is not None and arr.nbytes > max_buffer_size:
        msg = "Codec does not support buffers of > {} bytes".format(
            max_buffer_size
        )
        raise ValueError(msg)

    return arr


def ndarray_copy(src, dst):
    """Copy the contents of the array from `src` to `dst`."""
    if dst is None:
        # no-op
        return src

    # ensure ndarray like
    src = ensure_ndarray(src)
    dst = ensure_ndarray(dst)

    # flatten source array
    src = src.reshape(-1, order="A")

    # ensure same data type
    if dst.dtype != object:
        src = src.view(dst.dtype)

    # reshape source to match destination
    if src.shape != dst.shape:
        if dst.flags.f_contiguous:
            order = "F"
        else:
            order = "C"
        src = src.reshape(dst.shape, order=order)

    # copy via cupy
    cp.copyto(dst, src)

    return dst


class LZ4NVCOMP(Codec):
    """Codec providing compression using LZ4 on the GPU via nvCOMP.

    Parameters
    ----------
    acceleration : int
        Acceleration level. The larger the acceleration value, the faster
        the algorithm, but also the lesser the compression.

    See Also
    --------
    numcodecs.zstd.Zstd, numcodecs.blosc.Blosc
    """

    codec_id = "lz4nvcomp"
    max_buffer_size = 0x7E000000

    def __init__(
        self, compressor=None
    ):  # , acceleration=1 (nvcomp lz4 doesn't take an acceleration argument)
        # self.acceleration = acceleration
        self._compressor = None

    def encode(self, buf):
        buf = ensure_contiguous_ndarray(buf, self.max_buffer_size)
        if (
            self._compressor is None
        ):  # or self._compressor.data_type != buf.dtype:
            self._compressor = LZ4Manager(data_type=buf.dtype)
        return self._compressor.compress(buf)  # , self.acceleration)

    def decode(self, buf, out=None):
        buf = ensure_contiguous_ndarray(buf, self.max_buffer_size)
        if (
            self._compressor is None
        ):  # or self._compressor.data_type != buf.dtype:
            self._compressor = LZ4Manager(data_type=buf.dtype)
        decompressed = self._compressor.decompress(buf)
        return ndarray_copy(decompressed, out)

    def __repr__(self):
        r = "%s" % type(self).__name__
        return r


registry.register_codec(LZ4NVCOMP)
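A minimal round-trip sketch of the codec on its own (this assumes a CUDA device and a kvikio build with nvCOMP support; the array contents are arbitrary):

```python
import cupy as cp

from lz4_nvcomp import LZ4NVCOMP

codec = LZ4NVCOMP()
arr = (cp.random.rand(512, 512, 3) * 255).astype(cp.uint8)

compressed = codec.encode(arr)          # LZ4 compression on the GPU via nvCOMP
restored = cp.empty_like(arr)
codec.decode(compressed, out=restored)  # decompress into a preallocated array
print(f"{arr.nbytes} bytes -> {compressed.nbytes} bytes compressed")
```

Registering the codec with numcodecs (the `registry.register_codec` call at the bottom of the module) is what allows Zarr to reconstruct it from the stored `lz4nvcomp` codec id when a compressed array is read back.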
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/benchmark_read.py
import math import os import kvikio import kvikio.defaults import numpy as np from cupyx.profiler import benchmark from demo_implementation import ( get_n_tiles, get_tile_buffers, read_openslide, read_tifffile, read_tiled, ) from tifffile import TiffFile data_dir = os.environ.get("WHOLE_SLIDE_DATA_DIR", os.path.dirname("__file__")) fname = os.path.join(data_dir, "resize.tiff") if not os.path.exists(fname): raise RuntimeError(f"Could not find data file: {fname}") level = 0 max_duration = 8 with TiffFile(fname) as tif: page = tif.pages[level] page_shape = page.shape tile_shape = (page.tilelength, page.tilewidth, page.samplesperpixel) total_tiles = math.prod(get_n_tiles(page)) print(f"Resolution level {level}\n") print(f"\tshape: {page_shape}") print(f"\tstored as {total_tiles} tiles of shape {tile_shape}") # make sure we are not in compatibility mode to ensure cuFile is being used # (when compat_mode() is True, POSIX will be used instead of libcufile.so) kvikio.defaults.compat_mode_reset(False) assert not kvikio.defaults.compat_mode() # set the number of threads to use kvikio.defaults.num_threads_reset(16) print(f"\t{kvikio.defaults.compat_mode() = }") print(f"\t{kvikio.defaults.get_num_threads() = }") preregister_buffers = False if preregister_buffers: tile_buffers = get_tile_buffers(fname, level, n_buffer=256) for b in tile_buffers: kvikio.memory_register(b) else: tile_buffers = None # print(f"\tkvikio task size = {kvikio.defaults.task_size()/1024**2} MB") times = [] labels = [] perf_openslide = benchmark( read_openslide, (fname, level), n_warmup=0, n_repeat=100, max_duration=max_duration, ) times.append(perf_openslide.gpu_times.mean()) labels.append("openslide") print(f"duration ({labels[-1]}) = {times[-1]}") perf_tifffile = benchmark( read_tifffile, (fname, level), n_warmup=0, n_repeat=100, max_duration=max_duration, ) times.append(perf_tifffile.gpu_times.mean()) labels.append("tifffile") print(f"duration ({labels[-1]}) = {times[-1]}") for gds_enabled in [False, True]: kvikio.defaults.compat_mode_reset(not gds_enabled) assert kvikio.defaults.compat_mode() == (not gds_enabled) p = benchmark( read_tiled, (fname, [level]), kwargs=dict(backend="kvikio-raw_read", tile_buffers=tile_buffers), n_warmup=1, n_repeat=100, max_duration=max_duration, ) if gds_enabled: perf_kvikio_raw = p else: perf_kvikio_raw_nogds = p times.append(p.gpu_times.mean()) labels.append(f"kvikio-read_raw ({gds_enabled=})") print(f"duration ({labels[-1]}) = {times[-1]}") for mm in [8, 16, 32, 64]: kvikio.defaults.task_size_reset(4096 * mm) p = benchmark( read_tiled, (fname, [level]), kwargs=dict(backend="kvikio-read", tile_buffers=tile_buffers), n_warmup=1, n_repeat=100, max_duration=max_duration, ) if gds_enabled: perf_kvikio_read = p else: perf_kvikio_read_nogds = p times.append(p.gpu_times.mean()) labels.append( f"kvikio-read (task size={kvikio.defaults.task_size() // 1024} kB)" f" ({gds_enabled=})" ) print(f"duration ({labels[-1]}) = {times[-1]}") # Go back to 4MB task size in pread case kvikio.defaults.task_size_reset(512 * 1024) if gds_enabled: perf_kvikio_pread = [] else: perf_kvikio_pread_nogds = [] n_buffers = [1, 4, 16, 64, 256] for n_buffer in n_buffers: p = benchmark( read_tiled, (fname, [level]), kwargs=dict( backend="kvikio-pread", n_buffer=n_buffer, tile_buffers=tile_buffers, ), n_warmup=1, n_repeat=100, max_duration=max_duration, ) if gds_enabled: perf_kvikio_pread.append(p) else: perf_kvikio_pread_nogds.append(p) times.append(p.gpu_times.mean()) labels.append(f"kvikio-pread ({n_buffer=}) ({gds_enabled=})") 
print(f"duration ({labels[-1]}) = {times[-1]}") if preregister_buffers: for b in tile_buffers: kvikio.memory_deregister(b) kvikio.defaults.compat_mode_reset(False) out_name = "read_times.npz" # auto-increment filename to avoid overwriting old results cnt = 1 while os.path.exists(out_name): out_name = f"read_times{cnt}.npz" cnt += 1 np.savez(out_name, times=np.asarray(times), labels=np.asarray(labels)) """ Resolution level 0 with Cache clearing, but reads are not 4096-byte aligned shape: (26420, 19920, 3) stored as 2028 tiles of shape (512, 512, 3) kvikio.defaults.compat_mode() = False kvikio.defaults.get_num_threads() = 18 kvikio task size = 4.0 MB duration (openslide) = 28.921716796875 duration (tifffile) = 3.818202718098958 duration (tiled-tifffile) = 3.885939778645833 duration (kvikio-read_raw (gds_enabled=False)) = 3.4184929199218748 duration (kvikio-read (gds_enabled=False)) = 3.813303955078125 duration (kvikio-pread (n_buffer=1) (gds_enabled=False)) = 3.9369333496093746 duration (kvikio-pread (n_buffer=2) (gds_enabled=False)) = 4.028409342447917 duration (kvikio-pread (n_buffer=4) (gds_enabled=False)) = 2.785054626464844 duration (kvikio-pread (n_buffer=8) (gds_enabled=False)) = 1.7379150390625 duration (kvikio-pread (n_buffer=16) (gds_enabled=False)) = 1.2908187103271485 duration (kvikio-pread (n_buffer=32) (gds_enabled=False)) = 1.0635023193359374 duration (kvikio-pread (n_buffer=64) (gds_enabled=False)) = 0.9369119762073862 duration (kvikio-pread (n_buffer=128) (gds_enabled=False)) = 0.8773154449462891 duration (kvikio-read_raw (gds_enabled=True)) = 3.4003018391927085 duration (kvikio-read (gds_enabled=True)) = 3.763134847005208 duration (kvikio-pread (n_buffer=1) (gds_enabled=True)) = 3.7581602376302086 duration (kvikio-pread (n_buffer=2) (gds_enabled=True)) = 4.107709065755208 duration (kvikio-pread (n_buffer=4) (gds_enabled=True)) = 2.609207336425781 duration (kvikio-pread (n_buffer=8) (gds_enabled=True)) = 1.744682902018229 duration (kvikio-pread (n_buffer=16) (gds_enabled=True)) = 1.2838030700683594 duration (kvikio-pread (n_buffer=32) (gds_enabled=True)) = 1.05522587890625 duration (kvikio-pread (n_buffer=64) (gds_enabled=True)) = 0.9214399691495029 duration (kvikio-pread (n_buffer=128) (gds_enabled=True)) = 0.8695069885253907 Resolution level 0 with 4096-byte aligned reads shape: (26420, 19920, 3) stored as 2028 tiles of shape (512, 512, 3) kvikio.defaults.compat_mode() = False kvikio.defaults.get_num_threads() = 18 kvikio task size = 4.0 MB duration (kvikio-read_raw (gds_enabled=False)) = 3.4100815429687494 duration (kvikio-read (gds_enabled=False)) = 3.8238279622395837 duration (kvikio-pread (n_buffer=1) (gds_enabled=False)) = 3.740669270833333 duration (kvikio-pread (n_buffer=4) (gds_enabled=False)) = 2.672812255859375 duration (kvikio-pread (n_buffer=16) (gds_enabled=False)) = 1.3131573791503905 duration (kvikio-pread (n_buffer=64) (gds_enabled=False)) = 0.9273524225408379 duration (kvikio-pread (n_buffer=256) (gds_enabled=False)) = 0.8461123250325521 duration (kvikio-read_raw (gds_enabled=True)) = 4.179492513020834 duration (kvikio-read (gds_enabled=True)) = 4.889711263020834 duration (kvikio-pread (n_buffer=1) (gds_enabled=True)) = 4.816523600260417 duration (kvikio-pread (n_buffer=4) (gds_enabled=True)) = 2.2351694824218753 duration (kvikio-pread (n_buffer=16) (gds_enabled=True)) = 1.1082978149414064 duration (kvikio-pread (n_buffer=64) (gds_enabled=True)) = 0.670870166015625 duration (kvikio-pread (n_buffer=256) (gds_enabled=True)) = 0.5998859683766086 pread with 
default 4MB "task size" Resolution level 0 shape: (26420, 19920, 3) stored as 2028 tiles of shape (512, 512, 3) kvikio.defaults.compat_mode() = False kvikio.defaults.get_num_threads() = 18 kvikio task size = 4 MB duration (kvikio-pread (n_buffer=1) (gds_enabled=True)) = 4.8583107096354174 duration (kvikio-pread (n_buffer=4) (gds_enabled=True)) = 2.1224323242187504 duration (kvikio-pread (n_buffer=16) (gds_enabled=True)) = 1.1164629991319446 duration (kvikio-pread (n_buffer=64) (gds_enabled=True)) = 0.6734547526041668 duration (kvikio-pread (n_buffer=256) (gds_enabled=True)) = 0.601566697064568 (cucim) grelee@grelee-dt:~/Dropbox/NVIDIA/demos/gds/gds-cucim-demo$ python benchmark_read.py Resolution level 0 pread with 64kB "task size" shape: (26420, 19920, 3) stored as 2028 tiles of shape (512, 512, 3) kvikio.defaults.compat_mode() = False kvikio.defaults.get_num_threads() = 18 kvikio task size = 0.064 MB duration (kvikio-pread (n_buffer=1) (gds_enabled=True)) = 3.0912179565429687 duration (kvikio-pread (n_buffer=4) (gds_enabled=True)) = 1.3932305145263673 duration (kvikio-pread (n_buffer=16) (gds_enabled=True)) = 0.9027577819824221 duration (kvikio-pread (n_buffer=64) (gds_enabled=True)) = 0.7827104492187501 duration (kvikio-pread (n_buffer=256) (gds_enabled=True)) = 0.756464599609375 """ # noqa: E501
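# Optional quick check of the results that were just saved above. This is a
# small illustrative sketch (not part of the original benchmark); it only uses
# `out_name`, plus the `times` and `labels` lists already defined in this script.
saved = np.load(out_name)
for label, t in sorted(
    zip(saved["labels"], saved["times"]), key=lambda x: float(x[1])
):
    print(f"{label}: {float(t):.3f} s")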
0
rapidsai_public_repos/cucim/examples/python
rapidsai_public_repos/cucim/examples/python/gds_whole_slide/benchmark_zarr_write.py
import math import os import cupy as cp import kvikio.defaults import numpy as np from cupyx.profiler import benchmark from demo_implementation import cupy_to_zarr, get_n_tiles, read_tiled from tifffile import TiffFile from cucim.core.operations.color import image_to_absorbance data_dir = os.environ.get("WHOLE_SLIDE_DATA_DIR", os.path.dirname("__file__")) fname = os.path.join(data_dir, "resize.tiff") if not os.path.exists(fname): raise RuntimeError(f"Could not find data file: {fname}") # make sure we are not in compatibility mode to ensure cuFile is being used # (when compat_mode() is True, POSIX will be used instead of libcufile.so) kvikio.defaults.compat_mode_reset(False) assert not kvikio.defaults.compat_mode() # set the number of threads to use kvikio.defaults.num_threads_reset(16) print(f"\t{kvikio.defaults.compat_mode() = }") print(f"\t{kvikio.defaults.get_num_threads() = }") print(f"\tkvikio task size = {kvikio.defaults.task_size()/1024**2} MB") n_buffer = 128 max_duration = 4 level = 0 compressor = None with TiffFile(fname) as tif: page = tif.pages[level] page_shape = page.shape tile_shape = (page.tilelength, page.tilewidth, page.samplesperpixel) total_tiles = math.prod(get_n_tiles(page)) print(f"Resolution level {level}\n") print(f"\tshape: {page_shape}") print(f"\tstored as {total_tiles} tiles of shape {tile_shape}") # read the uint8 TIFF kwargs = dict(levels=[level], backend="kvikio-pread", n_buffer=n_buffer) image_gpu = read_tiled(fname, **kwargs)[0] # read the uint8 TIFF applying tile-wise processing to give a float32 array kwargs = dict( levels=[level], backend="kvikio-pread", n_buffer=n_buffer, tile_func=image_to_absorbance, out_dtype=cp.float32, ) preprocessed_gpu = read_tiled(fname, **kwargs)[0] # benchmark writing these CuPy outputs to Zarr with various chunk sizes dtypes = ["uint8", "float32"] chunk_shapes = [(512, 512, 3), (1024, 1024, 3), (2048, 2048, 3)] backends = ["dask", "kvikio-raw_write", "kvikio-pwrite"] kvikio.defaults.num_threads_reset(16) write_time_means = np.zeros( ((len(dtypes), len(chunk_shapes), len(backends), 2)), dtype=float ) write_time_stds = np.zeros_like(write_time_means) for i, dtype in enumerate(dtypes): if dtype == "uint8": img = image_gpu assert img.dtype == cp.uint8 elif dtype == "float32": img = preprocessed_gpu assert img.dtype == cp.float32 else: raise NotImplementedError("only testing for uint8 and float32") for j, chunk_shape in enumerate(chunk_shapes): for k, backend in enumerate(backends): kwargs = dict( output_path=f"./image-{dtype}.zarr", chunk_shape=chunk_shape, zarr_kwargs=dict(overwrite=True, compressor=compressor), n_buffer=64, backend=backend, ) for m, gds_enabled in enumerate([False, True]): kvikio.defaults.compat_mode_reset(not gds_enabled) perf_write_float32 = benchmark( cupy_to_zarr, (img,), kwargs=kwargs, n_warmup=1, n_repeat=7, max_duration=15, ) t = perf_write_float32.gpu_times write_time_means[i, j, k, m] = t.mean() write_time_stds[i, j, k, m] = t.std() print( f"Duration ({cp.dtype(dtype).name} write, {chunk_shape=}, {backend=}, {gds_enabled=}): " # noqa: E501 f"{t.mean()} s +/- {t.std()} s" ) out_name = "write_times.npz" # auto-increment filename to avoid overwriting old results cnt = 1 while os.path.exists(out_name): out_name = f"write_times{cnt}.npz" cnt += 1 np.savez( out_name, write_time_means=write_time_means, write_time_stds=write_time_stds ) """ Output on local system: """
0
rapidsai_public_repos/cucim/examples/python
rapidsai_public_repos/cucim/examples/python/tiff_image/main.py
#
# Copyright (c) 2021, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import json

import numpy as np
from PIL import Image

from cucim import CuImage

img = CuImage("image.tif")

print(img.is_loaded)  # True if image data is loaded & available.
print(img.device)  # A device type.
print(img.ndim)  # The number of dimensions.
print(img.dims)  # A string containing a list of dimensions being requested.
print(img.shape)  # A tuple of dimension sizes (in the order of `dims`).
print(img.size("XYC"))  # Returns size as a tuple for the given dimension order.
print(img.dtype)  # The data type of the image.
print(img.channel_names)  # A channel name list.
print(img.spacing())  # Returns physical size in tuple.
print(
    img.spacing_units()
)  # Units for each spacing element (size is same with `ndim`).
print(img.origin)  # Physical location of (0, 0, 0) (size is always 3).
print(img.direction)  # Direction cosines (size is always 3x3).
print(img.coord_sys)  # Coordinate frame in which the direction cosines are
# measured. Available Coordinate frame is not finalized yet.

# Returns a set of associated image names.
print(img.associated_images)
# Returns a dict that includes resolution information.
print(json.dumps(img.resolutions, indent=2))
# A metadata object as `dict`
print(json.dumps(img.metadata, indent=2))
# A raw metadata string.
print(img.raw_metadata)

# Read whole slide at the lowest resolution
resolutions = img.resolutions
level_count = resolutions["level_count"]

# Note: 'level' is at 3rd parameter (OpenSlide has it at 2nd parameter)
region = img.read_region(
    location=(10000, 10000), size=(512, 512), level=level_count - 1
)

region.save("test.ppm")
Image.fromarray(np.asarray(region))
0
rapidsai_public_repos/cucim/examples
rapidsai_public_repos/cucim/examples/cpp/CMakeLists.txt
#
# Copyright (c) 2020-2021, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

################################################################################
# Add executable: tiff_image
################################################################################
add_executable(tiff_image tiff_image/main.cpp)

set_target_properties(tiff_image
    PROPERTIES
        CXX_STANDARD 17
        CXX_STANDARD_REQUIRED YES
        CXX_EXTENSIONS NO
)

target_compile_features(tiff_image PRIVATE ${CUCIM_REQUIRED_FEATURES})
# Use generator expression to avoid `nvcc fatal : Value '-std=c++17' is not defined for option 'Werror'`
target_compile_options(tiff_image PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-Werror -Wall -Wextra>)
target_link_libraries(tiff_image
    PRIVATE
        ${CUCIM_PACKAGE_NAME}
        deps::fmt
)
0
rapidsai_public_repos/cucim/examples
rapidsai_public_repos/cucim/examples/cpp/CMakeLists.txt.examples.release.in
# # Copyright (c) 2020, NVIDIA CORPORATION. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # CUDA_STANDARD 17 is supported from CMAKE 3.18 # : https://cmake.org/cmake/help/v3.18/prop_tgt/CUDA_STANDARD.html cmake_minimum_required(VERSION 3.18) project(cucim-cpp-examples VERSION @VERSION@ DESCRIPTION "cuCIM CPP examples" LANGUAGES CUDA CXX) # Set default build type set(DEFAULT_BUILD_TYPE "Release") if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES) message(STATUS "Setting build type to '${DEFAULT_BUILD_TYPE}' as none was specified.") set(CMAKE_BUILD_TYPE "${DEFAULT_BUILD_TYPE}" CACHE STRING "Choose the type of build." FORCE) # Set the possible values of build type for cmake-gui set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Debug" "Release" "MinSizeRel" "RelWithDebInfo") endif () # Set default output directories if (NOT CMAKE_ARCHIVE_OUTPUT_DIRECTORY) set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/lib") endif() if (NOT CMAKE_LIBRARY_OUTPUT_DIRECTORY) set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/lib") endif() if (NOT CMAKE_RUNTIME_OUTPUT_DIRECTORY) set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/bin") endif() ################################################################################ # Find cucim package ################################################################################ if (NOT CUCIM_SDK_PATH) get_filename_component(CUCIM_SDK_PATH "${CMAKE_SOURCE_DIR}/../.." ABSOLUTE) message("CUCIM_SDK_PATH is not set. Using '${CUCIM_SDK_PATH}'") else() message("CUCIM_SDK_PATH is set to ${CUCIM_SDK_PATH}") endif() find_package(cucim CONFIG REQUIRED HINTS ${CUCIM_SDK_PATH}/install/lib/cmake/cucim) ################################################################################ # Add executable: tiff_image ################################################################################ add_executable(tiff_image tiff_image/main.cpp) set_target_properties(tiff_image PROPERTIES CXX_STANDARD 17 CXX_STANDARD_REQUIRED YES CXX_EXTENSIONS NO CUDA_STANDARD 17 CUDA_STANDARD_REQUIRED YES CUDA_EXTENSIONS NO CUDA_SEPARABLE_COMPILATION ON CUDA_RUNTIME_LIBRARY Shared ) target_compile_features(tiff_image PRIVATE cxx_std_17 cuda_std_17) # Use generator expression to avoid `nvcc fatal : Value '-std=c++17' is not defined for option 'Werror'` target_compile_options(tiff_image PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-Werror -Wall -Wextra>) target_link_libraries(tiff_image PRIVATE cucim::cucim )
0
rapidsai_public_repos/cucim/examples/cpp
rapidsai_public_repos/cucim/examples/cpp/tiff_image/main.cpp
/* * Copyright (c) 2020, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include <cucim/cuimage.h> #include <fmt/format.h> #include <fmt/ranges.h> int main(int argc, char* argv[]) { // Check the number of parameters if (argc < 3) { fmt::print(stderr, "Usage: {} INPUT_FILE_PATH OUTPUT_FOLDER\n", argv[0]); return 1; } const char* input_file_path = argv[1]; const char* output_folder_path = argv[2]; cucim::CuImage image = cucim::CuImage(input_file_path); fmt::print("is_loaded: {}\n", image.is_loaded()); fmt::print("device: {}\n", std::string(image.device())); fmt::print("metadata: {}\n", image.metadata()); fmt::print("dims: {}\n", image.dims()); fmt::print("shape: ({})\n", fmt::join(image.shape(), ", ")); fmt::print("size('XYC'): ({})\n", fmt::join(image.size("XYC"), ", ")); fmt::print("channel_names: ({})\n", fmt::join(image.channel_names(), ", ")); auto resolutions = image.resolutions(); fmt::print("level_count: {}\n", resolutions.level_count()); fmt::print("level_dimensions: ({})\n", fmt::join(resolutions.level_dimensions(), ", ")); fmt::print("level_dimension (level 0): ({})\n", fmt::join(resolutions.level_dimension(0), ", ")); fmt::print("level_downsamples: ({})\n", fmt::join(resolutions.level_downsamples(), ", ")); fmt::print("level_tile_sizes: ({})\n", fmt::join(resolutions.level_tile_sizes(), ", ")); auto associated_images = image.associated_images(); fmt::print("associated_images: ({})\n", fmt::join(associated_images, ", ")); fmt::print("#macro\n"); auto associated_image = image.associated_image("macro"); fmt::print("is_loaded: {}\n", associated_image.is_loaded()); fmt::print("device: {}\n", std::string(associated_image.device())); fmt::print("metadata: {}\n", associated_image.metadata()); fmt::print("dims: {}\n", associated_image.dims()); fmt::print("shape: ({})\n", fmt::join(associated_image.shape(), ", ")); fmt::print("size('XYC'): ({})\n", fmt::join(associated_image.size("XYC"), ", ")); fmt::print("channel_names: ({})\n", fmt::join(associated_image.channel_names(), ", ")); fmt::print("\n"); cucim::CuImage region = image.read_region({ 10000, 10000 }, { 1024, 1024 }, 0); fmt::print("is_loaded: {}\n", region.is_loaded()); fmt::print("device: {}\n", std::string(region.device())); fmt::print("metadata: {}\n", region.metadata()); fmt::print("dims: {}\n", region.dims()); fmt::print("shape: ({})\n", fmt::join(region.shape(), ", ")); fmt::print("size('XY'): ({})\n", fmt::join(region.size("XY"), ", ")); fmt::print("channel_names: ({})\n", fmt::join(region.channel_names(), ", ")); resolutions = region.resolutions(); fmt::print("level_count: {}\n", resolutions.level_count()); fmt::print("level_dimensions: ({})\n", fmt::join(resolutions.level_dimensions(), ", ")); fmt::print("level_dimension (level 0): ({})\n", fmt::join(resolutions.level_dimension(0), ", ")); fmt::print("level_downsamples: ({})\n", fmt::join(resolutions.level_downsamples(), ", ")); fmt::print("level_tile_sizes: ({})\n", fmt::join(resolutions.level_tile_sizes(), ", ")); associated_images = region.associated_images(); 
fmt::print("associated_images: ({})\n", fmt::join(associated_images, ", ")); fmt::print("\n"); region.save(fmt::format("{}/output.ppm", output_folder_path)); cucim::CuImage region2 = image.read_region({ 5000, 5000 }, { 1024, 1024 }, 1); region2.save(fmt::format("{}/output2.ppm", output_folder_path)); // Batch loading image // You need to create shared pointer for cucim::CuImage. Otherwise it will cause std::bad_weak_ptr exception. auto batch_image = std::make_shared<cucim::CuImage>(input_file_path); auto region3 = std::make_shared<cucim::CuImage>(image.read_region( { 0, 0, 100, 200, 300, 300, 400, 400, 500, 500, 600, 600, 700, 700, 800, 800, 900, 900, 1000, 1000 }, { 200, 200 }, 0 /*level*/, 2 /*num_workers*/, 2 /*batch_size*/, false /*drop_last*/, 1 /*prefetch_factor*/, false /*shuffle*/, 0 /*seed*/)); for (auto batch : *region3) { fmt::print("shape: {}, data size:{}\n", fmt::join(batch->shape(), ", "), batch->container().size()); } return 0; }
0
rapidsai_public_repos
rapidsai_public_repos/rmm/.pre-commit-config.yaml
# Copyright (c) 2022-2023, NVIDIA CORPORATION. repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v4.3.0 hooks: - id: trailing-whitespace - id: end-of-file-fixer - repo: https://github.com/rapidsai/dependency-file-generator rev: v1.5.1 hooks: - id: rapids-dependency-file-generator args: ["--clean"] - repo: https://github.com/PyCQA/isort rev: 5.12.0 hooks: - id: isort args: ["--settings-path=python/pyproject.toml"] files: python/.* types_or: [python, cython, pyi] - repo: https://github.com/ambv/black rev: 22.3.0 hooks: - id: black args: ["--config=python/pyproject.toml"] - repo: https://github.com/MarcoGorelli/cython-lint rev: v0.15.0 hooks: - id: cython-lint - repo: https://github.com/pre-commit/mirrors-clang-format rev: v16.0.6 hooks: - id: clang-format types_or: [c, c++, cuda] args: ["-fallback-style=none", "-style=file", "-i"] - repo: https://github.com/sirosen/texthooks rev: 0.4.0 hooks: - id: fix-smartquotes exclude: | (?x)^( ^benchmarks/utilities/cxxopts.hpp ) - repo: https://github.com/codespell-project/codespell rev: v2.2.4 hooks: - id: codespell exclude: | (?x)^( pyproject.toml| benchmarks/utilities/cxxopts.hpp ) - repo: local hooks: - id: cmake-format name: cmake-format entry: ./scripts/run-cmake-format.sh cmake-format language: python types: [cmake] # Note that pre-commit autoupdate does not update the versions # of dependencies, so we'll have to update this manually. additional_dependencies: - cmakelang==0.6.13 - id: cmake-lint name: cmake-lint entry: ./scripts/run-cmake-format.sh cmake-lint language: python types: [cmake] # Note that pre-commit autoupdate does not update the versions # of dependencies, so we'll have to update this manually. additional_dependencies: - cmakelang==0.6.13 - id: doxygen-check name: doxygen-check entry: ./scripts/doxygen.sh types_or: [file] language: system pass_filenames: false verbose: true - repo: https://github.com/astral-sh/ruff-pre-commit rev: v0.0.278 hooks: - id: ruff files: python/.*$ default_language_version: python: python3
0
rapidsai_public_repos
rapidsai_public_repos/rmm/pyproject.toml
[tool.codespell]
# note: pre-commit passes explicit lists of files here, which this skip file list doesn't override -
# this is only to allow you to run codespell interactively
skip = "./pyproject.toml,./.git,./.github,./cpp/build,.*egg-info.*,./.mypy_cache,./benchmarks/utilities/cxxopts.hpp"
# ignore short words, and typename parameters like OffsetT
ignore-regex = "\\b(.{1,4}|[A-Z]\\w*T)\\b"
ignore-words-list = "inout"
builtin = "clear"
quiet-level = 3

[tool.ruff]
select = ["E", "F", "W"]
ignore = [
  # whitespace before :
  "E203",
]
fixable = ["ALL"]
exclude = [
  # TODO: Remove this in a follow-up where we fix __all__.
  "__init__.py",
]
line-length = 79
0
rapidsai_public_repos
rapidsai_public_repos/rmm/.clangd
# https://clangd.llvm.org/config

# Apply a config conditionally to all C files
If:
  PathMatch: .*\.(c|h)$

---

# Apply a config conditionally to all C++ files
If:
  PathMatch: .*\.(c|h)pp

---

# Apply a config conditionally to all CUDA files
If:
  PathMatch: .*\.cuh?
CompileFlags:
  Add:
    - "-x"
    - "cuda"
    # No error on unknown CUDA versions
    - "-Wno-unknown-cuda-version"
    # Allow variadic CUDA functions
    - "-Xclang=-fcuda-allow-variadic-functions"
Diagnostics:
  Suppress:
    - "variadic_device_fn"
    - "attributes_not_allowed"

---

# Tweak the clangd parse settings for all files
CompileFlags:
  Add:
    # report all errors
    - "-ferror-limit=0"
    - "-fmacro-backtrace-limit=0"
    - "-ftemplate-backtrace-limit=0"
    # Skip the CUDA version check
    - "--no-cuda-version-check"
  Remove:
    # remove gcc's -fcoroutines
    - -fcoroutines
    # remove nvc++ flags unknown to clang
    - "-gpu=*"
    - "-stdpar*"
    # remove nvcc flags unknown to clang
    - "-arch*"
    - "-gencode*"
    - "--generate-code*"
    - "-ccbin*"
    - "-t=*"
    - "--threads*"
    - "-Xptxas*"
    - "-Xcudafe*"
    - "-Xfatbin*"
    - "-Xcompiler*"
    - "--diag-suppress*"
    - "--diag_suppress*"
    - "--compiler-options*"
    - "--expt-extended-lambda"
    - "--expt-relaxed-constexpr"
    - "-forward-unknown-to-host-compiler"
    - "-Werror=cross-execution-space-call"
0
rapidsai_public_repos
rapidsai_public_repos/rmm/CMakeLists.txt
# ============================================================================= # Copyright (c) 2018-2023, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the License # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express # or implied. See the License for the specific language governing permissions and limitations under # the License. # ============================================================================= cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR) include(fetch_rapids.cmake) include(rapids-cmake) include(rapids-cpm) include(rapids-export) include(rapids-find) project( RMM VERSION 24.02.00 LANGUAGES CXX) # Write the version header rapids_cmake_write_version_file(include/rmm/version_config.hpp) # ################################################################################################## # * build type ------------------------------------------------------------------------------------- # Set a default build type if none was specified rapids_cmake_build_type(Release) # ################################################################################################## # * build options ---------------------------------------------------------------------------------- option(BUILD_TESTS "Configure CMake to build tests" ON) option(BUILD_BENCHMARKS "Configure CMake to build (google) benchmarks" OFF) set(RMM_LOGGING_LEVEL "INFO" CACHE STRING "Choose the logging level.") set_property(CACHE RMM_LOGGING_LEVEL PROPERTY STRINGS "TRACE" "DEBUG" "INFO" "WARN" "ERROR" "CRITICAL" "OFF") # Set logging level. Must go before including gtests and benchmarks. 
Set the possible values of # build type for cmake-gui message(STATUS "RMM: RMM_LOGGING_LEVEL = '${RMM_LOGGING_LEVEL}'") # cudart can be statically linked or dynamically linked the python ecosystem wants dynamic linking option(CUDA_STATIC_RUNTIME "Statically link the CUDA runtime" OFF) # ################################################################################################## # * compiler options ------------------------------------------------------------------------------- # find packages we depend on rapids_find_package( CUDAToolkit REQUIRED BUILD_EXPORT_SET rmm-exports INSTALL_EXPORT_SET rmm-exports) # ################################################################################################## # * dependencies ----------------------------------------------------------------------------------- # add third party dependencies using CPM rapids_cpm_init() include(cmake/thirdparty/get_fmt.cmake) include(cmake/thirdparty/get_spdlog.cmake) include(cmake/thirdparty/get_libcudacxx.cmake) include(cmake/thirdparty/get_thrust.cmake) # ################################################################################################## # * library targets -------------------------------------------------------------------------------- add_library(rmm INTERFACE) add_library(rmm::rmm ALIAS rmm) target_include_directories(rmm INTERFACE "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>" "$<INSTALL_INTERFACE:include>") if(CUDA_STATIC_RUNTIME) message(STATUS "RMM: Enabling static linking of cudart") target_link_libraries(rmm INTERFACE CUDA::cudart_static) target_compile_definitions(rmm INTERFACE RMM_STATIC_CUDART) else() target_link_libraries(rmm INTERFACE CUDA::cudart) endif() target_link_libraries(rmm INTERFACE libcudacxx::libcudacxx) target_link_libraries(rmm INTERFACE rmm::Thrust) target_link_libraries(rmm INTERFACE fmt::fmt-header-only) target_link_libraries(rmm INTERFACE spdlog::spdlog_header_only) target_link_libraries(rmm INTERFACE dl) target_compile_features(rmm INTERFACE cxx_std_17 $<BUILD_INTERFACE:cuda_std_17>) target_compile_definitions(rmm INTERFACE LIBCUDACXX_ENABLE_EXPERIMENTAL_MEMORY_RESOURCE) # ################################################################################################## # * tests and benchmarks --------------------------------------------------------------------------- if((BUILD_TESTS OR BUILD_BENCHMARKS) AND CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME) include(rapids-cuda) rapids_cuda_init_architectures(RMM) enable_language(CUDA) # Since RMM only enables CUDA optionally we need to manually include the file that # rapids_cuda_init_architectures relies on `project` calling include("${CMAKE_PROJECT_RMM_INCLUDE}") message(STATUS "RMM: Building benchmarks with GPU Architectures: ${CMAKE_CUDA_ARCHITECTURES}") endif() # ################################################################################################## # * add tests -------------------------------------------------------------------------------------- if(BUILD_TESTS AND CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME) include(cmake/thirdparty/get_gtest.cmake) include(CTest) # calls enable_testing() add_subdirectory(tests) endif() # ################################################################################################## # * add benchmarks --------------------------------------------------------------------------------- if(BUILD_BENCHMARKS AND CMAKE_PROJECT_NAME STREQUAL PROJECT_NAME) include(${rapids-cmake-dir}/cpm/gbench.cmake) rapids_cpm_gbench() add_subdirectory(benchmarks) endif() # 
################################################################################################## # * install targets -------------------------------------------------------------------------------- include(CPack) # install export targets install(TARGETS rmm EXPORT rmm-exports) install(DIRECTORY include/rmm/ DESTINATION include/rmm) install(FILES ${RMM_BINARY_DIR}/include/rmm/version_config.hpp DESTINATION include/rmm) set(doc_string [=[ Provide targets for RMM: RAPIDS Memory Manager. The goal of the [RMM](https://github.com/rapidsai/rmm) is to provide: A common interface that allows customizing device and host memory allocation A collection of implementations of the interface A collection of data structures that use the interface for memory allocation ]=]) set(code_string [=[ if(NOT TARGET rmm::Thrust) thrust_create_target(rmm::Thrust FROM_OPTIONS) endif() ]=]) rapids_export( INSTALL rmm EXPORT_SET rmm-exports GLOBAL_TARGETS rmm NAMESPACE rmm:: DOCUMENTATION doc_string FINAL_CODE_BLOCK code_string) # ################################################################################################## # * build export ----------------------------------------------------------------------------------- rapids_export( BUILD rmm EXPORT_SET rmm-exports GLOBAL_TARGETS rmm NAMESPACE rmm:: DOCUMENTATION doc_string FINAL_CODE_BLOCK code_string) # ################################################################################################## # * make documentation ----------------------------------------------------------------------------- add_custom_command( OUTPUT RMM_DOXYGEN WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/doxygen COMMAND doxygen Doxyfile VERBATIM COMMENT "Custom command for RMM doxygen docs") add_custom_target( rmm_doc DEPENDS RMM_DOXYGEN COMMENT "Target for the custom command to build the RMM doxygen docs") # ################################################################################################## # * make gdb helper scripts ------------------------------------------------------------------------ # optionally assemble Thrust pretty-printers if(Thrust_SOURCE_DIR) configure_file(scripts/load-pretty-printers.in load-pretty-printers @ONLY) endif()
0
rapidsai_public_repos
rapidsai_public_repos/rmm/fetch_rapids.cmake
# =============================================================================
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
if(NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/RMM_RAPIDS.cmake)
  file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-24.02/RAPIDS.cmake
       ${CMAKE_CURRENT_BINARY_DIR}/RMM_RAPIDS.cmake)
endif()
include(${CMAKE_CURRENT_BINARY_DIR}/RMM_RAPIDS.cmake)
0
rapidsai_public_repos
rapidsai_public_repos/rmm/gcovr.cfg
exclude=build/.*
exclude=tests/.*
exclude=benchmarks/.*
html=yes
html-details=yes
sort-percentage=yes
exclude-throw-branches=yes
0
rapidsai_public_repos
rapidsai_public_repos/rmm/README.md
# <div align="left"><img src="img/rapids_logo.png" width="90px"/>&nbsp;RMM: RAPIDS Memory Manager</div> **NOTE:** For the latest stable [README.md](https://github.com/rapidsai/rmm/blob/main/README.md) ensure you are on the `main` branch. ## Resources - [RMM Reference Documentation](https://docs.rapids.ai/api/rmm/stable/): Python API reference, tutorials, and topic guides. - [librmm Reference Documentation](https://docs.rapids.ai/api/librmm/stable/): C/C++ CUDA library API reference. - [Getting Started](https://rapids.ai/start.html): Instructions for installing RMM. - [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate. - [GitHub repository](https://github.com/rapidsai/rmm): Download the RMM source code. - [Issue tracker](https://github.com/rapidsai/rmm/issues): Report issues or request features. ## Overview Achieving optimal performance in GPU-centric workflows frequently requires customizing how host and device memory are allocated. For example, using "pinned" host memory for asynchronous host <-> device memory transfers, or using a device memory pool sub-allocator to reduce the cost of dynamic device memory allocation. The goal of the RAPIDS Memory Manager (RMM) is to provide: - A common interface that allows customizing [device](#device_memory_resource) and [host](#host_memory_resource) memory allocation - A collection of [implementations](#available-resources) of the interface - A collection of [data structures](#device-data-structures) that use the interface for memory allocation For information on the interface RMM provides and how to use RMM in your C++ code, see [below](#using-rmm-in-c). For a walkthrough about the design of the RAPIDS Memory Manager, read [Fast, Flexible Allocation for NVIDIA CUDA with RAPIDS Memory Manager](https://developer.nvidia.com/blog/fast-flexible-allocation-for-cuda-with-rapids-memory-manager/) on the NVIDIA Developer Blog. ## Installation ### Conda RMM can be installed with Conda ([miniconda](https://conda.io/miniconda.html), or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel: ```bash conda install -c rapidsai -c conda-forge -c nvidia rmm cuda-version=12.0 ``` We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD of our latest development branch. Note: RMM is supported only on Linux, and only tested with Python versions 3.9 and 3.10. Note: The RMM package from Conda requires building with GCC 9 or later. Otherwise, your application may fail to build. See the [Get RAPIDS version picker](https://rapids.ai/start.html) for more OS and version info. 
## Building from Source ### Get RMM Dependencies Compiler requirements: * `gcc` version 9.3+ * `nvcc` version 11.4+ * `cmake` version 3.26.4+ CUDA/GPU requirements: * CUDA 11.4+ * Pascal architecture or better You can obtain CUDA from [https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads) Python requirements: * `scikit-build` * `cuda-python` * `cython` For more details, see [pyproject.toml](python/pyproject.toml) ### Script to build RMM from source To install RMM from source, ensure the dependencies are met and follow the steps below: - Clone the repository and submodules ```bash $ git clone --recurse-submodules https://github.com/rapidsai/rmm.git $ cd rmm ``` - Create the conda development environment `rmm_dev` ```bash # create the conda environment (assuming in base `rmm` directory) $ conda env create --name rmm_dev --file conda/environments/all_cuda-118_arch-x86_64.yaml # activate the environment $ conda activate rmm_dev ``` - Build and install `librmm` using cmake & make. CMake depends on the `nvcc` executable being on your path or defined in `CUDACXX` environment variable. ```bash $ mkdir build # make a build directory $ cd build # enter the build directory $ cmake .. -DCMAKE_INSTALL_PREFIX=/install/path # configure cmake ... use $CONDA_PREFIX if you're using Anaconda $ make -j # compile the library librmm.so ... '-j' will start a parallel job using the number of physical cores available on your system $ make install # install the library librmm.so to '/install/path' ``` - Building and installing `librmm` and `rmm` using build.sh. Build.sh creates build dir at root of git repository. build.sh depends on the `nvcc` executable being on your path or defined in `CUDACXX` environment variable. ```bash $ ./build.sh -h # Display help and exit $ ./build.sh -n librmm # Build librmm without installing $ ./build.sh -n rmm # Build rmm without installing $ ./build.sh -n librmm rmm # Build librmm and rmm without installing $ ./build.sh librmm rmm # Build and install librmm and rmm ``` - To run tests (Optional): ```bash $ cd build (if you are not already in build directory) $ make test ``` - Build, install, and test the `rmm` python package, in the `python` folder: ```bash $ python setup.py build_ext --inplace $ python setup.py install $ pytest -v ``` Done! You are ready to develop for the RMM OSS project. ### Caching third-party dependencies RMM uses [CPM.cmake](https://github.com/TheLartians/CPM.cmake) to handle third-party dependencies like spdlog, Thrust, GoogleTest, GoogleBenchmark. In general you won't have to worry about it. If CMake finds an appropriate version on your system, it uses it (you can help it along by setting `CMAKE_PREFIX_PATH` to point to the installed location). Otherwise those dependencies will be downloaded as part of the build. If you frequently start new builds from scratch, consider setting the environment variable `CPM_SOURCE_CACHE` to an external download directory to avoid repeated downloads of the third-party dependencies. ## Using RMM in a downstream CMake project The installed RMM library provides a set of config files that makes it easy to integrate RMM into your own CMake project. In your `CMakeLists.txt`, just add ```cmake find_package(rmm [VERSION]) # ... target_link_libraries(<your-target> (PRIVATE|PUBLIC) rmm::rmm) ``` Since RMM is a header-only library, this does not actually link RMM, but it makes the headers available and pulls in transitive dependencies. 
If RMM is not installed in a default location, use `CMAKE_PREFIX_PATH` or `rmm_ROOT` to point to its location. One of RMM's dependencies is the Thrust library, so the above automatically pulls in `Thrust` by means of a dependency on the `rmm::Thrust` target. By default it uses the standard configuration of Thrust. If you want to customize it, you can set the variables `THRUST_HOST_SYSTEM` and `THRUST_DEVICE_SYSTEM`; see [Thrust's CMake documentation](https://github.com/NVIDIA/thrust/blob/main/thrust/cmake/README.md). # Using RMM in C++ The first goal of RMM is to provide a common interface for device and host memory allocation. This allows both _users_ and _implementers_ of custom allocation logic to program to a single interface. To this end, RMM defines two abstract interface classes: - [`rmm::mr::device_memory_resource`](#device_memory_resource) for device memory allocation - [`rmm::mr::host_memory_resource`](#host_memory_resource) for host memory allocation These classes are based on the [`std::pmr::memory_resource`](https://en.cppreference.com/w/cpp/memory/memory_resource) interface class introduced in C++17 for polymorphic memory allocation. ## `device_memory_resource` `rmm::mr::device_memory_resource` is the base class that defines the interface for allocating and freeing device memory. It has two key functions: 1. `void* device_memory_resource::allocate(std::size_t bytes, cuda_stream_view s)` - Returns a pointer to an allocation of at least `bytes` bytes. 2. `void device_memory_resource::deallocate(void* p, std::size_t bytes, cuda_stream_view s)` - Reclaims a previous allocation of size `bytes` pointed to by `p`. - `p` *must* have been returned by a previous call to `allocate(bytes)`, otherwise behavior is undefined It is up to a derived class to provide implementations of these functions. See [available resources](#available-resources) for example `device_memory_resource` derived classes. Unlike `std::pmr::memory_resource`, `rmm::mr::device_memory_resource` does not allow specifying an alignment argument. All allocations are required to be aligned to at least 256B. Furthermore, `device_memory_resource` adds an additional `cuda_stream_view` argument to allow specifying the stream on which to perform the (de)allocation. ## `cuda_stream_view` and `cuda_stream` `rmm::cuda_stream_view` is a simple non-owning wrapper around a CUDA `cudaStream_t`. This wrapper's purpose is to provide strong type safety for stream types. (`cudaStream_t` is an alias for a pointer, which can lead to ambiguity in APIs when it is assigned `0`.) All RMM stream-ordered APIs take a `rmm::cuda_stream_view` argument. `rmm::cuda_stream` is a simple owning wrapper around a CUDA `cudaStream_t`. This class provides RAII semantics (constructor creates the CUDA stream, destructor destroys it). An `rmm::cuda_stream` can never represent the CUDA default stream or per-thread default stream; it only ever represents a single non-default stream. `rmm::cuda_stream` cannot be copied, but can be moved. ## `cuda_stream_pool` `rmm::cuda_stream_pool` provides fast access to a pool of CUDA streams. This class can be used to create a set of `cuda_stream` objects whose lifetime is equal to the `cuda_stream_pool`. Using the stream pool can be faster than creating the streams on the fly. The size of the pool is configurable. Depending on this size, multiple calls to `cuda_stream_pool::get_stream()` may return instances of `rmm::cuda_stream_view` that represent identical CUDA streams. 
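A minimal sketch of using a stream pool (the pool size of 16 and the variable names are illustrative, not values from the documentation):

```c++
#include <rmm/cuda_stream_pool.hpp>
#include <rmm/cuda_stream_view.hpp>

rmm::cuda_stream_pool stream_pool{16};  // 16 streams created up front

// Fast: returns a view of one of the pre-created streams.
rmm::cuda_stream_view s1 = stream_pool.get_stream();

// With more requests than streams in the pool, some of the returned views
// will refer to the same underlying CUDA stream.
rmm::cuda_stream_view s2 = stream_pool.get_stream();
```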
### Thread Safety All current device memory resources are thread safe unless documented otherwise. More specifically, calls to memory resource `allocate()` and `deallocate()` methods are safe with respect to calls to either of these functions from other threads. They are _not_ thread safe with respect to construction and destruction of the memory resource object. Note that a class `thread_safe_resource_adapter` is provided which can be used to adapt a memory resource that is not thread safe to be thread safe (as described above). This adapter is not needed with any current RMM device memory resources. ### Stream-ordered Memory Allocation `rmm::mr::device_memory_resource` is a base class that provides stream-ordered memory allocation. This allows optimizations such as re-using memory deallocated on the same stream without the overhead of synchronization. A call to `device_memory_resource::allocate(bytes, stream_a)` returns a pointer that is valid to use on `stream_a`. Using the memory on a different stream (say `stream_b`) is Undefined Behavior unless the two streams are first synchronized, for example by using `cudaStreamSynchronize(stream_a)` or by recording a CUDA event on `stream_a` and then calling `cudaStreamWaitEvent(stream_b, event)`. The stream specified to `device_memory_resource::deallocate` should be a stream on which it is valid to use the deallocated memory immediately for another allocation. Typically this is the stream on which the allocation was *last* used before the call to `deallocate`. The passed stream may be used internally by a `device_memory_resource` for managing available memory with minimal synchronization, and it may also be synchronized at a later time, for example using a call to `cudaStreamSynchronize()`. For this reason, it is Undefined Behavior to destroy a CUDA stream that is passed to `device_memory_resource::deallocate`. If the stream on which the allocation was last used has been destroyed before calling `deallocate` or it is known that it will be destroyed, it is likely better to synchronize the stream (before destroying it) and then pass a different stream to `deallocate` (e.g. the default stream). Note that device memory data structures such as `rmm::device_buffer` and `rmm::device_uvector` follow these stream-ordered memory allocation semantics and rules. For further information about stream-ordered memory allocation semantics, read [Using the NVIDIA CUDA Stream-Ordered Memory Allocator](https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/) on the NVIDIA Developer Blog. ### Available Resources RMM provides several `device_memory_resource` derived classes to satisfy various user requirements. For more detailed information about these resources, see their respective documentation. #### `cuda_memory_resource` Allocates and frees device memory using `cudaMalloc` and `cudaFree`. #### `managed_memory_resource` Allocates and frees device memory using `cudaMallocManaged` and `cudaFree`. Note that `managed_memory_resource` cannot be used with NVIDIA Virtual GPU Software (vGPU, for use with virtual machines or hypervisors) because [NVIDIA CUDA Unified Memory is not supported by NVIDIA vGPU](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#cuda-open-cl-support-vgpu). #### `pool_memory_resource` A coalescing, best-fit pool sub-allocator. #### `fixed_size_memory_resource` A memory resource that can only allocate a single fixed size. Average allocation and deallocation cost is constant. 
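As a rough sketch of how a fixed-size resource might be layered on an upstream `cuda_memory_resource` (assuming the constructor takes an upstream pointer and a block size; the 1 MiB block size below is illustrative, not a documented default):

```c++
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/fixed_size_memory_resource.hpp>

rmm::mr::cuda_memory_resource cuda_mr;

// Every allocation from this resource is serviced from a block of the same fixed size.
rmm::mr::fixed_size_memory_resource<rmm::mr::cuda_memory_resource> fixed_mr{
    &cuda_mr, 1 << 20 /* illustrative 1 MiB block size */};

void* p = fixed_mr.allocate(1000);  // returns a 1 MiB block on the default stream
fixed_mr.deallocate(p, 1000);
```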
#### `binning_memory_resource` Configurable to use multiple upstream memory resources for allocations that fall within different bin sizes. Often configured with multiple bins backed by `fixed_size_memory_resource`s and a single `pool_memory_resource` for allocations larger than the largest bin size. ### Default Resources and Per-device Resources RMM users commonly need to configure a `device_memory_resource` object to use for all allocations where another resource has not explicitly been provided. A common example is configuring a `pool_memory_resource` to use for all allocations to get fast dynamic allocation. To enable this use case, RMM provides the concept of a "default" `device_memory_resource`. This resource is used when another is not explicitly provided. Accessing and modifying the default resource is done through two functions: - `device_memory_resource* get_current_device_resource()` - Returns a pointer to the default resource for the current CUDA device. - The initial default memory resource is an instance of `cuda_memory_resource`. - This function is thread safe with respect to concurrent calls to it and `set_current_device_resource()`. - For more explicit control, you can use `get_per_device_resource()`, which takes a device ID. - `device_memory_resource* set_current_device_resource(device_memory_resource* new_mr)` - Updates the default memory resource pointer for the current CUDA device to `new_mr` - Returns the previous default resource pointer - If `new_mr` is `nullptr`, then resets the default resource to `cuda_memory_resource` - This function is thread safe with respect to concurrent calls to it and `get_current_device_resource()` - For more explicit control, you can use `set_per_device_resource()`, which takes a device ID. #### Example ```c++ rmm::mr::cuda_memory_resource cuda_mr; // Construct a resource that uses a coalescing best-fit pool allocator rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr}; rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr` rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource(); // Points to `pool_mr` ``` #### Multiple Devices A `device_memory_resource` should only be used when the active CUDA device is the same device that was active when the `device_memory_resource` was created. Otherwise behavior is undefined. If a `device_memory_resource` is used with a stream associated with a different CUDA device than the device for which the memory resource was created, behavior is undefined. Creating a `device_memory_resource` for each device requires care to set the current device before creating each resource, and to maintain the lifetime of the resources as long as they are set as per-device resources. Here is an example loop that creates `unique_ptr`s to `pool_memory_resource` objects for each device and sets them as the per-device resource for that device. 
```c++ std::vector<unique_ptr<pool_memory_resource>> per_device_pools; for(int i = 0; i < N; ++i) { cudaSetDevice(i); // set device i before creating MR // Use a vector of unique_ptr to maintain the lifetime of the MRs per_device_pools.push_back(std::make_unique<pool_memory_resource>()); // Set the per-device resource for device i set_per_device_resource(cuda_device_id{i}, &per_device_pools.back()); } ``` Note that the CUDA device that is current when creating a `device_memory_resource` must also be current any time that `device_memory_resource` is used to deallocate memory, including in a destructor. This affects RAII classes like `rmm::device_buffer` and `rmm::device_uvector`. Here's an (incorrect) example that assumes the above example loop has been run to create a `pool_memory_resource` for each device. A correct example adds a call to `cudaSetDevice(0)` on the line of the error comment. ```c++ { RMM_CUDA_TRY(cudaSetDevice(0)); rmm::device_buffer buf_a(16); { RMM_CUDA_TRY(cudaSetDevice(1)); rmm::device_buffer buf_b(16); } // Error: when buf_a is destroyed, the current device must be 0, but it is 1 } ``` ### Allocators C++ interfaces commonly allow customizable memory allocation through an [`Allocator`](https://en.cppreference.com/w/cpp/named_req/Allocator) object. RMM provides several `Allocator` and `Allocator`-like classes. #### `polymorphic_allocator` A [stream-ordered](#stream-ordered-memory-allocation) allocator similar to [`std::pmr::polymorphic_allocator`](https://en.cppreference.com/w/cpp/memory/polymorphic_allocator). Unlike the standard C++ `Allocator` interface, the `allocate` and `deallocate` functions take a `cuda_stream_view` indicating the stream on which the (de)allocation occurs. #### `stream_allocator_adaptor` `stream_allocator_adaptor` can be used to adapt a stream-ordered allocator to present a standard `Allocator` interface to consumers that may not be designed to work with a stream-ordered interface. Example: ```c++ rmm::cuda_stream stream; rmm::mr::polymorphic_allocator<int> stream_alloc; // Constructs an adaptor that forwards all (de)allocations to `stream_alloc` on `stream`. auto adapted = rmm::mr::make_stream_allocator_adaptor(stream_alloc, stream); // Allocates 100 bytes using `stream_alloc` on `stream` auto p = adapted.allocate(100); ... // Deallocates using `stream_alloc` on `stream` adapted.deallocate(p,100); ``` #### `thrust_allocator` `thrust_allocator` is a device memory allocator that uses the strongly typed `thrust::device_ptr`, making it usable with containers like `thrust::device_vector`. See [below](#using-rmm-with-thrust) for more information on using RMM with Thrust. ## Device Data Structures ### `device_buffer` An untyped, uninitialized RAII class for stream ordered device memory allocation. #### Example ```c++ cuda_stream_view s{...}; // Allocates at least 100 bytes on stream `s` using the *default* resource rmm::device_buffer b{100,s}; void* p = b.data(); // Raw, untyped pointer to underlying device memory kernel<<<..., s.value()>>>(b.data()); // `b` is only safe to use on `s` rmm::mr::device_memory_resource * mr = new my_custom_resource{...}; // Allocates at least 100 bytes on stream `s` using the resource `mr` rmm::device_buffer b2{100, s, mr}; ``` ### `device_uvector<T>` A typed, uninitialized RAII class for allocation of a contiguous set of elements in device memory. Similar to a `thrust::device_vector`, but as an optimization, does not default initialize the contained elements. 
This optimization restricts the types `T` to trivially copyable types. #### Example ```c++ cuda_stream_view s{...}; // Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the // default resource rmm::device_uvector<int32_t> v(100, s); // Initializes the elements to 0 thrust::uninitialized_fill(thrust::cuda::par.on(s.value()), v.begin(), v.end(), int32_t{0}); rmm::mr::device_memory_resource * mr = new my_custom_resource{...}; // Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the resource `mr` rmm::device_uvector<int32_t> v2{100, s, mr}; ``` ### `device_scalar` A typed, RAII class for allocation of a single element in device memory. This is similar to a `device_uvector` with a single element, but provides convenience functions like modifying the value in device memory from the host, or retrieving the value from device to host. #### Example ```c++ cuda_stream_view s{...}; // Allocates uninitialized storage for a single `int32_t` in device memory rmm::device_scalar<int32_t> a{s}; a.set_value(42, s); // Updates the value in device memory to `42` on stream `s` kernel<<<...,s.value()>>>(a.data()); // Pass raw pointer to underlying element in device memory int32_t v = a.value(s); // Retrieves the value from device to host on stream `s` ``` ## `host_memory_resource` `rmm::mr::host_memory_resource` is the base class that defines the interface for allocating and freeing host memory. Similar to `device_memory_resource`, it has two key functions for (de)allocation: 1. `void* host_memory_resource::allocate(std::size_t bytes, std::size_t alignment)` - Returns a pointer to an allocation of at least `bytes` bytes aligned to the specified `alignment` 2. `void host_memory_resource::deallocate(void* p, std::size_t bytes, std::size_t alignment)` - Reclaims a previous allocation of size `bytes` pointed to by `p`. Unlike `device_memory_resource`, the `host_memory_resource` interface and behavior is identical to `std::pmr::memory_resource`. ### Available Resources #### `new_delete_resource` Uses the global `operator new` and `operator delete` to allocate host memory. #### `pinned_memory_resource` Allocates "pinned" host memory using `cuda(Malloc/Free)Host`. ## Host Data Structures RMM does not currently provide any data structures that interface with `host_memory_resource`. In the future, RMM will provide a similar host-side structure like `device_buffer` and an allocator that can be used with STL containers. ## Using RMM with Thrust RAPIDS and other CUDA libraries make heavy use of Thrust. Thrust uses CUDA device memory in two situations: 1. As the backing store for `thrust::device_vector`, and 2. As temporary storage inside some algorithms, such as `thrust::sort`. RMM provides `rmm::mr::thrust_allocator` as a conforming Thrust allocator that uses `device_memory_resource`s. ### Thrust Algorithms To instruct a Thrust algorithm to use `rmm::mr::thrust_allocator` to allocate temporary storage, you can use the custom Thrust CUDA device execution policy: `rmm::exec_policy(stream)`. ```c++ thrust::sort(rmm::exec_policy(stream, ...); ``` The first `stream` argument is the `stream` to use for `rmm::mr::thrust_allocator`. The second `stream` argument is what should be used to execute the Thrust algorithm. These two arguments must be identical. ## Logging RMM includes two forms of logging. Memory event logging and debug logging. 
### Memory Event Logging and `logging_resource_adaptor` Memory event logging writes details of every allocation or deallocation to a CSV (comma-separated value) file. In C++, Memory Event Logging is enabled by using the `logging_resource_adaptor` as a wrapper around any other `device_memory_resource` object. Each row in the log represents either an allocation or a deallocation. The columns of the file are "Thread, Time, Action, Pointer, Size, Stream". The CSV output files of the `logging_resource_adaptor` can be used as input to `REPLAY_BENCHMARK`, which is available when building RMM from source, in the `gbenchmarks` folder in the build directory. This log replayer can be useful for profiling and debugging allocator issues. The following C++ example creates a logging version of a `cuda_memory_resource` that outputs the log to the file "logs/test1.csv". ```c++ std::string filename{"logs/test1.csv"}; rmm::mr::cuda_memory_resource upstream; rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&upstream, filename}; ``` If a file name is not specified, the environment variable `RMM_LOG_FILE` is queried for the file name. If `RMM_LOG_FILE` is not set, then an exception is thrown by the `logging_resource_adaptor` constructor. In Python, memory event logging is enabled when the `logging` parameter of `rmm.reinitialize()` is set to `True`. The log file name can be set using the `log_file_name` parameter. See `help(rmm.reinitialize)` for full details. ### Debug Logging RMM includes a debug logger which can be enabled to log trace and debug information to a file. This information can show when errors occur, when additional memory is allocated from upstream resources, etc. The default log file is `rmm_log.txt` in the current working directory, but the environment variable `RMM_DEBUG_LOG_FILE` can be set to specify the path and file name. There is a CMake configuration variable `RMM_LOGGING_LEVEL`, which can be set to enable compilation of more detailed logging. The default is `INFO`. Available levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `CRITICAL` and `OFF`. The log relies on the [spdlog](https://github.com/gabime/spdlog.git) library. Note that to see logging below the `INFO` level, the application must also set the logging level at run time. C++ applications must must call `rmm::logger().set_level()`, for example to enable all levels of logging down to `TRACE`, call `rmm::logger().set_level(spdlog::level::trace)` (and compile librmm with `-DRMM_LOGGING_LEVEL=TRACE`). Python applications must call `rmm.set_logging_level()`, for example to enable all levels of logging down to `TRACE`, call `rmm.set_logging_level("trace")` (and compile the RMM Python module with `-DRMM_LOGGING_LEVEL=TRACE`). Note that debug logging is different from the CSV memory allocation logging provided by `rmm::mr::logging_resource_adapter`. The latter is for logging a history of allocation / deallocation actions which can be useful for replay with RMM's replay benchmark. ## RMM and CUDA Memory Bounds Checking Memory allocations taken from a memory resource that allocates a pool of memory (such as `pool_memory_resource` and `arena_memory_resource`) are part of the same low-level CUDA memory allocation. Therefore, out-of-bounds or misaligned accesses to these allocations are not likely to be detected by CUDA tools such as [CUDA Compute Sanitizer](https://docs.nvidia.com/cuda/compute-sanitizer/index.html) memcheck. 
Exceptions to this are `cuda_memory_resource`, which wraps `cudaMalloc`, and `cuda_async_memory_resource`, which uses `cudaMallocAsync` with CUDA's built-in memory pool functionality (CUDA 11.2 or later required). Illegal memory accesses to memory allocated by these resources are detectable with Compute Sanitizer Memcheck. It may be possible in the future to add support for memory bounds checking with other memory resources using NVTX APIs. ## Using RMM in Python Code There are two ways to use RMM in Python code: 1. Using the `rmm.DeviceBuffer` API to explicitly create and manage device memory allocations 2. Transparently via external libraries such as CuPy and Numba RMM provides a `MemoryResource` abstraction to control _how_ device memory is allocated in both the above uses. ### DeviceBuffers A DeviceBuffer represents an **untyped, uninitialized device memory allocation**. DeviceBuffers can be created by providing the size of the allocation in bytes: ```python >>> import rmm >>> buf = rmm.DeviceBuffer(size=100) ``` The size of the allocation and the memory address associated with it can be accessed via the `.size` and `.ptr` attributes respectively: ```python >>> buf.size 100 >>> buf.ptr 140202544726016 ``` DeviceBuffers can also be created by copying data from host memory: ```python >>> import rmm >>> import numpy as np >>> a = np.array([1, 2, 3], dtype='float64') >>> buf = rmm.DeviceBuffer.to_device(a.tobytes()) >>> buf.size 24 ``` Conversely, the data underlying a DeviceBuffer can be copied to the host: ```python >>> np.frombuffer(buf.tobytes()) array([1., 2., 3.]) ``` ### MemoryResource objects `MemoryResource` objects are used to configure how device memory allocations are made by RMM. By default if a `MemoryResource` is not set explicitly, RMM uses the `CudaMemoryResource`, which uses `cudaMalloc` for allocating device memory. `rmm.reinitialize()` provides an easy way to initialize RMM with specific memory resource options across multiple devices. See `help(rmm.reinitialize)` for full details. For lower-level control, the `rmm.mr.set_current_device_resource()` function can be used to set a different MemoryResource for the current CUDA device. For example, enabling the `ManagedMemoryResource` tells RMM to use `cudaMallocManaged` instead of `cudaMalloc` for allocating memory: ```python >>> import rmm >>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource()) ``` > :warning: The default resource must be set for any device **before** > allocating any device memory on that device. Setting or changing the > resource after device allocations have been made can lead to unexpected > behaviour or crashes. See [Multiple Devices](#multiple-devices) As another example, `PoolMemoryResource` allows you to allocate a large "pool" of device memory up-front. Subsequent allocations will draw from this pool of already allocated memory. The example below shows how to construct a PoolMemoryResource with an initial size of 1 GiB and a maximum size of 4 GiB. The pool uses `CudaMemoryResource` as its underlying ("upstream") memory resource: ```python >>> import rmm >>> pool = rmm.mr.PoolMemoryResource( ... rmm.mr.CudaMemoryResource(), ... initial_pool_size=2**30, ... maximum_pool_size=2**32 ... 
As another example, `PoolMemoryResource` allows you to allocate a large "pool" of device memory up-front. Subsequent allocations will draw from this pool of already allocated memory. The example below shows how to construct a PoolMemoryResource with an initial size of 1 GiB and a maximum size of 4 GiB. The pool uses `CudaMemoryResource` as its underlying ("upstream") memory resource:

```python
>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
...     rmm.mr.CudaMemoryResource(),
...     initial_pool_size=2**30,
...     maximum_pool_size=2**32
... )
>>> rmm.mr.set_current_device_resource(pool)
```

Other MemoryResources include:

* `FixedSizeMemoryResource` for allocating fixed blocks of memory
* `BinningMemoryResource` for allocating blocks within specified "bin" sizes from different memory resources

MemoryResources are highly configurable and can be composed together in different ways. See `help(rmm.mr)` for more information. A brief composition sketch appears after the PyTorch example below.

## Using RMM with third-party libraries

### Using RMM with CuPy

You can configure [CuPy](https://cupy.dev/) to use RMM for memory allocations by setting the CuPy CUDA allocator to `rmm_cupy_allocator`:

```python
>>> from rmm.allocators.cupy import rmm_cupy_allocator
>>> import cupy
>>> cupy.cuda.set_allocator(rmm_cupy_allocator)
```

**Note:** This only configures CuPy to use the current RMM resource for allocations. It does not initialize nor change the current resource, e.g., enabling a memory pool. See [here](#memoryresource-objects) for more information on changing the current memory resource.

### Using RMM with Numba

You can configure Numba to use RMM for memory allocations using the Numba [EMM Plugin](https://numba.readthedocs.io/en/stable/cuda/external-memory.html#setting-emm-plugin).

This can be done in two ways:

1. Setting the environment variable `NUMBA_CUDA_MEMORY_MANAGER`:

```bash
$ NUMBA_CUDA_MEMORY_MANAGER=rmm.allocators.numba python (args)
```

2. Using the `set_memory_manager()` function provided by Numba:

```python
>>> from numba import cuda
>>> from rmm.allocators.numba import RMMNumbaManager
>>> cuda.set_memory_manager(RMMNumbaManager)
```

**Note:** This only configures Numba to use the current RMM resource for allocations. It does not initialize nor change the current resource, e.g., enabling a memory pool. See [here](#memoryresource-objects) for more information on changing the current memory resource.

### Using RMM with PyTorch

[PyTorch](https://pytorch.org/docs/stable/notes/cuda.html) can use RMM for memory allocation. For example, to configure PyTorch to use an RMM-managed pool:

```python
import rmm
from rmm.allocators.torch import rmm_torch_allocator
import torch

rmm.reinitialize(pool_allocator=True)
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)
```

PyTorch and RMM will now share the same memory pool.

You can, of course, use a custom memory resource with PyTorch as well:

```python
import rmm
from rmm.allocators.torch import rmm_torch_allocator
import torch

# note that you can configure PyTorch to use RMM either before or
# after changing RMM's memory resource. PyTorch will use whatever
# memory resource is configured to be the "current" memory resource at
# the time of allocation.
torch.cuda.change_current_allocator(rmm_torch_allocator)

# configure RMM to use a managed memory resource, wrapped with a
# statistics resource adaptor that can report information about the
# amount of memory allocated:
mr = rmm.mr.StatisticsResourceAdaptor(rmm.mr.ManagedMemoryResource())
rmm.mr.set_current_device_resource(mr)

x = torch.tensor([1, 2]).cuda()

# the memory resource reports information about PyTorch allocations:
mr.allocation_counts
Out[6]:
{'current_bytes': 16,
 'current_count': 1,
 'peak_bytes': 16,
 'peak_count': 1,
 'total_bytes': 16,
 'total_count': 1}
```
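Returning to the `FixedSizeMemoryResource` / `BinningMemoryResource` composition mentioned in the list above, the following is a minimal sketch of routing small allocations to power-of-two bins while larger requests fall through to an upstream pool. The `min_size_exponent` / `max_size_exponent` parameter names follow the signature documented in `help(rmm.mr.BinningMemoryResource)` and should be confirmed against your installed RMM version:

```python
import rmm

# Upstream resource that owns the memory: a 1 GiB CUDA pool in this sketch.
upstream = rmm.mr.PoolMemoryResource(
    rmm.mr.CudaMemoryResource(),
    initial_pool_size=2**30,
)

# Serve allocations between 2**10 and 2**22 bytes from fixed-size bins;
# anything larger falls through to the upstream pool.
binned = rmm.mr.BinningMemoryResource(
    upstream,
    min_size_exponent=10,
    max_size_exponent=22,
)

rmm.mr.set_current_device_resource(binned)
```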
rapidsai_public_repos/rmm/CHANGELOG.md
# RMM 23.10.00 (11 Oct 2023) ## 🚨 Breaking Changes - Update to Cython 3.0.0 ([#1313](https://github.com/rapidsai/rmm/pull/1313)) [@vyasr](https://github.com/vyasr) ## 🐛 Bug Fixes - Compile cdef public functions from torch_allocator with C ABI ([#1350](https://github.com/rapidsai/rmm/pull/1350)) [@wence-](https://github.com/wence-) - Make doxygen only a conda dependency. ([#1344](https://github.com/rapidsai/rmm/pull/1344)) [@bdice](https://github.com/bdice) - Use `conda mambabuild` not `mamba mambabuild` ([#1338](https://github.com/rapidsai/rmm/pull/1338)) [@wence-](https://github.com/wence-) - Fix stream_ordered_memory_resource attempt to record event in stream from another device ([#1333](https://github.com/rapidsai/rmm/pull/1333)) [@harrism](https://github.com/harrism) ## 📖 Documentation - Clean up headers in CMakeLists.txt. ([#1341](https://github.com/rapidsai/rmm/pull/1341)) [@bdice](https://github.com/bdice) - Add pre-commit hook to validate doxygen ([#1334](https://github.com/rapidsai/rmm/pull/1334)) [@vyasr](https://github.com/vyasr) - Fix doxygen warnings ([#1317](https://github.com/rapidsai/rmm/pull/1317)) [@vyasr](https://github.com/vyasr) - Treat warnings as errors in Python documentation ([#1316](https://github.com/rapidsai/rmm/pull/1316)) [@vyasr](https://github.com/vyasr) ## 🚀 New Features - Enable RMM Debug Logging via Python ([#1339](https://github.com/rapidsai/rmm/pull/1339)) [@harrism](https://github.com/harrism) ## 🛠️ Improvements - Update image names ([#1346](https://github.com/rapidsai/rmm/pull/1346)) [@AyodeAwe](https://github.com/AyodeAwe) - Update to clang 16.0.6. ([#1343](https://github.com/rapidsai/rmm/pull/1343)) [@bdice](https://github.com/bdice) - Update doxygen to 1.9.1 ([#1337](https://github.com/rapidsai/rmm/pull/1337)) [@vyasr](https://github.com/vyasr) - Simplify wheel build scripts and allow alphas of RAPIDS dependencies ([#1335](https://github.com/rapidsai/rmm/pull/1335)) [@divyegala](https://github.com/divyegala) - Use `copy-pr-bot` ([#1329](https://github.com/rapidsai/rmm/pull/1329)) [@ajschmidt8](https://github.com/ajschmidt8) - Add RMM devcontainers ([#1328](https://github.com/rapidsai/rmm/pull/1328)) [@trxcllnt](https://github.com/trxcllnt) - Add Python bindings for `limiting_resource_adaptor` ([#1327](https://github.com/rapidsai/rmm/pull/1327)) [@pentschev](https://github.com/pentschev) - Fix missing jQuery error in docs ([#1321](https://github.com/rapidsai/rmm/pull/1321)) [@AyodeAwe](https://github.com/AyodeAwe) - Use fetch_rapids.cmake. ([#1319](https://github.com/rapidsai/rmm/pull/1319)) [@bdice](https://github.com/bdice) - Update to Cython 3.0.0 ([#1313](https://github.com/rapidsai/rmm/pull/1313)) [@vyasr](https://github.com/vyasr) - Branch 23.10 merge 23.08 ([#1312](https://github.com/rapidsai/rmm/pull/1312)) [@vyasr](https://github.com/vyasr) - Branch 23.10 merge 23.08 ([#1309](https://github.com/rapidsai/rmm/pull/1309)) [@vyasr](https://github.com/vyasr) # RMM 23.08.00 (9 Aug 2023) ## 🚨 Breaking Changes - Stop invoking setup.py ([#1300](https://github.com/rapidsai/rmm/pull/1300)) [@vyasr](https://github.com/vyasr) - Remove now-deprecated top-level allocator functions ([#1281](https://github.com/rapidsai/rmm/pull/1281)) [@wence-](https://github.com/wence-) - Remove padding from device_memory_resource ([#1278](https://github.com/rapidsai/rmm/pull/1278)) [@vyasr](https://github.com/vyasr) ## 🐛 Bug Fixes - Fix typo in wheels-test.yaml. 
([#1310](https://github.com/rapidsai/rmm/pull/1310)) [@bdice](https://github.com/bdice) - Add a missing &#39;#include &lt;array&gt;&#39; in logger.hpp ([#1295](https://github.com/rapidsai/rmm/pull/1295)) [@valgur](https://github.com/valgur) - Use gbench `thread_index()` accessor to fix replay bench compilation ([#1293](https://github.com/rapidsai/rmm/pull/1293)) [@harrism](https://github.com/harrism) - Ensure logger tests don&#39;t generate temp directories in build dir ([#1289](https://github.com/rapidsai/rmm/pull/1289)) [@robertmaynard](https://github.com/robertmaynard) ## 🚀 New Features - Remove now-deprecated top-level allocator functions ([#1281](https://github.com/rapidsai/rmm/pull/1281)) [@wence-](https://github.com/wence-) ## 🛠️ Improvements - Switch to new CI wheel building pipeline ([#1305](https://github.com/rapidsai/rmm/pull/1305)) [@vyasr](https://github.com/vyasr) - Revert CUDA 12.0 CI workflows to branch-23.08. ([#1303](https://github.com/rapidsai/rmm/pull/1303)) [@bdice](https://github.com/bdice) - Update linters: remove flake8, add ruff, update cython-lint ([#1302](https://github.com/rapidsai/rmm/pull/1302)) [@vyasr](https://github.com/vyasr) - Adding identify minimum version requirement ([#1301](https://github.com/rapidsai/rmm/pull/1301)) [@hyperbolic2346](https://github.com/hyperbolic2346) - Stop invoking setup.py ([#1300](https://github.com/rapidsai/rmm/pull/1300)) [@vyasr](https://github.com/vyasr) - Use cuda-version to constrain cudatoolkit. ([#1296](https://github.com/rapidsai/rmm/pull/1296)) [@bdice](https://github.com/bdice) - Update to CMake 3.26.4 ([#1291](https://github.com/rapidsai/rmm/pull/1291)) [@vyasr](https://github.com/vyasr) - use rapids-upload-docs script ([#1288](https://github.com/rapidsai/rmm/pull/1288)) [@AyodeAwe](https://github.com/AyodeAwe) - Reorder parameters in RMM_EXPECTS ([#1286](https://github.com/rapidsai/rmm/pull/1286)) [@vyasr](https://github.com/vyasr) - Remove documentation build scripts for Jenkins ([#1285](https://github.com/rapidsai/rmm/pull/1285)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove padding from device_memory_resource ([#1278](https://github.com/rapidsai/rmm/pull/1278)) [@vyasr](https://github.com/vyasr) - Unpin scikit-build upper bound ([#1275](https://github.com/rapidsai/rmm/pull/1275)) [@vyasr](https://github.com/vyasr) - RMM: Build CUDA 12 packages ([#1223](https://github.com/rapidsai/rmm/pull/1223)) [@bdice](https://github.com/bdice) # RMM 23.06.00 (7 Jun 2023) ## 🚨 Breaking Changes - Update minimum Python version to Python 3.9 ([#1252](https://github.com/rapidsai/rmm/pull/1252)) [@shwina](https://github.com/shwina) ## 🐛 Bug Fixes - Ensure Logger tests aren&#39;t run in parallel ([#1277](https://github.com/rapidsai/rmm/pull/1277)) [@robertmaynard](https://github.com/robertmaynard) - Pin to scikit-build&lt;0.17.2. ([#1262](https://github.com/rapidsai/rmm/pull/1262)) [@bdice](https://github.com/bdice) ## 🛠️ Improvements - Require Numba 0.57.0+ &amp; NumPy 1.21.0+ ([#1279](https://github.com/rapidsai/rmm/pull/1279)) [@jakirkham](https://github.com/jakirkham) - Align test_cpp.sh with conventions in other RAPIDS repos. 
([#1269](https://github.com/rapidsai/rmm/pull/1269)) [@bdice](https://github.com/bdice) - Switch back to using primary shared-action-workflows branch ([#1268](https://github.com/rapidsai/rmm/pull/1268)) [@vyasr](https://github.com/vyasr) - Update recipes to GTest version &gt;=1.13.0 ([#1263](https://github.com/rapidsai/rmm/pull/1263)) [@bdice](https://github.com/bdice) - Support CUDA 12.0 for pip wheels ([#1259](https://github.com/rapidsai/rmm/pull/1259)) [@bdice](https://github.com/bdice) - Add build vars ([#1258](https://github.com/rapidsai/rmm/pull/1258)) [@AyodeAwe](https://github.com/AyodeAwe) - Enable sccache hits from local builds ([#1257](https://github.com/rapidsai/rmm/pull/1257)) [@AyodeAwe](https://github.com/AyodeAwe) - Revert to branch-23.06 for shared-action-workflows ([#1256](https://github.com/rapidsai/rmm/pull/1256)) [@shwina](https://github.com/shwina) - run docs builds nightly too ([#1255](https://github.com/rapidsai/rmm/pull/1255)) [@AyodeAwe](https://github.com/AyodeAwe) - Build wheels using new single image workflow ([#1254](https://github.com/rapidsai/rmm/pull/1254)) [@vyasr](https://github.com/vyasr) - Update minimum Python version to Python 3.9 ([#1252](https://github.com/rapidsai/rmm/pull/1252)) [@shwina](https://github.com/shwina) - Remove usage of rapids-get-rapids-version-from-git ([#1251](https://github.com/rapidsai/rmm/pull/1251)) [@jjacobelli](https://github.com/jjacobelli) - Remove wheel pytest verbosity ([#1249](https://github.com/rapidsai/rmm/pull/1249)) [@sevagh](https://github.com/sevagh) - Update clang-format to 16.0.1. ([#1246](https://github.com/rapidsai/rmm/pull/1246)) [@bdice](https://github.com/bdice) - Remove uses-setup-env-vars ([#1242](https://github.com/rapidsai/rmm/pull/1242)) [@vyasr](https://github.com/vyasr) - Move RMM_LOGGING_ASSERT into separate header ([#1241](https://github.com/rapidsai/rmm/pull/1241)) [@ahendriksen](https://github.com/ahendriksen) - Use ARC V2 self-hosted runners for GPU jobs ([#1239](https://github.com/rapidsai/rmm/pull/1239)) [@jjacobelli](https://github.com/jjacobelli) # RMM 23.04.00 (6 Apr 2023) ## 🐛 Bug Fixes - Remove MANIFEST.in use auto-generated one for sdists and package_data for wheels ([#1233](https://github.com/rapidsai/rmm/pull/1233)) [@vyasr](https://github.com/vyasr) - Fix update-version.sh. ([#1227](https://github.com/rapidsai/rmm/pull/1227)) [@vyasr](https://github.com/vyasr) - Specify include_package_data to setup ([#1218](https://github.com/rapidsai/rmm/pull/1218)) [@vyasr](https://github.com/vyasr) - Revert changes overriding rapids-cmake repo. 
([#1209](https://github.com/rapidsai/rmm/pull/1209)) [@bdice](https://github.com/bdice) - Synchronize stream in `DeviceBuffer.c_from_unique_ptr` constructor ([#1100](https://github.com/rapidsai/rmm/pull/1100)) [@shwina](https://github.com/shwina) ## 🚀 New Features - Use rapids-cmake parallel testing feature ([#1183](https://github.com/rapidsai/rmm/pull/1183)) [@robertmaynard](https://github.com/robertmaynard) ## 🛠️ Improvements - Stop setting package version attribute in wheels ([#1236](https://github.com/rapidsai/rmm/pull/1236)) [@vyasr](https://github.com/vyasr) - Add codespell as a linter ([#1231](https://github.com/rapidsai/rmm/pull/1231)) [@bdice](https://github.com/bdice) - Pass `AWS_SESSION_TOKEN` and `SCCACHE_S3_USE_SSL` vars to conda build ([#1230](https://github.com/rapidsai/rmm/pull/1230)) [@ajschmidt8](https://github.com/ajschmidt8) - Update to GCC 11 ([#1228](https://github.com/rapidsai/rmm/pull/1228)) [@bdice](https://github.com/bdice) - Fix some minor oversights in the conversion to pyproject.toml ([#1226](https://github.com/rapidsai/rmm/pull/1226)) [@vyasr](https://github.com/vyasr) - Remove pickle compatibility layer in tests for Python &lt; 3.8. ([#1224](https://github.com/rapidsai/rmm/pull/1224)) [@bdice](https://github.com/bdice) - Move external allocators into rmm.allocators module to defer imports ([#1221](https://github.com/rapidsai/rmm/pull/1221)) [@wence-](https://github.com/wence-) - Generate pyproject.toml dependencies using dfg ([#1219](https://github.com/rapidsai/rmm/pull/1219)) [@vyasr](https://github.com/vyasr) - Run rapids-dependency-file-generator via pre-commit ([#1217](https://github.com/rapidsai/rmm/pull/1217)) [@vyasr](https://github.com/vyasr) - Skip docs job in nightly runs ([#1215](https://github.com/rapidsai/rmm/pull/1215)) [@AyodeAwe](https://github.com/AyodeAwe) - CI: Remove specification of manual stage for check_style.sh script. ([#1214](https://github.com/rapidsai/rmm/pull/1214)) [@csadorf](https://github.com/csadorf) - Use script rather than environment variable to modify package names ([#1212](https://github.com/rapidsai/rmm/pull/1212)) [@vyasr](https://github.com/vyasr) - Reduce error handling verbosity in CI tests scripts ([#1204](https://github.com/rapidsai/rmm/pull/1204)) [@AjayThorve](https://github.com/AjayThorve) - Update shared workflow branches ([#1203](https://github.com/rapidsai/rmm/pull/1203)) [@ajschmidt8](https://github.com/ajschmidt8) - Use date in build string instead of in the version. ([#1195](https://github.com/rapidsai/rmm/pull/1195)) [@bdice](https://github.com/bdice) - Stop using versioneer to manage versions ([#1190](https://github.com/rapidsai/rmm/pull/1190)) [@vyasr](https://github.com/vyasr) - Update to spdlog&gt;=1.11.0, fmt&gt;=9.1.0. 
([#1177](https://github.com/rapidsai/rmm/pull/1177)) [@bdice](https://github.com/bdice) - Migrate as much as possible to `pyproject.toml` ([#1151](https://github.com/rapidsai/rmm/pull/1151)) [@jakirkham](https://github.com/jakirkham) # RMM 23.02.00 (9 Feb 2023) ## 🐛 Bug Fixes - pre-commit: Update isort version to 5.12.0 ([#1197](https://github.com/rapidsai/rmm/pull/1197)) [@wence-](https://github.com/wence-) - Revert &quot;Upgrade to spdlog 1.10 ([#1173)&quot; (#1176](https://github.com/rapidsai/rmm/pull/1173)&quot; (#1176)) [@bdice](https://github.com/bdice) - Ensure `UpstreamResourceAdaptor` is not cleared by the Python GC ([#1170](https://github.com/rapidsai/rmm/pull/1170)) [@shwina](https://github.com/shwina) ## 📖 Documentation - Fix documentation author ([#1188](https://github.com/rapidsai/rmm/pull/1188)) [@bdice](https://github.com/bdice) ## 🚀 New Features - Add RMM PyTorch allocator ([#1168](https://github.com/rapidsai/rmm/pull/1168)) [@shwina](https://github.com/shwina) ## 🛠️ Improvements - Update shared workflow branches ([#1201](https://github.com/rapidsai/rmm/pull/1201)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix update-version.sh ([#1199](https://github.com/rapidsai/rmm/pull/1199)) [@raydouglass](https://github.com/raydouglass) - Use CTK 118/cp310 branch of wheel workflows ([#1193](https://github.com/rapidsai/rmm/pull/1193)) [@sevagh](https://github.com/sevagh) - Update `build.yaml` workflow to reduce verbosity ([#1192](https://github.com/rapidsai/rmm/pull/1192)) [@AyodeAwe](https://github.com/AyodeAwe) - Fix `build.yaml` workflow ([#1191](https://github.com/rapidsai/rmm/pull/1191)) [@ajschmidt8](https://github.com/ajschmidt8) - add docs_build step ([#1189](https://github.com/rapidsai/rmm/pull/1189)) [@AyodeAwe](https://github.com/AyodeAwe) - Upkeep/wheel param cleanup ([#1187](https://github.com/rapidsai/rmm/pull/1187)) [@sevagh](https://github.com/sevagh) - Update workflows for nightly tests ([#1186](https://github.com/rapidsai/rmm/pull/1186)) [@ajschmidt8](https://github.com/ajschmidt8) - Build CUDA `11.8` and Python `3.10` Packages ([#1184](https://github.com/rapidsai/rmm/pull/1184)) [@ajschmidt8](https://github.com/ajschmidt8) - Build wheels alongside conda CI ([#1182](https://github.com/rapidsai/rmm/pull/1182)) [@sevagh](https://github.com/sevagh) - Update conda recipes. ([#1180](https://github.com/rapidsai/rmm/pull/1180)) [@bdice](https://github.com/bdice) - Update PR Workflow ([#1174](https://github.com/rapidsai/rmm/pull/1174)) [@ajschmidt8](https://github.com/ajschmidt8) - Upgrade to spdlog 1.10 ([#1173](https://github.com/rapidsai/rmm/pull/1173)) [@kkraus14](https://github.com/kkraus14) - Enable `codecov` ([#1171](https://github.com/rapidsai/rmm/pull/1171)) [@ajschmidt8](https://github.com/ajschmidt8) - Add support for Python 3.10. 
([#1166](https://github.com/rapidsai/rmm/pull/1166)) [@bdice](https://github.com/bdice) - Update pre-commit hooks ([#1154](https://github.com/rapidsai/rmm/pull/1154)) [@bdice](https://github.com/bdice) # RMM 22.12.00 (8 Dec 2022) ## 🐛 Bug Fixes - Don&#39;t use CMake 3.25.0 as it has a show stopping FindCUDAToolkit bug ([#1162](https://github.com/rapidsai/rmm/pull/1162)) [@robertmaynard](https://github.com/robertmaynard) - Relax test for async memory pool IPC handle support ([#1130](https://github.com/rapidsai/rmm/pull/1130)) [@bdice](https://github.com/bdice) ## 📖 Documentation - Use rapidsai CODE_OF_CONDUCT.md ([#1159](https://github.com/rapidsai/rmm/pull/1159)) [@bdice](https://github.com/bdice) - Fix doxygen formatting for set_stream. ([#1153](https://github.com/rapidsai/rmm/pull/1153)) [@bdice](https://github.com/bdice) - Document required Python dependencies to build from source ([#1146](https://github.com/rapidsai/rmm/pull/1146)) [@ccoulombe](https://github.com/ccoulombe) - fix failed automerge (Branch 22.12 merge 22.10) ([#1131](https://github.com/rapidsai/rmm/pull/1131)) [@harrism](https://github.com/harrism) ## 🚀 New Features - Add wheel builds to rmm ([#1148](https://github.com/rapidsai/rmm/pull/1148)) [@vyasr](https://github.com/vyasr) ## 🛠️ Improvements - Align __version__ with wheel version ([#1161](https://github.com/rapidsai/rmm/pull/1161)) [@sevagh](https://github.com/sevagh) - Add `ninja` &amp; Update CI environment variables ([#1155](https://github.com/rapidsai/rmm/pull/1155)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove CUDA 11.0 from dependencies.yaml. ([#1152](https://github.com/rapidsai/rmm/pull/1152)) [@bdice](https://github.com/bdice) - Update dependencies schema. ([#1147](https://github.com/rapidsai/rmm/pull/1147)) [@bdice](https://github.com/bdice) - Enable sccache for python build ([#1145](https://github.com/rapidsai/rmm/pull/1145)) [@Ethyling](https://github.com/Ethyling) - Remove Jenkins scripts ([#1143](https://github.com/rapidsai/rmm/pull/1143)) [@ajschmidt8](https://github.com/ajschmidt8) - Use `ninja` in GitHub Actions ([#1142](https://github.com/rapidsai/rmm/pull/1142)) [@ajschmidt8](https://github.com/ajschmidt8) - Switch to using rapids-cmake for gbench. ([#1139](https://github.com/rapidsai/rmm/pull/1139)) [@vyasr](https://github.com/vyasr) - Remove stale labeler ([#1137](https://github.com/rapidsai/rmm/pull/1137)) [@raydouglass](https://github.com/raydouglass) - Add a public `copy` API to `DeviceBuffer` ([#1128](https://github.com/rapidsai/rmm/pull/1128)) [@galipremsagar](https://github.com/galipremsagar) - Format gdb script. 
([#1127](https://github.com/rapidsai/rmm/pull/1127)) [@bdice](https://github.com/bdice) # RMM 22.10.00 (12 Oct 2022) ## 🐛 Bug Fixes - Ensure consistent spdlog dependency target no matter the source ([#1101](https://github.com/rapidsai/rmm/pull/1101)) [@robertmaynard](https://github.com/robertmaynard) - Remove cuda event deadlocking issues in device mr tests ([#1097](https://github.com/rapidsai/rmm/pull/1097)) [@robertmaynard](https://github.com/robertmaynard) - Propagate exceptions raised in Python callback functions ([#1096](https://github.com/rapidsai/rmm/pull/1096)) [@madsbk](https://github.com/madsbk) - Avoid unused parameter warnings in do_get_mem_info ([#1084](https://github.com/rapidsai/rmm/pull/1084)) [@fkallen](https://github.com/fkallen) - Use rapids-cmake 22.10 best practice for RAPIDS.cmake location ([#1083](https://github.com/rapidsai/rmm/pull/1083)) [@robertmaynard](https://github.com/robertmaynard) ## 📖 Documentation - Document that minimum required CMake version is now 3.23.1 ([#1098](https://github.com/rapidsai/rmm/pull/1098)) [@robertmaynard](https://github.com/robertmaynard) - Fix docs for module-level API ([#1091](https://github.com/rapidsai/rmm/pull/1091)) [@bdice](https://github.com/bdice) - Improve DeviceBuffer docs. ([#1090](https://github.com/rapidsai/rmm/pull/1090)) [@bdice](https://github.com/bdice) - Branch 22.10 merge 22.08 ([#1089](https://github.com/rapidsai/rmm/pull/1089)) [@harrism](https://github.com/harrism) - Improve docs formatting and update links. ([#1086](https://github.com/rapidsai/rmm/pull/1086)) [@bdice](https://github.com/bdice) - Add resources section to README. ([#1085](https://github.com/rapidsai/rmm/pull/1085)) [@bdice](https://github.com/bdice) - Simplify PR template. ([#1080](https://github.com/rapidsai/rmm/pull/1080)) [@bdice](https://github.com/bdice) ## 🚀 New Features - Add `gdb` pretty-printers for rmm types ([#1088](https://github.com/rapidsai/rmm/pull/1088)) [@upsj](https://github.com/upsj) - Support using THRUST_WRAPPED_NAMESPACE ([#1077](https://github.com/rapidsai/rmm/pull/1077)) [@robertmaynard](https://github.com/robertmaynard) ## 🛠️ Improvements - GH Actions - Enforce `checks` before builds run ([#1125](https://github.com/rapidsai/rmm/pull/1125)) [@ajschmidt8](https://github.com/ajschmidt8) - Update GH Action Workflows ([#1123](https://github.com/rapidsai/rmm/pull/1123)) [@ajschmidt8](https://github.com/ajschmidt8) - Add `cudatoolkit` versions to `dependencies.yaml` ([#1119](https://github.com/rapidsai/rmm/pull/1119)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove `rmm` installation from `librmm` tests` ([#1117](https://github.com/rapidsai/rmm/pull/1117)) [@ajschmidt8](https://github.com/ajschmidt8) - Add GitHub Actions workflows ([#1104](https://github.com/rapidsai/rmm/pull/1104)) [@Ethyling](https://github.com/Ethyling) - `build.sh`: accept `--help` ([#1093](https://github.com/rapidsai/rmm/pull/1093)) [@madsbk](https://github.com/madsbk) - Move clang dependency to conda develop packages. 
([#1092](https://github.com/rapidsai/rmm/pull/1092)) [@bdice](https://github.com/bdice) - Add device_uvector::reserve and device_buffer::reserve ([#1079](https://github.com/rapidsai/rmm/pull/1079)) [@upsj](https://github.com/upsj) - Bifurcate Dependency Lists ([#1073](https://github.com/rapidsai/rmm/pull/1073)) [@ajschmidt8](https://github.com/ajschmidt8) # RMM 22.08.00 (17 Aug 2022) ## 🐛 Bug Fixes - Specify `language` as `&#39;en&#39;` instead of `None` ([#1059](https://github.com/rapidsai/rmm/pull/1059)) [@jakirkham](https://github.com/jakirkham) - Add a missed `except *` ([#1057](https://github.com/rapidsai/rmm/pull/1057)) [@shwina](https://github.com/shwina) - Properly handle cudaMemHandleTypeNone and cudaErrorInvalidValue in is_export_handle_type_supported ([#1055](https://github.com/rapidsai/rmm/pull/1055)) [@gerashegalov](https://github.com/gerashegalov) ## 📖 Documentation - Centralize common css &amp; js code in docs ([#1075](https://github.com/rapidsai/rmm/pull/1075)) [@galipremsagar](https://github.com/galipremsagar) ## 🛠️ Improvements - Add the ability to register and unregister reinitialization hooks ([#1072](https://github.com/rapidsai/rmm/pull/1072)) [@shwina](https://github.com/shwina) - Update isort to 5.10.1 ([#1069](https://github.com/rapidsai/rmm/pull/1069)) [@vyasr](https://github.com/vyasr) - Forward merge 22.06 into 22.08 ([#1067](https://github.com/rapidsai/rmm/pull/1067)) [@vyasr](https://github.com/vyasr) - Forward merge 22.06 into 22.08 ([#1066](https://github.com/rapidsai/rmm/pull/1066)) [@vyasr](https://github.com/vyasr) - Pin max version of `cuda-python` to `11.7` ([#1062](https://github.com/rapidsai/rmm/pull/1062)) [@galipremsagar](https://github.com/galipremsagar) - Change build.sh to find C++ library by default and avoid shadowing CMAKE_ARGS ([#1053](https://github.com/rapidsai/rmm/pull/1053)) [@vyasr](https://github.com/vyasr) # RMM 22.06.00 (7 Jun 2022) ## 🐛 Bug Fixes - Clarifies Python requirements and version constraints ([#1037](https://github.com/rapidsai/rmm/pull/1037)) [@jakirkham](https://github.com/jakirkham) - Use `lib` (not `lib64`) for libraries ([#1024](https://github.com/rapidsai/rmm/pull/1024)) [@jakirkham](https://github.com/jakirkham) - Properly enable Cython docstrings. 
([#1020](https://github.com/rapidsai/rmm/pull/1020)) [@vyasr](https://github.com/vyasr) - Update `RMMNumbaManager` to handle `NUMBA_CUDA_USE_NVIDIA_BINDING=1` ([#1004](https://github.com/rapidsai/rmm/pull/1004)) [@brandon-b-miller](https://github.com/brandon-b-miller) ## 📖 Documentation - Clarify using RMM with other Python libraries ([#1034](https://github.com/rapidsai/rmm/pull/1034)) [@jrhemstad](https://github.com/jrhemstad) - Replace `to_device` with `DeviceBuffer.to_device` ([#1033](https://github.com/rapidsai/rmm/pull/1033)) [@wence-](https://github.com/wence-) - Documentation Fix: Replace `cudf::logic_error` with `rmm::logic_error` ([#1021](https://github.com/rapidsai/rmm/pull/1021)) [@codereport](https://github.com/codereport) ## 🚀 New Features - Add rmm::exec_policy_nosync ([#1009](https://github.com/rapidsai/rmm/pull/1009)) [@fkallen](https://github.com/fkallen) - Callback memory resource ([#980](https://github.com/rapidsai/rmm/pull/980)) [@shwina](https://github.com/shwina) ## 🛠️ Improvements - Fix conda recipes for conda compilers ([#1043](https://github.com/rapidsai/rmm/pull/1043)) [@Ethyling](https://github.com/Ethyling) - Use new rapids-cython component of rapids-cmake to simplify builds ([#1031](https://github.com/rapidsai/rmm/pull/1031)) [@vyasr](https://github.com/vyasr) - Merge branch-22.04 to branch-22.06 ([#1028](https://github.com/rapidsai/rmm/pull/1028)) [@jakirkham](https://github.com/jakirkham) - Update CMake pinning to just avoid 3.23.0. ([#1023](https://github.com/rapidsai/rmm/pull/1023)) [@vyasr](https://github.com/vyasr) - Build python using conda in GPU jobs ([#1017](https://github.com/rapidsai/rmm/pull/1017)) [@Ethyling](https://github.com/Ethyling) - Remove pip requirements file. ([#1015](https://github.com/rapidsai/rmm/pull/1015)) [@bdice](https://github.com/bdice) - Clean up Thrust includes. ([#1011](https://github.com/rapidsai/rmm/pull/1011)) [@bdice](https://github.com/bdice) - Update black version ([#1010](https://github.com/rapidsai/rmm/pull/1010)) [@vyasr](https://github.com/vyasr) - Update cmake-format version for pre-commit and environments. ([#995](https://github.com/rapidsai/rmm/pull/995)) [@vyasr](https://github.com/vyasr) - Use conda compilers ([#977](https://github.com/rapidsai/rmm/pull/977)) [@Ethyling](https://github.com/Ethyling) - Build conda packages using mambabuild ([#900](https://github.com/rapidsai/rmm/pull/900)) [@Ethyling](https://github.com/Ethyling) # RMM 22.04.00 (6 Apr 2022) ## 🐛 Bug Fixes - Add cuda-python dependency to pyproject.toml ([#994](https://github.com/rapidsai/rmm/pull/994)) [@sevagh](https://github.com/sevagh) - Disable opportunistic reuse in async mr when cuda driver &lt; 11.5 ([#993](https://github.com/rapidsai/rmm/pull/993)) [@rongou](https://github.com/rongou) - Use CUDA 11.2+ features via dlopen ([#990](https://github.com/rapidsai/rmm/pull/990)) [@robertmaynard](https://github.com/robertmaynard) - Skip async mr tests when cuda runtime/driver &lt; 11.2 ([#986](https://github.com/rapidsai/rmm/pull/986)) [@rongou](https://github.com/rongou) - Fix warning/error in debug assertion in device_uvector.hpp ([#979](https://github.com/rapidsai/rmm/pull/979)) [@harrism](https://github.com/harrism) - Fix signed/unsigned comparison warning ([#970](https://github.com/rapidsai/rmm/pull/970)) [@jlowe](https://github.com/jlowe) - Fix comparison of async MRs with different underlying pools. 
([#965](https://github.com/rapidsai/rmm/pull/965)) [@harrism](https://github.com/harrism) ## 🚀 New Features - Use scikit-build for the build process ([#976](https://github.com/rapidsai/rmm/pull/976)) [@vyasr](https://github.com/vyasr) ## 🛠️ Improvements - Temporarily disable new `ops-bot` functionality ([#1005](https://github.com/rapidsai/rmm/pull/1005)) [@ajschmidt8](https://github.com/ajschmidt8) - Rename `librmm_tests` to `librmm-tests` ([#1000](https://github.com/rapidsai/rmm/pull/1000)) [@ajschmidt8](https://github.com/ajschmidt8) - Update `librmm` `conda` recipe ([#997](https://github.com/rapidsai/rmm/pull/997)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove `no_cma`/`has_cma` variants ([#996](https://github.com/rapidsai/rmm/pull/996)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix free-before-alloc in multithreaded test ([#992](https://github.com/rapidsai/rmm/pull/992)) [@aladram](https://github.com/aladram) - Add `.github/ops-bot.yaml` config file ([#991](https://github.com/rapidsai/rmm/pull/991)) [@ajschmidt8](https://github.com/ajschmidt8) - Log allocation failures ([#988](https://github.com/rapidsai/rmm/pull/988)) [@rongou](https://github.com/rongou) - Update `librmm` `conda` outputs ([#983](https://github.com/rapidsai/rmm/pull/983)) [@ajschmidt8](https://github.com/ajschmidt8) - Bump Python requirements in `setup.cfg` and `rmm_dev.yml` ([#982](https://github.com/rapidsai/rmm/pull/982)) [@shwina](https://github.com/shwina) - New benchmark compares concurrent throughput of device_vector and device_uvector ([#981](https://github.com/rapidsai/rmm/pull/981)) [@harrism](https://github.com/harrism) - Update `librmm` recipe to output `librmm_tests` package ([#978](https://github.com/rapidsai/rmm/pull/978)) [@ajschmidt8](https://github.com/ajschmidt8) - Update upload.sh to use `--croot` ([#975](https://github.com/rapidsai/rmm/pull/975)) [@AyodeAwe](https://github.com/AyodeAwe) - Fix `conda` uploads ([#974](https://github.com/rapidsai/rmm/pull/974)) [@ajschmidt8](https://github.com/ajschmidt8) - Add CMake `install` rules for tests ([#969](https://github.com/rapidsai/rmm/pull/969)) [@ajschmidt8](https://github.com/ajschmidt8) - Add device_buffer::ssize() and device_uvector::ssize() ([#966](https://github.com/rapidsai/rmm/pull/966)) [@harrism](https://github.com/harrism) - Added yml file for cudatoolkit version 11.6 ([#964](https://github.com/rapidsai/rmm/pull/964)) [@alhad-deshpande](https://github.com/alhad-deshpande) - Replace `ccache` with `sccache` ([#963](https://github.com/rapidsai/rmm/pull/963)) [@ajschmidt8](https://github.com/ajschmidt8) - Make `pool_memory_resource::pool_size()` public ([#962](https://github.com/rapidsai/rmm/pull/962)) [@shwina](https://github.com/shwina) - Allow construction of cuda_async_memory_resource from existing pool ([#889](https://github.com/rapidsai/rmm/pull/889)) [@fkallen](https://github.com/fkallen) # RMM 22.02.00 (2 Feb 2022) ## 🐛 Bug Fixes - Use numba to get CUDA runtime version. 
([#946](https://github.com/rapidsai/rmm/pull/946)) [@bdice](https://github.com/bdice) - Temporarily disable warnings for unknown pragmas ([#942](https://github.com/rapidsai/rmm/pull/942)) [@harrism](https://github.com/harrism) - Build benchmarks in RMM CI ([#941](https://github.com/rapidsai/rmm/pull/941)) [@harrism](https://github.com/harrism) - Headers that use `std::thread` now include &lt;thread&gt; ([#938](https://github.com/rapidsai/rmm/pull/938)) [@robertmaynard](https://github.com/robertmaynard) - Fix failing stream test with a debug-only death test ([#934](https://github.com/rapidsai/rmm/pull/934)) [@harrism](https://github.com/harrism) - Prevent `DeviceBuffer` DeviceMemoryResource premature release ([#931](https://github.com/rapidsai/rmm/pull/931)) [@viclafargue](https://github.com/viclafargue) - Fix failing tracking test ([#929](https://github.com/rapidsai/rmm/pull/929)) [@harrism](https://github.com/harrism) ## 🛠️ Improvements - Prepare upload scripts for Python 3.7 removal ([#952](https://github.com/rapidsai/rmm/pull/952)) [@Ethyling](https://github.com/Ethyling) - Fix imports tests syntax ([#935](https://github.com/rapidsai/rmm/pull/935)) [@Ethyling](https://github.com/Ethyling) - Remove `IncludeCategories` from `.clang-format` ([#933](https://github.com/rapidsai/rmm/pull/933)) [@codereport](https://github.com/codereport) - Replace use of custom CUDA bindings with CUDA-Python ([#930](https://github.com/rapidsai/rmm/pull/930)) [@shwina](https://github.com/shwina) - Remove `setup.py` from `update-release.sh` script ([#926](https://github.com/rapidsai/rmm/pull/926)) [@ajschmidt8](https://github.com/ajschmidt8) - Improve C++ Test Coverage ([#920](https://github.com/rapidsai/rmm/pull/920)) [@harrism](https://github.com/harrism) - Improve the Arena allocator to reduce memory fragmentation ([#916](https://github.com/rapidsai/rmm/pull/916)) [@rongou](https://github.com/rongou) - Simplify CMake linting with cmake-format ([#913](https://github.com/rapidsai/rmm/pull/913)) [@vyasr](https://github.com/vyasr) # RMM 21.12.00 (9 Dec 2021) ## 🚨 Breaking Changes - Parameterize exception type caught by failure_callback_resource_adaptor ([#898](https://github.com/rapidsai/rmm/pull/898)) [@harrism](https://github.com/harrism) ## 🐛 Bug Fixes - Update recipes for Enhanced Compatibility ([#910](https://github.com/rapidsai/rmm/pull/910)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix `librmm` uploads ([#909](https://github.com/rapidsai/rmm/pull/909)) [@ajschmidt8](https://github.com/ajschmidt8) - Use spdlog/fmt/ostr.h as it supports external fmt library ([#907](https://github.com/rapidsai/rmm/pull/907)) [@robertmaynard](https://github.com/robertmaynard) - Fix variable names in logging macro calls ([#897](https://github.com/rapidsai/rmm/pull/897)) [@harrism](https://github.com/harrism) - Keep rapids cmake version in sync ([#876](https://github.com/rapidsai/rmm/pull/876)) [@robertmaynard](https://github.com/robertmaynard) ## 📖 Documentation - Replace `to_device()` in docs with `DeviceBuffer.to_device()` ([#902](https://github.com/rapidsai/rmm/pull/902)) [@shwina](https://github.com/shwina) - Fix return value docs for supports_get_mem_info ([#884](https://github.com/rapidsai/rmm/pull/884)) [@harrism](https://github.com/harrism) ## 🚀 New Features - Out-of-memory callback resource adaptor ([#892](https://github.com/rapidsai/rmm/pull/892)) [@madsbk](https://github.com/madsbk) ## 🛠️ Improvements - suppress spurious clang-tidy warnings in debug macros ([#914](https://github.com/rapidsai/rmm/pull/914)) 
[@rongou](https://github.com/rongou) - C++ code coverage support ([#905](https://github.com/rapidsai/rmm/pull/905)) [@harrism](https://github.com/harrism) - Provide ./build.sh flag to control CUDA async malloc support ([#901](https://github.com/rapidsai/rmm/pull/901)) [@robertmaynard](https://github.com/robertmaynard) - Parameterize exception type caught by failure_callback_resource_adaptor ([#898](https://github.com/rapidsai/rmm/pull/898)) [@harrism](https://github.com/harrism) - Throw `rmm::out_of_memory` when we know for sure ([#894](https://github.com/rapidsai/rmm/pull/894)) [@rongou](https://github.com/rongou) - Update `conda` recipes for Enhanced Compatibility effort ([#893](https://github.com/rapidsai/rmm/pull/893)) [@ajschmidt8](https://github.com/ajschmidt8) - Add functions to query the stream of device_uvector and device_scalar ([#887](https://github.com/rapidsai/rmm/pull/887)) [@fkallen](https://github.com/fkallen) - Add spdlog to install export set ([#886](https://github.com/rapidsai/rmm/pull/886)) [@trxcllnt](https://github.com/trxcllnt) # RMM 21.10.00 (7 Oct 2021) ## 🚨 Breaking Changes - Delete cuda_async_memory_resource copy/move ctors/operators ([#860](https://github.com/rapidsai/rmm/pull/860)) [@jrhemstad](https://github.com/jrhemstad) ## 🐛 Bug Fixes - Fix parameter name in asserts ([#875](https://github.com/rapidsai/rmm/pull/875)) [@vyasr](https://github.com/vyasr) - Disallow zero-size stream pools ([#873](https://github.com/rapidsai/rmm/pull/873)) [@harrism](https://github.com/harrism) - Correct namespace usage in host memory resources ([#872](https://github.com/rapidsai/rmm/pull/872)) [@divyegala](https://github.com/divyegala) - fix race condition in limiting resource adapter ([#869](https://github.com/rapidsai/rmm/pull/869)) [@rongou](https://github.com/rongou) - Install the right cudatoolkit in the conda env in gpu/build.sh ([#864](https://github.com/rapidsai/rmm/pull/864)) [@shwina](https://github.com/shwina) - Disable copy/move ctors and operator= from free_list classes ([#862](https://github.com/rapidsai/rmm/pull/862)) [@harrism](https://github.com/harrism) - Delete cuda_async_memory_resource copy/move ctors/operators ([#860](https://github.com/rapidsai/rmm/pull/860)) [@jrhemstad](https://github.com/jrhemstad) - Improve concurrency of stream_ordered_memory_resource by stealing less ([#851](https://github.com/rapidsai/rmm/pull/851)) [@harrism](https://github.com/harrism) - Use the new RAPIDS.cmake to fetch rapids-cmake ([#838](https://github.com/rapidsai/rmm/pull/838)) [@robertmaynard](https://github.com/robertmaynard) ## 📖 Documentation - Forward-merge branch-21.08 to branch-21.10 ([#846](https://github.com/rapidsai/rmm/pull/846)) [@jakirkham](https://github.com/jakirkham) ## 🛠️ Improvements - Forward-merge `branch-21.08` into `branch-21.10` ([#877](https://github.com/rapidsai/rmm/pull/877)) [@ajschmidt8](https://github.com/ajschmidt8) - Add .clang-tidy and fix clang-tidy warnings ([#857](https://github.com/rapidsai/rmm/pull/857)) [@harrism](https://github.com/harrism) - Update to use rapids-cmake 21.10 pre-configured packages ([#854](https://github.com/rapidsai/rmm/pull/854)) [@robertmaynard](https://github.com/robertmaynard) - Clean up: use std::size_t, include cstddef and aligned.hpp where missing ([#852](https://github.com/rapidsai/rmm/pull/852)) [@harrism](https://github.com/harrism) - tweak the arena mr to reduce fragmentation ([#845](https://github.com/rapidsai/rmm/pull/845)) [@rongou](https://github.com/rongou) - Fix transitive include in cuda_device header 
([#843](https://github.com/rapidsai/rmm/pull/843)) [@wphicks](https://github.com/wphicks) - Refactor cmake style ([#842](https://github.com/rapidsai/rmm/pull/842)) [@robertmaynard](https://github.com/robertmaynard) - add multi stream allocations benchmark. ([#841](https://github.com/rapidsai/rmm/pull/841)) [@cwharris](https://github.com/cwharris) - Enforce default visibility for `get_map`. ([#833](https://github.com/rapidsai/rmm/pull/833)) [@trivialfis](https://github.com/trivialfis) - ENH Replace gpuci_conda_retry with gpuci_mamba_retry ([#823](https://github.com/rapidsai/rmm/pull/823)) [@dillon-cullinan](https://github.com/dillon-cullinan) - Execution policy class ([#816](https://github.com/rapidsai/rmm/pull/816)) [@viclafargue](https://github.com/viclafargue) # RMM 21.08.00 (4 Aug 2021) ## 🚨 Breaking Changes - Refactor `rmm::device_scalar` in terms of `rmm::device_uvector` ([#789](https://github.com/rapidsai/rmm/pull/789)) [@harrism](https://github.com/harrism) - Explicit streams in device_buffer ([#775](https://github.com/rapidsai/rmm/pull/775)) [@harrism](https://github.com/harrism) ## 🐛 Bug Fixes - Pin spdlog in dev conda envs ([#835](https://github.com/rapidsai/rmm/pull/835)) [@trxcllnt](https://github.com/trxcllnt) - Pinning spdlog because recent updates are causing compile issues. ([#831](https://github.com/rapidsai/rmm/pull/831)) [@cjnolet](https://github.com/cjnolet) - update isort to 5.6.4 ([#822](https://github.com/rapidsai/rmm/pull/822)) [@cwharris](https://github.com/cwharris) - fix align_up namespace in aligned_resource_adaptor.hpp ([#820](https://github.com/rapidsai/rmm/pull/820)) [@rongou](https://github.com/rongou) - Run updated isort hook on pxd files ([#812](https://github.com/rapidsai/rmm/pull/812)) [@charlesbluca](https://github.com/charlesbluca) - find_package(RMM) can now be called multiple times safely ([#811](https://github.com/rapidsai/rmm/pull/811)) [@robertmaynard](https://github.com/robertmaynard) - Fix building on CUDA 11.3 ([#809](https://github.com/rapidsai/rmm/pull/809)) [@benfred](https://github.com/benfred) - Remove leading zeros in version_config.hpp ([#793](https://github.com/rapidsai/rmm/pull/793)) [@hcho3](https://github.com/hcho3) ## 📖 Documentation - Fix PoolMemoryResource Python doc examples ([#807](https://github.com/rapidsai/rmm/pull/807)) [@harrism](https://github.com/harrism) - Fix incorrect href in README.md ([#804](https://github.com/rapidsai/rmm/pull/804)) [@benchislett](https://github.com/benchislett) - Update build instruction in README ([#797](https://github.com/rapidsai/rmm/pull/797)) [@hcho3](https://github.com/hcho3) - Document compute sanitizer memcheck support ([#790](https://github.com/rapidsai/rmm/pull/790)) [@harrism](https://github.com/harrism) ## 🚀 New Features - Bump isort, enable Cython package resorting ([#806](https://github.com/rapidsai/rmm/pull/806)) [@charlesbluca](https://github.com/charlesbluca) - Support multiple output sinks in logging_resource_adaptor ([#791](https://github.com/rapidsai/rmm/pull/791)) [@harrism](https://github.com/harrism) - Add Statistics Resource Adaptor and cython bindings to `tracking_resource_adaptor` and `statistics_resource_adaptor` ([#626](https://github.com/rapidsai/rmm/pull/626)) [@mdemoret-nv](https://github.com/mdemoret-nv) ## 🛠️ Improvements - Fix isort in cuda_stream_view.pxd ([#827](https://github.com/rapidsai/rmm/pull/827)) [@harrism](https://github.com/harrism) - Cython extension for rmm::cuda_stream_pool ([#818](https://github.com/rapidsai/rmm/pull/818)) 
[@divyegala](https://github.com/divyegala) - Fix building on cuda 11.4 ([#817](https://github.com/rapidsai/rmm/pull/817)) [@benfred](https://github.com/benfred) - Updating Clang Version to 11.0.0 ([#814](https://github.com/rapidsai/rmm/pull/814)) [@codereport](https://github.com/codereport) - Add spdlog to `rmm-exports` if found by CPM ([#810](https://github.com/rapidsai/rmm/pull/810)) [@trxcllnt](https://github.com/trxcllnt) - Fix `21.08` forward-merge conflicts ([#803](https://github.com/rapidsai/rmm/pull/803)) [@ajschmidt8](https://github.com/ajschmidt8) - RMM now leverages rapids-cmake to reduce CMake boilerplate ([#800](https://github.com/rapidsai/rmm/pull/800)) [@robertmaynard](https://github.com/robertmaynard) - Refactor `rmm::device_scalar` in terms of `rmm::device_uvector` ([#789](https://github.com/rapidsai/rmm/pull/789)) [@harrism](https://github.com/harrism) - make it easier to include rmm in other projects ([#788](https://github.com/rapidsai/rmm/pull/788)) [@rongou](https://github.com/rongou) - Compile Cython with C++17. ([#787](https://github.com/rapidsai/rmm/pull/787)) [@vyasr](https://github.com/vyasr) - Fix Merge Conflicts ([#786](https://github.com/rapidsai/rmm/pull/786)) [@ajschmidt8](https://github.com/ajschmidt8) - Explicit streams in device_buffer ([#775](https://github.com/rapidsai/rmm/pull/775)) [@harrism](https://github.com/harrism) # RMM 21.06.00 (9 Jun 2021) ## 🐛 Bug Fixes - FindThrust now guards against multiple inclusion by different consumers ([#784](https://github.com/rapidsai/rmm/pull/784)) [@robertmaynard](https://github.com/robertmaynard) ## 📖 Documentation - Document synchronization requirements on device_buffer copy ctors ([#772](https://github.com/rapidsai/rmm/pull/772)) [@harrism](https://github.com/harrism) ## 🚀 New Features - add a resource adapter to align on a specified size ([#768](https://github.com/rapidsai/rmm/pull/768)) [@rongou](https://github.com/rongou) ## 🛠️ Improvements - Update environment variable used to determine `cuda_version` ([#785](https://github.com/rapidsai/rmm/pull/785)) [@ajschmidt8](https://github.com/ajschmidt8) - Update `CHANGELOG.md` links for calver ([#781](https://github.com/rapidsai/rmm/pull/781)) [@ajschmidt8](https://github.com/ajschmidt8) - Merge `branch-0.19` into `branch-21.06` ([#779](https://github.com/rapidsai/rmm/pull/779)) [@ajschmidt8](https://github.com/ajschmidt8) - Update docs build script ([#776](https://github.com/rapidsai/rmm/pull/776)) [@ajschmidt8](https://github.com/ajschmidt8) - upgrade spdlog to 1.8.5 ([#658](https://github.com/rapidsai/rmm/pull/658)) [@rongou](https://github.com/rongou) # RMM 0.19.0 (21 Apr 2021) ## 🚨 Breaking Changes - Avoid potential race conditions in device_scalar/device_uvector setters ([#725](https://github.com/rapidsai/rmm/pull/725)) [@harrism](https://github.com/harrism) ## 🐛 Bug Fixes - Fix typo in setup.py ([#746](https://github.com/rapidsai/rmm/pull/746)) [@galipremsagar](https://github.com/galipremsagar) - Revert &quot;Update `rmm` conda recipe pinning of `librmm`&quot; ([#743](https://github.com/rapidsai/rmm/pull/743)) [@raydouglass](https://github.com/raydouglass) - Update `rmm` conda recipe pinning of `librmm` ([#738](https://github.com/rapidsai/rmm/pull/738)) [@mike-wendt](https://github.com/mike-wendt) - RMM doesn&#39;t require the CUDA language to be enabled by consumers ([#737](https://github.com/rapidsai/rmm/pull/737)) [@robertmaynard](https://github.com/robertmaynard) - Fix setup.py to work in a non-conda environment setup 
([#733](https://github.com/rapidsai/rmm/pull/733)) [@galipremsagar](https://github.com/galipremsagar) - Fix auto-detecting GPU architectures ([#727](https://github.com/rapidsai/rmm/pull/727)) [@trxcllnt](https://github.com/trxcllnt) - CMAKE_CUDA_ARCHITECTURES doesn&#39;t change when build-system invokes cmake ([#726](https://github.com/rapidsai/rmm/pull/726)) [@robertmaynard](https://github.com/robertmaynard) - Ship memory_resource_wrappers.hpp as package_data ([#715](https://github.com/rapidsai/rmm/pull/715)) [@shwina](https://github.com/shwina) - Only include SetGPUArchs in the top-level CMakeLists.txt ([#713](https://github.com/rapidsai/rmm/pull/713)) [@trxcllnt](https://github.com/trxcllnt) - Fix unknown CMake command &quot;CPMFindPackage&quot; ([#699](https://github.com/rapidsai/rmm/pull/699)) [@standbyme](https://github.com/standbyme) ## 📖 Documentation - Fix host_memory_resource signature typo ([#728](https://github.com/rapidsai/rmm/pull/728)) [@miguelusque](https://github.com/miguelusque) ## 🚀 New Features - Clarify log file name behaviour in docs ([#722](https://github.com/rapidsai/rmm/pull/722)) [@shwina](https://github.com/shwina) - Add Cython definitions for device_uvector ([#720](https://github.com/rapidsai/rmm/pull/720)) [@shwina](https://github.com/shwina) - Python bindings for `cuda_async_memory_resource` ([#718](https://github.com/rapidsai/rmm/pull/718)) [@shwina](https://github.com/shwina) ## 🛠️ Improvements - Fix cython tests ([#749](https://github.com/rapidsai/rmm/pull/749)) [@galipremsagar](https://github.com/galipremsagar) - Add requirements for rmm ([#739](https://github.com/rapidsai/rmm/pull/739)) [@galipremsagar](https://github.com/galipremsagar) - device_uvector can be used within thrust::optional ([#734](https://github.com/rapidsai/rmm/pull/734)) [@robertmaynard](https://github.com/robertmaynard) - arena_memory_resource optimization: disable tracking allocated blocks by default ([#732](https://github.com/rapidsai/rmm/pull/732)) [@rongou](https://github.com/rongou) - Remove CMAKE_CURRENT_BINARY_DIR path in rmm&#39;s target_include_directories ([#731](https://github.com/rapidsai/rmm/pull/731)) [@trxcllnt](https://github.com/trxcllnt) - set CMAKE_CUDA_ARCHITECTURES to OFF instead of undefined ([#729](https://github.com/rapidsai/rmm/pull/729)) [@trxcllnt](https://github.com/trxcllnt) - Avoid potential race conditions in device_scalar/device_uvector setters ([#725](https://github.com/rapidsai/rmm/pull/725)) [@harrism](https://github.com/harrism) - Update Changelog Link ([#723](https://github.com/rapidsai/rmm/pull/723)) [@ajschmidt8](https://github.com/ajschmidt8) - Prepare Changelog for Automation ([#717](https://github.com/rapidsai/rmm/pull/717)) [@ajschmidt8](https://github.com/ajschmidt8) - Update 0.18 changelog entry ([#716](https://github.com/rapidsai/rmm/pull/716)) [@ajschmidt8](https://github.com/ajschmidt8) - Simplify cmake cuda architectures handling ([#709](https://github.com/rapidsai/rmm/pull/709)) [@robertmaynard](https://github.com/robertmaynard) - Build only `compute` for the newest arch in CMAKE_CUDA_ARCHITECTURES ([#706](https://github.com/rapidsai/rmm/pull/706)) [@robertmaynard](https://github.com/robertmaynard) - ENH Build with Ninja &amp; Pass ccache variables to conda recipe ([#705](https://github.com/rapidsai/rmm/pull/705)) [@dillon-cullinan](https://github.com/dillon-cullinan) - pool_memory_resource optimization: disable tracking allocated blocks by default ([#702](https://github.com/rapidsai/rmm/pull/702)) [@harrism](https://github.com/harrism) 
- Allow the build directory of rmm to be used for `find_package(rmm)` ([#698](https://github.com/rapidsai/rmm/pull/698)) [@robertmaynard](https://github.com/robertmaynard) - Adds a linear accessor to RMM cuda stream pool ([#696](https://github.com/rapidsai/rmm/pull/696)) [@afender](https://github.com/afender) - Fix merge conflicts for #692 ([#694](https://github.com/rapidsai/rmm/pull/694)) [@ajschmidt8](https://github.com/ajschmidt8) - Fix merge conflicts for #692 ([#693](https://github.com/rapidsai/rmm/pull/693)) [@ajschmidt8](https://github.com/ajschmidt8) - Remove C++ Wrappers in `memory_resource_adaptors.hpp` Needed by Cython ([#662](https://github.com/rapidsai/rmm/pull/662)) [@mdemoret-nv](https://github.com/mdemoret-nv) - Improve Cython Lifetime Management by Adding References in `DeviceBuffer` ([#661](https://github.com/rapidsai/rmm/pull/661)) [@mdemoret-nv](https://github.com/mdemoret-nv) - Add support for streams in CuPy allocator ([#654](https://github.com/rapidsai/rmm/pull/654)) [@pentschev](https://github.com/pentschev) # RMM 0.18.0 (24 Feb 2021) ## Breaking Changes 🚨 - Remove DeviceBuffer synchronization on default stream (#650) @pentschev - Add a Stream class that wraps CuPy/Numba/CudaStream (#636) @shwina ## Bug Fixes 🐛 - SetGPUArchs updated to work around a CMake FindCUDAToolkit issue (#695) @robertmaynard - Remove duplicate conda build command (#670) @raydouglass - Update CMakeLists.txt VERSION to 0.18.0 (#665) @trxcllnt - Fix wrong attribute names leading to DEBUG log build issues (#653) @pentschev ## Documentation 📖 - Correct inconsistencies in README and CONTRIBUTING docs (#682) @robertmaynard - Enable tag generation for doxygen (#672) @ajschmidt8 - Document that `managed_memory_resource` does not work with NVIDIA vGPU (#656) @harrism ## New Features 🚀 - Enabling/disabling logging after initialization (#678) @shwina - `cuda_async_memory_resource` built on `cudaMallocAsync` (#676) @harrism - Create labeler.yml (#669) @jolorunyomi - Expose the version string in C++ and Python (#666) @hcho3 - Add a CUDA stream pool (#659) @harrism - Add a Stream class that wraps CuPy/Numba/CudaStream (#636) @shwina ## Improvements 🛠️ - Update stale GHA with exemptions &amp; new labels (#707) @mike-wendt - Add GHA to mark issues/prs as stale/rotten (#700) @Ethyling - Auto-label PRs based on their content (#691) @ajschmidt8 - Prepare Changelog for Automation (#688) @ajschmidt8 - Build.sh use cmake --build to drive build system invocation (#686) @robertmaynard - Fix failed automerge (#683) @harrism - Auto-label PRs based on their content (#681) @jolorunyomi - Build RMM tests/benchmarks with -Wall flag (#674) @trxcllnt - Remove DeviceBuffer synchronization on default stream (#650) @pentschev - Simplify `rmm::exec_policy` and refactor Thrust support (#647) @harrism # RMM 0.17.0 (10 Dec 2020) ## New Features - PR #609 Adds `polymorphic_allocator` and `stream_allocator_adaptor` - PR #596 Add `tracking_memory_resource_adaptor` to help catch memory leaks - PR #608 Add stream wrapper type - PR #632 Add RMM Python docs ## Improvements - PR #604 CMake target cleanup, formatting, linting - PR #599 Make the arena memory resource work better with the producer/consumer mode - PR #612 Drop old Python `device_array*` API - PR #603 Always test both legacy and per-thread default stream - PR #611 Add a note to the contribution guide about requiring 2 C++ reviewers - PR #615 Improve gpuCI Scripts - PR #627 Cleanup gpuCI Scripts - PR #635 Add Python docs build to gpuCI ## Bug Fixes - PR #592 Add `auto_flush` to 
`make_logging_adaptor` - PR #602 Fix `device_scalar` and its tests so that they use the correct CUDA stream - PR #621 Make `rmm::cuda_stream_default` a `constexpr` - PR #625 Use `librmm` conda artifact when building `rmm` conda package - PR #631 Force local conda artifact install - PR #634 Fix conda uploads - PR #639 Fix release script version updater based on CMake reformatting - PR #641 Fix adding "LANGUAGES" after version number in CMake in release script # RMM 0.16.0 (21 Oct 2020) ## New Features - PR #529 Add debug logging and fix multithreaded replay benchmark - PR #560 Remove deprecated `get/set_default_resource` APIs - PR #543 Add an arena-based memory resource - PR #580 Install CMake config with RMM - PR #591 Allow the replay bench to simulate different GPU memory sizes - PR #594 Adding limiting memory resource adaptor ## Improvements - PR #474 Use CMake find_package(CUDAToolkit) - PR #477 Just use `None` for `strides` in `DeviceBuffer` - PR #528 Add maximum_pool_size parameter to reinitialize API - PR #532 Merge free lists in pool_memory_resource to defragment before growing from upstream - PR #537 Add CMake option to disable deprecation warnings - PR #541 Refine CMakeLists.txt to make it easy to import by external projects - PR #538 Upgrade CUB and Thrust to the latest commits - PR #542 Pin conda spdlog versions to 1.7.0 - PR #550 Remove CXX11 ABI handling from CMake - PR #578 Switch thrust to use the NVIDIA/thrust repo - PR #553 CMake cleanup - PR #556 By default, don't create a debug log file unless there are warnings/errors - PR #561 Remove CNMeM and make RMM header-only - PR #565 CMake: Simplify gtest/gbench handling - PR #566 CMake: use CPM for thirdparty dependencies - PR #568 Upgrade googletest to v1.10.0 - PR #572 CMake: prefer locally installed thirdparty packages - PR #579 CMake: handle thrust via target - PR #581 Improve logging documentation - PR #585 Update ci/local/README.md - PR #587 Replaced `move` with `std::move` - PR #588 Use installed C++ RMM in python build - PR #601 Make maximum pool size truly optional (grow until failure) ## Bug Fixes - PR #545 Fix build to support using `clang` as the host compiler - PR #534 Fix `pool_memory_resource` failure when init and max pool sizes are equal - PR #546 Remove CUDA driver linking and correct NVTX macro. - PR #569 Correct `device_scalar::set_value` to pass host value by reference to avoid copying from invalid value - PR #559 Fix `align_down` to only change unaligned values. - PR #577 Fix CMake `LOGGING_LEVEL` issue which caused verbose logging / performance regression. - PR #582 Fix handling of per-thread default stream when not compiled for PTDS - PR #590 Add missing `CODE_OF_CONDUCT.md` - PR #595 Fix pool_mr example in README.md # RMM 0.15.0 (26 Aug 2020) ## New Features - PR #375 Support out-of-band buffers in Python pickling - PR #391 Add `get_default_resource_type` - PR #396 Remove deprecated RMM APIs - PR #425 Add CUDA per-thread default stream support and thread safety to `pool_memory_resource` - PR #436 Always build and test with per-thread default stream enabled in the GPU CI build - PR #444 Add `owning_wrapper` to simplify lifetime management of resources and their upstreams - PR #449 Stream-ordered suballocator base class and per-thread default stream support and thread safety for `fixed_size_memory_resource` - PR #450 Add support for new build process (Project Flash) - PR #457 New `binning_memory_resource` (replaces `hybrid_memory_resource` and `fixed_multisize_memory_resource`). 
- PR #458 Add `get/set_per_device_resource` to better support multi-GPU per process applications - PR #466 Deprecate CNMeM. - PR #489 Move `cudf._cuda` into `rmm._cuda` - PR #504 Generate `gpu.pxd` based on cuda version as a preprocessor step - PR #506 Upload rmm package per version python-cuda combo ## Improvements - PR #428 Add the option to automatically flush memory allocate/free logs - PR #378 Use CMake `FetchContent` to obtain latest release of `cub` and `thrust` - PR #377 A better way to fetch `spdlog` - PR #372 Use CMake `FetchContent` to obtain `cnmem` instead of git submodule - PR #382 Rely on NumPy arrays for out-of-band pickling - PR #386 Add short commit to conda package name - PR #401 Update `get_ipc_handle()` to use cuda driver API - PR #404 Make all memory resources thread safe in Python - PR #402 Install dependencies via rapids-build-env - PR #405 Move doc customization scripts to Jenkins - PR #427 Add DeviceBuffer.release() cdef method - PR #414 Add element-wise access for device_uvector - PR #421 Capture thread id in logging and improve logger testing - PR #426 Added multi-threaded support to replay benchmark - PR #429 Fix debug build and add new CUDA assert utility - PR #435 Update conda upload versions for new supported CUDA/Python - PR #437 Test with `pickle5` (for older Python versions) - PR #443 Remove thread safe adaptor from PoolMemoryResource - PR #445 Make all resource operators/ctors explicit - PR #447 Update Python README with info about DeviceBuffer/MemoryResource and external libraries - PR #456 Minor cleanup: always use rmm/-prefixed includes - PR #461 cmake improvements to be more target-based - PR #468 update past release dates in changelog - PR #486 Document relationship between active CUDA devices and resources - PR #493 Rely on C++ lazy Memory Resource initialization behavior instead of initializing in Python ## Bug Fixes - PR #433 Fix python imports - PR #400 Fix segfault in RANDOM_ALLOCATIONS_BENCH - PR #383 Explicitly require NumPy - PR #398 Fix missing head flag in merge_blocks (pool_memory_resource) and improve block class - PR #403 Mark Cython `memory_resource_wrappers` `extern` as `nogil` - PR #406 Sets Google Benchmark to a fixed version, v1.5.1. 
- PR #434 Fix issue with incorrect docker image being used in local build script - PR #463 Revert cmake change for cnmem header not being added to source directory - PR #464 More completely revert cnmem.h cmake changes - PR #473 Fix initialization logic in pool_memory_resource - PR #479 Fix usage of block printing in pool_memory_resource - PR #490 Allow importing RMM without initializing CUDA driver - PR #484 Fix device_uvector copy constructor compilation error and add test - PR #498 Max pool growth less greedy - PR #500 Use tempfile rather than hardcoded path in `test_rmm_csv_log` - PR #511 Specify `--basetemp` for `py.test` run - PR #509 Fix missing : before __LINE__ in throw string of RMM_CUDA_TRY - PR #510 Fix segfault in pool_memory_resource when a CUDA stream is destroyed - PR #525 Patch Thrust to workaround `CUDA_CUB_RET_IF_FAIL` macro clearing CUDA errors # RMM 0.14.0 (03 Jun 2020) ## New Features - PR #317 Provide External Memory Management Plugin for Numba - PR #362 Add spdlog as a dependency in the conda package - PR #360 Support logging to stdout/stderr - PR #341 Enable logging - PR #343 Add in option to statically link against cudart - PR #364 Added new uninitialized device vector type, `device_uvector` ## Improvements - PR #369 Use CMake `FetchContent` to obtain `spdlog` instead of vendoring - PR #366 Remove installation of extra test dependencies - PR #354 Add CMake option for per-thread default stream - PR #350 Add .clang-format file & format all files - PR #358 Fix typo in `rmm_cupy_allocator` docstring - PR #357 Add Docker 19 support to local gpuci build - PR #365 Make .clang-format consistent with cuGRAPH and cuDF - PR #371 Add docs build script to repository - PR #363 Expose `memory_resources` in Python ## Bug Fixes - PR #373 Fix build.sh - PR #346 Add clearer exception message when RMM_LOG_FILE is unset - PR #347 Mark rmmFinalizeWrapper nogil - PR #348 Fix unintentional use of pool-managed resource. - PR #367 Fix flake8 issues - PR #368 Fix `clang-format` missing comma bug - PR #370 Fix stream and mr use in `device_buffer` methods - PR #379 Remove deprecated calls from synchronization.cpp - PR #381 Remove test_benchmark.cpp from cmakelists - PR #392 SPDLOG matches other header-only acquisition patterns # RMM 0.13.0 (31 Mar 2020) ## New Features - PR #253 Add `frombytes` to convert `bytes`-like to `DeviceBuffer` - PR #252 Add `__sizeof__` method to `DeviceBuffer` - PR #258 Define pickling behavior for `DeviceBuffer` - PR #261 Add `__bytes__` method to `DeviceBuffer` - PR #262 Moved device memory resource files to `mr/device` directory - PR #266 Drop `rmm.auto_device` - PR #268 Add Cython/Python `copy_to_host` and `to_device` - PR #272 Add `host_memory_resource`. - PR #273 Moved device memory resource tests to `device/` directory. - PR #274 Add `copy_from_host` method to `DeviceBuffer` - PR #275 Add `copy_from_device` method to `DeviceBuffer` - PR #283 Add random allocation benchmark. - PR #287 Enabled CUDA CXX11 for unit tests. - PR #292 Revamped RMM exceptions. - PR #297 Use spdlog to implement `logging_resource_adaptor`. - PR #303 Added replay benchmark. - PR #319 Add `thread_safe_resource_adaptor` class. - PR #314 New suballocator memory_resources. - PR #330 Fixed incorrect name of `stream_free_blocks_` debug symbol. - PR #331 Move to C++14 and deprecate legacy APIs. 
## Improvements - PR #246 Type `DeviceBuffer` arguments to `__cinit__` - PR #249 Use `DeviceBuffer` in `device_array` - PR #255 Add standard header to all Cython files - PR #256 Cast through `uintptr_t` to `cudaStream_t` - PR #254 Use `const void*` in `DeviceBuffer.__cinit__` - PR #257 Mark Cython-exposed C++ functions that raise - PR #269 Doc sync behavior in `copy_ptr_to_host` - PR #278 Allocate a `bytes` object to fill up with RMM log data - PR #280 Drop allocation/deallocation of `offset` - PR #282 `DeviceBuffer` use default constructor for size=0 - PR #296 Use CuPy's `UnownedMemory` for RMM-backed allocations - PR #310 Improve `device_buffer` allocation logic. - PR #309 Sync default stream in `DeviceBuffer` constructor - PR #326 Sync only on copy construction - PR #308 Fix typo in README - PR #334 Replace `rmm_allocator` for Thrust allocations - PR #345 Remove stream synchronization from `device_scalar` constructor and `set_value` ## Bug Fixes - PR #298 Remove RMM_CUDA_TRY from cuda_event_timer destructor - PR #299 Fix assert condition blocking debug builds - PR #300 Fix host mr_tests compile error - PR #312 Fix libcudf compilation errors due to explicit defaulted device_buffer constructor # RMM 0.12.0 (04 Feb 2020) ## New Features - PR #218 Add `_DevicePointer` - PR #219 Add method to copy `device_buffer` back to host memory - PR #222 Expose free and total memory in Python interface - PR #235 Allow construction of `DeviceBuffer` with a `stream` ## Improvements - PR #214 Add codeowners - PR #226 Add some tests of the Python `DeviceBuffer` - PR #233 Reuse the same `CUDA_HOME` logic from cuDF - PR #234 Add missing `size_t` in `DeviceBuffer` - PR #239 Cleanup `DeviceBuffer`'s `__cinit__` - PR #242 Special case 0-size `DeviceBuffer` in `tobytes` - PR #244 Explicitly force `DeviceBuffer.size` to an `int` - PR #247 Simplify casting in `tobytes` and other cleanup ## Bug Fixes - PR #215 Catch polymorphic exceptions by reference instead of by value - PR #221 Fix segfault calling rmmGetInfo when uninitialized - PR #225 Avoid invoking Python operations in c_free - PR #230 Fix duplicate symbol issues with `copy_to_host` - PR #232 Move `copy_to_host` doc back to header file # RMM 0.11.0 (11 Dec 2019) ## New Features - PR #106 Added multi-GPU initialization - PR #167 Added value setter to `device_scalar` - PR #163 Add Cython bindings to `device_buffer` - PR #177 Add `__cuda_array_interface__` to `DeviceBuffer` - PR #198 Add `rmm.rmm_cupy_allocator()` ## Improvements - PR #161 Use `std::atexit` to finalize RMM after Python interpreter shutdown - PR #165 Align memory resource allocation sizes to 8-byte - PR #171 Change public API of RMM to only expose `reinitialize(...)` - PR #175 Drop `cython` from run requirements - PR #169 Explicit stream argument for device_buffer methods - PR #186 Add nbytes and len to DeviceBuffer - PR #188 Require kwargs in `DeviceBuffer`'s constructor - PR #194 Drop unused imports from `device_buffer.pyx` - PR #196 Remove unused CUDA conda labels - PR #200 Simplify DeviceBuffer methods ## Bug Fixes - PR #174 Make `device_buffer` default ctor explicit to work around type_dispatcher issue in libcudf. 
- PR #170 Always build librmm and rmm, but conditionally upload based on CUDA / Python version - PR #182 Prefix `DeviceBuffer`'s C functions - PR #189 Drop `__reduce__` from `DeviceBuffer` - PR #193 Remove thrown exception from `rmm_allocator::deallocate` - PR #224 Slice the CSV log before converting to bytes # RMM 0.10.0 (16 Oct 2019) ## New Features - PR #99 Added `device_buffer` class - PR #133 Added `device_scalar` class ## Improvements - PR #123 Remove driver install from ci scripts - PR #131 Use YYMMDD tag in nightly build - PR #137 Replace CFFI python bindings with Cython - PR #127 Use Memory Resource classes for allocations ## Bug Fixes - PR #107 Fix local build generated file ownerships - PR #110 Fix Skip Test Functionality - PR #125 Fixed order of private variables in LogIt - PR #139 Expose `_make_finalizer` python API needed by cuDF - PR #142 Fix ignored exceptions in Cython - PR #146 Fix rmmFinalize() not freeing memory pools - PR #149 Force finalization of RMM objects before RMM is finalized (Python) - PR #154 Set ptr to 0 on rmm::alloc error - PR #157 Check if initialized before freeing for Numba finalizer and use `weakref` instead of `atexit` # RMM 0.9.0 (21 Aug 2019) ## New Features - PR #96 Added `device_memory_resource` for beginning of overhaul of RMM design - PR #103 Add and use unified build script ## Improvements - PR #111 Streamline CUDA_REL environment variable - PR #113 Handle ucp.BufferRegion objects in auto_device ## Bug Fixes ... # RMM 0.8.0 (27 June 2019) ## New Features - PR #95 Add skip test functionality to build.sh ## Improvements ... ## Bug Fixes - PR #92 Update docs version # RMM 0.7.0 (10 May 2019) ## New Features - PR #67 Add random_allocate microbenchmark in tests/performance - PR #70 Create conda environments and conda recipes - PR #77 Add local build script to mimic gpuCI - PR #80 Add build script for docs ## Improvements - PR #76 Add cudatoolkit conda dependency - PR #84 Use latest release version in update-version CI script - PR #90 Avoid using c++14 auto return type for thrust_rmm_allocator.h ## Bug Fixes - PR #68 Fix signed/unsigned mismatch in random_allocate benchmark - PR #74 Fix rmm conda recipe librmm version pinning - PR #72 Remove unnecessary _BSD_SOURCE define in random_allocate.cpp # RMM 0.6.0 (18 Mar 2019) ## New Features - PR #43 Add gpuCI build & test scripts - PR #44 Added API to query whether RMM is initialized and with what options. - PR #60 Default to CXX11_ABI=ON ## Improvements ## Bug Fixes - PR #58 Eliminate unreliable check for change in available memory in test - PR #49 Fix pep8 style errors detected by flake8 # RMM 0.5.0 (28 Jan 2019) ## New Features - PR #2 Added CUDA Managed Memory allocation mode ## Improvements - PR #12 Enable building RMM as a submodule - PR #13 CMake: Added CXX11ABI option and removed Travis references - PR #16 CMake: Added PARALLEL_LEVEL environment variable handling for GTest build parallelism (matches cuDF) - PR #17 Update README with v0.5 changes including Managed Memory support ## Bug Fixes - PR #10 Change cnmem submodule URL to use https - PR #15 Temporarily disable hanging AllocateTB test for managed memory - PR #28 Fix invalid reference to local stack variable in `rmm::exec_policy` # RMM 0.4.0 (20 Dec 2018) ## New Features - PR #1 Spun off RMM from cuDF into its own repository. 
## Improvements - CUDF PR #472 RMM: Created centralized rmm::device_vector alias and rmm::exec_policy - CUDF PR #465 Added templated C++ API for RMM to avoid explicit cast to `void**` ## Bug Fixes RMM was initially implemented as part of cuDF, so we include the relevant changelog history below. # cuDF 0.3.0 (23 Nov 2018) ## New Features - PR #336 CSV Reader string support ## Improvements - CUDF PR #333 Add Rapids Memory Manager documentation - CUDF PR #321 Rapids Memory Manager adds file/line location logging and convenience macros ## Bug Fixes # cuDF 0.2.0 and cuDF 0.1.0 These were initial releases of cuDF based on previously separate pyGDF and libGDF libraries. RMM was initially implemented as part of libGDF.
0
rapidsai_public_repos
rapidsai_public_repos/rmm/build.sh
#!/bin/bash # Copyright (c) 2019, NVIDIA CORPORATION. # rmm build script # This script is used to build the component(s) in this repo from # source, and can be called with various options to customize the # build as needed (see the help output for details) # Abort script on first error set -e NUMARGS=$# ARGS=$* # NOTE: ensure all dir changes are relative to the location of this # script, and that this script resides in the repo dir! REPODIR=$(cd $(dirname $0); pwd) VALIDARGS="clean librmm rmm -v -g -n -s --ptds -h tests benchmarks" HELP="$0 [clean] [librmm] [rmm] [-v] [-g] [-n] [-s] [--ptds] [--cmake-args=\"<args>\"] [-h] clean - remove all existing build artifacts and configuration (start over) librmm - build and install the librmm C++ code rmm - build and install the rmm Python package benchmarks - build benchmarks tests - build tests -v - verbose build mode -g - build for debug -n - no install step (does not affect Python) -s - statically link against cudart --ptds - enable per-thread default stream --cmake-args=\\\"<args>\\\" - pass arbitrary list of CMake configuration options (escape all quotes in argument) -h - print this text default action (no args) is to build and install 'librmm' and 'rmm' targets " LIBRMM_BUILD_DIR=${LIBRMM_BUILD_DIR:=${REPODIR}/build} RMM_BUILD_DIR="${REPODIR}/python/build ${REPODIR}/python/_skbuild" BUILD_DIRS="${LIBRMM_BUILD_DIR} ${RMM_BUILD_DIR}" # Set defaults for vars modified by flags to this script VERBOSE_FLAG="" BUILD_TYPE=Release INSTALL_TARGET=install BUILD_BENCHMARKS=OFF BUILD_TESTS=OFF CUDA_STATIC_RUNTIME=OFF PER_THREAD_DEFAULT_STREAM=OFF RAN_CMAKE=0 # Set defaults for vars that may not have been defined externally # If INSTALL_PREFIX is not set, check PREFIX, then check # CONDA_PREFIX, then fall back to install inside of $LIBRMM_BUILD_DIR INSTALL_PREFIX=${INSTALL_PREFIX:=${PREFIX:=${CONDA_PREFIX:=$LIBRMM_BUILD_DIR/install}}} export PARALLEL_LEVEL=${PARALLEL_LEVEL:-4} function hasArg { (( NUMARGS != 0 )) && (echo " ${ARGS} " | grep -q " $1 ") } function cmakeArgs { # Check for multiple cmake args options if [[ $(echo $ARGS | { grep -Eo "\-\-cmake\-args" || true; } | wc -l ) -gt 1 ]]; then echo "Multiple --cmake-args options were provided, please provide only one: ${ARGS}" exit 1 fi # Check for cmake args option if [[ -n $(echo $ARGS | { grep -E "\-\-cmake\-args" || true; } ) ]]; then # There are possible weird edge cases that may cause this regex filter to output nothing and fail silently # the true pipe will catch any weird edge cases that may happen and will cause the program to fall back # on the invalid option error EXTRA_CMAKE_ARGS=$(echo $ARGS | { grep -Eo "\-\-cmake\-args=\".+\"" || true; }) if [[ -n ${EXTRA_CMAKE_ARGS} ]]; then # Remove the full EXTRA_CMAKE_ARGS argument from list of args so that it passes validArgs function ARGS=${ARGS//$EXTRA_CMAKE_ARGS/} # Filter the full argument down to just the extra string that will be added to cmake call EXTRA_CMAKE_ARGS=$(echo $EXTRA_CMAKE_ARGS | grep -Eo "\".+\"" | sed -e 's/^"//' -e 's/"$//') fi fi } # Runs cmake if it has not been run already for build directory # LIBRMM_BUILD_DIR function ensureCMakeRan { mkdir -p "${LIBRMM_BUILD_DIR}" if (( RAN_CMAKE == 0 )); then echo "Executing cmake for librmm..." cmake -B "${LIBRMM_BUILD_DIR}" -S . 
\ -DCMAKE_INSTALL_PREFIX="${INSTALL_PREFIX}" \ -DCUDA_STATIC_RUNTIME="${CUDA_STATIC_RUNTIME}" \ -DPER_THREAD_DEFAULT_STREAM="${PER_THREAD_DEFAULT_STREAM}" \ -DCMAKE_BUILD_TYPE=${BUILD_TYPE} \ -DBUILD_TESTS=${BUILD_TESTS} \ -DBUILD_BENCHMARKS=${BUILD_BENCHMARKS} \ ${EXTRA_CMAKE_ARGS} RAN_CMAKE=1 fi } if hasArg -h || hasArg --help; then echo "${HELP}" exit 0 fi # Check for valid usage if (( ${NUMARGS} != 0 )); then # Check for cmake args cmakeArgs for a in ${ARGS}; do if ! (echo " ${VALIDARGS} " | grep -q " ${a} "); then echo "Invalid option or formatting, check --help: ${a}" exit 1 fi done fi # Process flags if hasArg -v; then VERBOSE_FLAG=-v set -x fi if hasArg -g; then BUILD_TYPE=Debug fi if hasArg -n; then INSTALL_TARGET="" fi if hasArg benchmarks; then BUILD_BENCHMARKS=ON fi if hasArg tests; then BUILD_TESTS=ON fi if hasArg -s; then CUDA_STATIC_RUNTIME=ON fi if hasArg --ptds; then PER_THREAD_DEFAULT_STREAM=ON fi # Append `-DFIND_RMM_CPP=ON` to CMAKE_ARGS unless a user specified the option. SKBUILD_EXTRA_CMAKE_ARGS="${EXTRA_CMAKE_ARGS}" if [[ "${EXTRA_CMAKE_ARGS}" != *"DFIND_RMM_CPP"* ]]; then SKBUILD_EXTRA_CMAKE_ARGS="${SKBUILD_EXTRA_CMAKE_ARGS} -DFIND_RMM_CPP=ON" fi # If clean given, run it prior to any other steps if hasArg clean; then # If the dirs to clean are mounted dirs in a container, the # contents should be removed but the mounted dirs will remain. # The find removes all contents but leaves the dirs, the rmdir # attempts to remove the dirs but can fail safely. for bd in ${BUILD_DIRS}; do if [ -d "${bd}" ]; then find "${bd}" -mindepth 1 -delete rmdir "${bd}" || true fi done fi ################################################################################ # Configure, build, and install librmm if (( NUMARGS == 0 )) || hasArg librmm; then ensureCMakeRan echo "building librmm..." cmake --build "${LIBRMM_BUILD_DIR}" -j${PARALLEL_LEVEL} ${VERBOSE_FLAG} if [[ ${INSTALL_TARGET} != "" ]]; then echo "installing librmm..." cmake --build "${LIBRMM_BUILD_DIR}" --target install -v ${VERBOSE_FLAG} fi fi # Build and install the rmm Python package if (( NUMARGS == 0 )) || hasArg rmm; then echo "building and installing rmm..." SKBUILD_CONFIGURE_OPTIONS="${SKBUILD_EXTRA_CMAKE_ARGS}" python -m pip install --no-build-isolation --no-deps ${REPODIR}/python fi
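As a usage sketch based solely on the help text embedded in the script above: `./build.sh clean librmm rmm tests -v` removes previous build artifacts, builds and installs the C++ library and Python package, builds the tests, and enables verbose output, while extra CMake options can be forwarded with `--cmake-args="<args>"`. The accepted flags are exactly those listed in `VALIDARGS`.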
0
rapidsai_public_repos
rapidsai_public_repos/rmm/.clang-tidy
--- Checks: 'clang-diagnostic-*, clang-analyzer-*, cppcoreguidelines-*, modernize-*, bugprone-*, performance-*, readability-*, llvm-*, -cppcoreguidelines-macro-usage, -llvm-header-guard, -modernize-use-trailing-return-type, -readability-named-parameter' WarningsAsErrors: '' HeaderFilterRegex: '' AnalyzeTemporaryDtors: false FormatStyle: none CheckOptions: - key: cert-dcl16-c.NewSuffixes value: 'L;LL;LU;LLU' - key: cert-oop54-cpp.WarnOnlyIfThisHasSuspiciousField value: '0' - key: cert-str34-c.DiagnoseSignedUnsignedCharComparisons value: '0' - key: cppcoreguidelines-explicit-virtual-functions.IgnoreDestructors value: '1' - key: cppcoreguidelines-non-private-member-variables-in-classes.IgnoreClassesWithAllMemberVariablesBeingPublic value: '1' - key: google-readability-braces-around-statements.ShortStatementLines value: '1' - key: google-readability-function-size.StatementThreshold value: '800' - key: google-readability-namespace-comments.ShortNamespaceLines value: '10' - key: google-readability-namespace-comments.SpacesBeforeComments value: '2' - key: llvm-else-after-return.WarnOnConditionVariables value: '0' - key: llvm-else-after-return.WarnOnUnfixable value: '0' - key: llvm-qualified-auto.AddConstToQualified value: '0' - key: modernize-loop-convert.MaxCopySize value: '16' - key: modernize-loop-convert.MinConfidence value: reasonable - key: modernize-loop-convert.NamingStyle value: CamelCase - key: modernize-pass-by-value.IncludeStyle value: llvm - key: modernize-replace-auto-ptr.IncludeStyle value: llvm - key: modernize-use-nullptr.NullMacros value: 'NULL' - key: readability-identifier-length.IgnoredParameterNames value: 'mr|os' - key: readability-identifier-length.IgnoredVariableNames value: 'mr|_' #- key: readability-function-cognitive-complexity.IgnoreMacros # value: '1' - key: bugprone-easily-swappable-parameters.IgnoredParameterNames value: 'alignment' - key: cppcoreguidelines-avoid-magic-numbers.IgnorePowersOf2IntegerValues value: '1' - key: readability-magic-numbers.IgnorePowersOf2IntegerValues value: '1' - key: cppcoreguidelines-avoid-do-while.IgnoreMacros value: 'true' ...
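This check set is picked up automatically when clang-tidy is run from the repository. As a rough sketch (assuming a CMake build directory named `build` that contains a compilation database, which this file itself does not specify), something like `run-clang-tidy -p build` or `clang-tidy -p build <source file>` would apply the checks configured above.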
0
rapidsai_public_repos
rapidsai_public_repos/rmm/dependencies.yaml
# Dependency list for https://github.com/rapidsai/dependency-file-generator files: all: output: conda matrix: cuda: ["11.8", "12.0"] arch: [x86_64] includes: - build - checks - cudatoolkit - develop - docs - run - test_python test_python: output: none includes: - cudatoolkit - py_version - test_python test_cpp: output: none includes: - cudatoolkit - test_cpp checks: output: none includes: - checks - py_version docs: output: none includes: - cudatoolkit - docs - py_version py_build: output: pyproject extras: table: build-system includes: - build py_run: output: pyproject extras: table: project includes: - run py_optional_test: output: pyproject extras: table: project.optional-dependencies key: test includes: - test_python channels: - rapidsai - conda-forge dependencies: build: common: - output_types: [conda, requirements, pyproject] packages: - &cmake_ver cmake>=3.26.4 - cython>=3.0.0 - ninja - scikit-build>=0.13.1 - tomli - output_types: conda packages: - c-compiler - cxx-compiler - fmt>=9.1.0,<10 - spdlog>=1.11.0,<1.12 - python>=3.9,<3.11 - output_types: pyproject packages: - wheel - setuptools>=61.0.0 specific: - output_types: conda matrices: - matrix: arch: x86_64 packages: - gcc_linux-64=11.* - sysroot_linux-64==2.17 - matrix: arch: aarch64 packages: - gcc_linux-aarch64=11.* - sysroot_linux-aarch64==2.17 - output_types: conda matrices: - matrix: arch: x86_64 cuda: "11.8" packages: - nvcc_linux-64=11.8 - matrix: arch: aarch64 cuda: "11.8" packages: - nvcc_linux-aarch64=11.8 - matrix: cuda: "12.0" packages: - cuda-version=12.0 - cuda-nvcc - output_types: [conda, requirements, pyproject] matrices: - matrix: cuda: "12.0" packages: - &cuda_python12 cuda-python>=12.0,<13.0a0 - matrix: # All CUDA 11 versions packages: - &cuda_python11 cuda-python>=11.7.1,<12.0a0 checks: common: - output_types: [conda, requirements] packages: - pre-commit # pre-commit requires identify minimum version 1.0, but clang-format requires textproto support and that was # added in 2.5.20, so we need to call out the minimum version needed for our plugins - identify>=2.5.20 - output_types: conda packages: - &doxygen doxygen=1.9.1 cudatoolkit: specific: - output_types: conda matrices: - matrix: cuda: "11.2" packages: - cuda-version=11.2 - cudatoolkit - matrix: cuda: "11.4" packages: - cuda-version=11.4 - cudatoolkit - matrix: cuda: "11.5" packages: - cuda-version=11.5 - cudatoolkit - matrix: cuda: "11.6" packages: - cuda-version=11.6 - cudatoolkit - matrix: cuda: "11.8" packages: - cuda-version=11.8 - cudatoolkit - matrix: cuda: "12.0" packages: - cuda-version=12.0 develop: common: - output_types: [conda, requirements] packages: - gcovr>=5.0 - output_types: conda packages: - clang==16.0.6 - clang-tools==16.0.6 docs: common: - output_types: conda packages: - breathe - *doxygen - graphviz - ipython - make - nbsphinx - numpydoc - sphinx - sphinx_rtd_theme - sphinx-copybutton - sphinx-markdown-tables py_version: specific: - output_types: conda matrices: - matrix: py: "3.9" packages: - python=3.9 - matrix: py: "3.10" packages: - python=3.10 run: common: - output_types: [conda, requirements, pyproject] packages: - numba>=0.57 - numpy>=1.21 specific: - output_types: [conda, requirements, pyproject] matrices: - matrix: cuda: "12.0" packages: - *cuda_python12 - matrix: # All CUDA 11 versions packages: - *cuda_python11 test_cpp: common: - output_types: conda packages: - *cmake_ver test_python: common: - output_types: [conda, requirements, pyproject] packages: - pytest - pytest-cov - output_types: conda packages: # Needed for numba 
in tests - cuda-nvcc
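This file is input for the `rapids-dependency-file-generator` tool linked in its header comment; as a non-authoritative sketch, running that command from the repository root regenerates the conda, requirements, and pyproject outputs declared under `files:`. Exact CLI flags vary by tool version, so consult the linked repository for the current invocation.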
0
rapidsai_public_repos
rapidsai_public_repos/rmm/CONTRIBUTING.md
# Contributing to RMM If you are interested in contributing to RMM, your contributions will fall into three categories: 1. You want to report a bug, feature request, or documentation issue - File an [issue](https://github.com/rapidsai/rmm/issues/new/choose) describing what you encountered or what you want to see changed. - The RAPIDS team will evaluate the issues and triage them, scheduling them for a release. If you believe the issue needs priority attention comment on the issue to notify the team. 2. You want to propose a new Feature and implement it - Post about your intended feature, and we shall discuss the design and implementation. - Once we agree that the plan looks good, go ahead and implement it, using the [code contributions](#code-contributions) guide below. 3. You want to implement a feature or bug-fix for an outstanding issue - Follow the [code contributions](#code-contributions) guide below. - If you need more context on a particular issue, please ask and we shall provide. ## Code contributions ### Your first issue 1. Read the project's [README.md](https://github.com/rapidsai/rmm/blob/main/README.md) to learn how to setup the development environment 2. Find an issue to work on. The best way is to look for the [good first issue](https://github.com/rapidsai/rmm/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or [help wanted](https://github.com/rapidsai/rmm/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) labels 3. Comment on the issue saying you are going to work on it 4. Code! Make sure to update unit tests! 5. When done, [create your pull request](https://github.com/rapidsai/rmm/compare) 6. Verify that CI passes all [status checks](https://help.github.com/articles/about-status-checks/). Fix if needed 7. Wait for other developers to review your code and update code as needed 8. Once reviewed and approved, a RAPIDS developer will merge your pull request. Note that for C++ code, two reviewers are required. To set up a development environment, follow the steps in the [README](https://github.com/rapidsai/rmm/blob/main/README.md) for cloning the repository and creating the conda environment. Once the environment is created, you can build and install RMM using ```bash $ python setup.py develop ``` This command will build the RMM Python library inside the clone and automatically make it importable when running Python anywhere on your machine. Remember, if you are unsure about anything, don't hesitate to comment on issues and ask for clarifications! ### Seasoned developers Once you have gotten your feet wet and are more comfortable with the code, you can look at the prioritized issues of our next release in our [project boards](https://github.com/rapidsai/rmm/projects). > **Pro Tip:** Always look at the release board with the highest number for issues to work on. This is where RAPIDS developers also focus their efforts. Look at the unassigned issues, and find an issue you are comfortable with contributing to. Start with _Step 3_ from above, commenting on the issue to let others know you are working on it. If you have any questions related to the implementation of the issue, ask them in the issue instead of the PR. ## Attribution Portions adopted from https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md
0
rapidsai_public_repos
rapidsai_public_repos/rmm/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
rapidsai_public_repos
rapidsai_public_repos/rmm/VERSION
24.02.00
0
rapidsai_public_repos
rapidsai_public_repos/rmm/.clang-format
--- # Refer to the following link for the explanation of each params: # http://releases.llvm.org/8.0.0/tools/clang/docs/ClangFormatStyleOptions.html Language: Cpp # BasedOnStyle: Google AccessModifierOffset: -1 AlignAfterOpenBracket: Align AlignConsecutiveAssignments: true AlignConsecutiveBitFields: true AlignConsecutiveDeclarations: false AlignConsecutiveMacros: true AlignEscapedNewlines: Left AlignOperands: true AlignTrailingComments: true AllowAllArgumentsOnNextLine: true AllowAllConstructorInitializersOnNextLine: true AllowAllParametersOfDeclarationOnNextLine: true AllowShortBlocksOnASingleLine: true AllowShortCaseLabelsOnASingleLine: true AllowShortEnumsOnASingleLine: true AllowShortFunctionsOnASingleLine: All AllowShortIfStatementsOnASingleLine: true AllowShortLambdasOnASingleLine: true AllowShortLoopsOnASingleLine: false # This is deprecated AlwaysBreakAfterDefinitionReturnType: None AlwaysBreakAfterReturnType: None AlwaysBreakBeforeMultilineStrings: true AlwaysBreakTemplateDeclarations: Yes BinPackArguments: false BinPackParameters: false BraceWrapping: AfterClass: false AfterControlStatement: false AfterEnum: false AfterFunction: false AfterNamespace: false AfterObjCDeclaration: false AfterStruct: false AfterUnion: false AfterExternBlock: false BeforeCatch: false BeforeElse: false IndentBraces: false # disabling the below splits, else, they'll just add to the vertical length of source files! SplitEmptyFunction: false SplitEmptyRecord: false SplitEmptyNamespace: false BreakAfterJavaFieldAnnotations: false BreakBeforeBinaryOperators: None BreakBeforeBraces: WebKit BreakBeforeInheritanceComma: false BreakBeforeTernaryOperators: true BreakConstructorInitializersBeforeComma: false BreakConstructorInitializers: BeforeColon BreakInheritanceList: BeforeColon BreakStringLiterals: true ColumnLimit: 100 CommentPragmas: '^ IWYU pragma:' CompactNamespaces: false ConstructorInitializerAllOnOneLineOrOnePerLine: true # Kept the below 2 to be the same as `IndentWidth` to keep everything uniform ConstructorInitializerIndentWidth: 2 ContinuationIndentWidth: 2 Cpp11BracedListStyle: true DerivePointerAlignment: false DisableFormat: false ExperimentalAutoDetectBinPacking: false FixNamespaceComments: true ForEachMacros: - foreach - Q_FOREACH - BOOST_FOREACH IncludeBlocks: Preserve IncludeIsMainRegex: '([-_](test|unittest))?$' IndentCaseLabels: true IndentPPDirectives: None IndentWidth: 2 IndentWrappedFunctionNames: false JavaScriptQuotes: Leave JavaScriptWrapImports: true KeepEmptyLinesAtTheStartOfBlocks: false MacroBlockBegin: '' MacroBlockEnd: '' MaxEmptyLinesToKeep: 1 NamespaceIndentation: None ObjCBinPackProtocolList: Never ObjCBlockIndentWidth: 2 ObjCSpaceAfterProperty: false ObjCSpaceBeforeProtocolList: true PenaltyBreakAssignment: 2 PenaltyBreakBeforeFirstCallParameter: 1 PenaltyBreakComment: 300 PenaltyBreakFirstLessLess: 120 PenaltyBreakString: 1000 PenaltyBreakTemplateDeclaration: 10 PenaltyExcessCharacter: 1000000 PenaltyReturnTypeOnItsOwnLine: 200 PointerAlignment: Left RawStringFormats: - Language: Cpp Delimiters: - cc - CC - cpp - Cpp - CPP - 'c++' - 'C++' CanonicalDelimiter: '' - Language: TextProto Delimiters: - pb - PB - proto - PROTO EnclosingFunctions: - EqualsProto - EquivToProto - PARSE_PARTIAL_TEXT_PROTO - PARSE_TEST_PROTO - PARSE_TEXT_PROTO - ParseTextOrDie - ParseTextProtoOrDie CanonicalDelimiter: '' BasedOnStyle: google # Enabling comment reflow causes doxygen comments to be messed up in their formats! 
ReflowComments: true SortIncludes: true SortUsingDeclarations: true SpaceAfterCStyleCast: false SpaceAfterTemplateKeyword: true SpaceBeforeAssignmentOperators: true SpaceBeforeCpp11BracedList: false SpaceBeforeCtorInitializerColon: true SpaceBeforeInheritanceColon: true SpaceBeforeParens: ControlStatements SpaceBeforeRangeBasedForLoopColon: true SpaceBeforeSquareBrackets: false SpaceInEmptyBlock: false SpaceInEmptyParentheses: false SpacesBeforeTrailingComments: 2 SpacesInAngles: false SpacesInConditionalStatement: false SpacesInContainerLiterals: true SpacesInCStyleCastParentheses: false SpacesInParentheses: false SpacesInSquareBrackets: false Standard: c++17 StatementMacros: - Q_UNUSED - QT_REQUIRE_VERSION # Be consistent with indent-width, even for people who use tab for indentation! TabWidth: 2 UseTab: Never
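In practice this style is applied with clang-format, for example via `clang-format -i` on modified C++ sources or through the repository's pre-commit hooks (see the checks section of `dependencies.yaml` above); the 100-character `ColumnLimit` and 2-space `IndentWidth` defined here are what formatted code is expected to follow.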
0
rapidsai_public_repos
rapidsai_public_repos/rmm/print_env.sh
#!/usr/bin/env bash # Reports relevant environment information useful for diagnosing and # debugging RMM issues. # Usage: # "./print_env.sh" - prints to stdout # "./print_env.sh > env.txt" - prints to file "env.txt" echo "**git***" git log --decorate -n 1 echo echo "***OS Information***" cat /etc/*-release uname -a echo echo "***GPU Information***" nvidia-smi echo echo "***CPU***" lscpu echo echo "***CMake***" which cmake && cmake --version echo echo "***g++***" which g++ && g++ --version echo echo "***nvcc***" which nvcc && nvcc --version echo echo "***Python***" which python && python --version echo echo "***Environment Variables***" printf '%-32s: %s\n' PATH $PATH printf '%-32s: %s\n' LD_LIBRARY_PATH $LD_LIBRARY_PATH printf '%-32s: %s\n' NUMBAPRO_NVVM $NUMBAPRO_NVVM printf '%-32s: %s\n' NUMBAPRO_LIBDEVICE $NUMBAPRO_LIBDEVICE printf '%-32s: %s\n' CONDA_PREFIX $CONDA_PREFIX printf '%-32s: %s\n' PYTHON_PATH $PYTHON_PATH echo # Print conda packages if conda exists if type "conda" > /dev/null; then echo '***conda packages***' which conda && conda list echo # Print pip packages if pip exists elif type "pip" > /dev/null; then echo "***pip packages***" which pip && pip list echo fi
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/include/doxygen_groups.h
/* * Copyright (c) 2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** * @file * @brief Doxygen group definitions */ // This header is only processed by doxygen and does // not need to be included in any source file. // Below are the main groups that doxygen uses to build // the Modules page in the specified order. // // To add a new API to an existing group, just use the // @ingroup tag to the API's doxygen comment. // Add a new group by first specifying in the hierarchy below. /** * @defgroup memory_resources Memory Resources * @{ * @defgroup device_memory_resources Device Memory Resources * @defgroup host_memory_resources Host Memory Resources * @defgroup device_resource_adaptors Device Resource Adaptors * @} * @defgroup cuda_device_management CUDA Device Management * @defgroup cuda_streams CUDA Streams * @defgroup data_containers Data Containers * @defgroup errors Errors * @defgroup logging Logging * @defgroup thrust_integrations Thrust Integrations */
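As a minimal illustration of the `@ingroup` mechanism described above (the class below is hypothetical and exists only to show where the tag goes):

```cpp
/**
 * @brief Hypothetical container shown only to illustrate Doxygen group tagging.
 *
 * The @ingroup tag places this API on the "Data Containers" Modules page
 * defined by the group hierarchy above.
 *
 * @ingroup data_containers
 */
class example_container {
  // ...
};
```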
0
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/device_buffer.hpp
/* * Copyright (c) 2019-2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> #include <cuda_runtime_api.h> #include <cassert> #include <cstddef> #include <stdexcept> #include <utility> #include <cuda/memory_resource> namespace rmm { /** * @addtogroup data_containers * @{ * @file */ /** * @brief RAII construct for device memory allocation * * This class allocates untyped and *uninitialized* device memory using a * `device_memory_resource`. If not explicitly specified, the memory resource * returned from `get_current_device_resource()` is used. * * @note Unlike `std::vector` or `thrust::device_vector`, the device memory * allocated by a `device_buffer` is uninitialized. Therefore, it is undefined * behavior to read the contents of `data()` before first initializing it. * * Examples: * ``` * //Allocates at least 100 bytes of device memory using the default memory * //resource and default stream. * device_buffer buff(100); * * // allocates at least 100 bytes using the custom memory resource and * // specified stream * custom_memory_resource mr; * cuda_stream_view stream = cuda_stream_view{}; * device_buffer custom_buff(100, stream, &mr); * * // deep copies `buff` into a new device buffer using the specified stream * device_buffer buff_copy(buff, stream); * * // moves the memory in `from_buff` to `to_buff`. Deallocates previously allocated * // to_buff memory on `to_buff.stream()`. * device_buffer to_buff(std::move(from_buff)); * * // deep copies `buff` into a new device buffer using the specified stream * device_buffer buff_copy(buff, stream); * * // shallow copies `buff` into a new device_buffer, `buff` is now empty * device_buffer buff_move(std::move(buff)); * * // Default construction. Buffer is empty * device_buffer buff_default{}; * * // If the requested size is larger than the current size, resizes allocation to the new size and * // deep copies any previous contents. Otherwise, simply updates the value of `size()` to the * // newly requested size without any allocations or copies. Uses the specified stream. * buff_default.resize(100, stream); *``` */ class device_buffer { using async_resource_ref = cuda::mr::async_resource_ref<cuda::mr::device_accessible>; public: // The copy constructor and copy assignment operator without a stream are deleted because they // provide no way to specify an explicit stream device_buffer(device_buffer const& other) = delete; device_buffer& operator=(device_buffer const& other) = delete; /** * @brief Default constructor creates an empty `device_buffer` */ // Note: we cannot use `device_buffer() = default;` because nvcc implicitly adds // `__host__ __device__` specifiers to the defaulted constructor when it is called within the // context of both host and device functions. 
Specifically, the `cudf::type_dispatcher` is a host- // device function. This causes warnings/errors because this ctor invokes host-only functions. device_buffer() : _mr{rmm::mr::get_current_device_resource()} {} /** * @brief Constructs a new device buffer of `size` uninitialized bytes * * @throws rmm::bad_alloc If allocation fails. * * @param size Size in bytes to allocate in device memory. * @param stream CUDA stream on which memory may be allocated if the memory * resource supports streams. * @param mr Memory resource to use for the device memory allocation. */ explicit device_buffer(std::size_t size, cuda_stream_view stream, async_resource_ref mr = mr::get_current_device_resource()) : _stream{stream}, _mr{mr} { cuda_set_device_raii dev{_device}; allocate_async(size); } /** * @brief Construct a new device buffer by copying from a raw pointer to an existing host or * device memory allocation. * * @note This function does not synchronize `stream`. `source_data` is copied on `stream`, so the * caller is responsible for correct synchronization to ensure that `source_data` is valid when * the copy occurs. This includes destroying `source_data` in stream order after this function is * called, or synchronizing or waiting on `stream` after this function returns as necessary. * * @throws rmm::bad_alloc If creating the new allocation fails. * @throws rmm::logic_error If `source_data` is null, and `size != 0`. * @throws rmm::cuda_error if copying from the device memory fails. * * @param source_data Pointer to the host or device memory to copy from. * @param size Size in bytes to copy. * @param stream CUDA stream on which memory may be allocated if the memory * resource supports streams. * @param mr Memory resource to use for the device memory allocation */ device_buffer(void const* source_data, std::size_t size, cuda_stream_view stream, async_resource_ref mr = rmm::mr::get_current_device_resource()) : _stream{stream}, _mr{mr} { cuda_set_device_raii dev{_device}; allocate_async(size); copy_async(source_data, size); } /** * @brief Construct a new `device_buffer` by deep copying the contents of * another `device_buffer`, optionally using the specified stream and memory * resource. * * @note Only copies `other.size()` bytes from `other`, i.e., if *`other.size() != other.capacity()`, then the size and capacity of the newly * constructed `device_buffer` will be equal to `other.size()`. * * @note This function does not synchronize `stream`. `other` is copied on `stream`, so the * caller is responsible for correct synchronization to ensure that `other` is valid when * the copy occurs. This includes destroying `other` in stream order after this function is * called, or synchronizing or waiting on `stream` after this function returns as necessary. * * @throws rmm::bad_alloc If creating the new allocation fails. * @throws rmm::cuda_error if copying from `other` fails. * * @param other The `device_buffer` whose contents will be copied * @param stream The stream to use for the allocation and copy * @param mr The resource to use for allocating the new `device_buffer` */ device_buffer(device_buffer const& other, cuda_stream_view stream, async_resource_ref mr = rmm::mr::get_current_device_resource()) : device_buffer{other.data(), other.size(), stream, mr} { } /** * @brief Constructs a new `device_buffer` by moving the contents of another * `device_buffer` into the newly constructed one. 
* * After the new `device_buffer` is constructed, `other` is modified to be a * valid, empty `device_buffer`, i.e., `data()` returns `nullptr`, and * `size()` and `capacity()` are zero. * * @param other The `device_buffer` whose contents will be moved into the * newly constructed one. */ device_buffer(device_buffer&& other) noexcept : _data{other._data}, _size{other._size}, _capacity{other._capacity}, _stream{other.stream()}, _mr{other._mr}, _device{other._device} { other._data = nullptr; other._size = 0; other._capacity = 0; other.set_stream(cuda_stream_view{}); other._device = cuda_device_id{-1}; } /** * @brief Move assignment operator moves the contents from `other`. * * This `device_buffer`'s current device memory allocation will be deallocated * on `stream()`. * * If a different stream is required, call `set_stream()` on * the instance before assignment. After assignment, this instance's stream is * replaced by the `other.stream()`. * * @param other The `device_buffer` whose contents will be moved. * * @return A reference to this `device_buffer` */ device_buffer& operator=(device_buffer&& other) noexcept { if (&other != this) { cuda_set_device_raii dev{_device}; deallocate_async(); _data = other._data; _size = other._size; _capacity = other._capacity; set_stream(other.stream()); _mr = other._mr; _device = other._device; other._data = nullptr; other._size = 0; other._capacity = 0; other.set_stream(cuda_stream_view{}); other._device = cuda_device_id{-1}; } return *this; } /** * @brief Destroy the device buffer object * * @note If the memory resource supports streams, this destructor deallocates * using the stream most recently passed to any of this device buffer's * methods. */ ~device_buffer() noexcept { cuda_set_device_raii dev{_device}; deallocate_async(); _stream = cuda_stream_view{}; } /** * @brief Increase the capacity of the device memory allocation * * If the requested `new_capacity` is less than or equal to `capacity()`, no * action is taken. * * If `new_capacity` is larger than `capacity()`, a new allocation is made on * `stream` to satisfy `new_capacity`, and the contents of the old allocation are * copied on `stream` to the new allocation. The old allocation is then freed. * The bytes from `[size(), new_capacity)` are uninitialized. * * @throws rmm::bad_alloc If creating the new allocation fails * @throws rmm::cuda_error if the copy from the old to new allocation * fails * * @param new_capacity The requested new capacity, in bytes * @param stream The stream to use for allocation and copy */ void reserve(std::size_t new_capacity, cuda_stream_view stream) { set_stream(stream); if (new_capacity > capacity()) { cuda_set_device_raii dev{_device}; auto tmp = device_buffer{new_capacity, stream, _mr}; auto const old_size = size(); RMM_CUDA_TRY(cudaMemcpyAsync(tmp.data(), data(), size(), cudaMemcpyDefault, stream.value())); *this = std::move(tmp); _size = old_size; } } /** * @brief Resize the device memory allocation * * If the requested `new_size` is less than or equal to `capacity()`, no * action is taken other than updating the value that is returned from * `size()`. Specifically, no memory is allocated nor copied. The value * `capacity()` remains the actual size of the device memory allocation. * * @note `shrink_to_fit()` may be used to force the deallocation of unused * `capacity()`. * * If `new_size` is larger than `capacity()`, a new allocation is made on * `stream` to satisfy `new_size`, and the contents of the old allocation are * copied on `stream` to the new allocation. 
The old allocation is then freed. * The bytes from `[old_size, new_size)` are uninitialized. * * The invariant `size() <= capacity()` holds. * * @throws rmm::bad_alloc If creating the new allocation fails * @throws rmm::cuda_error if the copy from the old to new allocation * fails * * @param new_size The requested new size, in bytes * @param stream The stream to use for allocation and copy */ void resize(std::size_t new_size, cuda_stream_view stream) { set_stream(stream); // If the requested size is smaller than the current capacity, just update // the size without any allocations if (new_size <= capacity()) { _size = new_size; } else { cuda_set_device_raii dev{_device}; auto tmp = device_buffer{new_size, stream, _mr}; RMM_CUDA_TRY(cudaMemcpyAsync(tmp.data(), data(), size(), cudaMemcpyDefault, stream.value())); *this = std::move(tmp); } } /** * @brief Forces the deallocation of unused memory. * * Reallocates and copies on stream `stream` the contents of the device memory * allocation to reduce `capacity()` to `size()`. * * If `size() == capacity()`, no allocations or copies occur. * * @throws rmm::bad_alloc If creating the new allocation fails * @throws rmm::cuda_error If the copy from the old to new allocation fails * * @param stream The stream on which the allocation and copy are performed */ void shrink_to_fit(cuda_stream_view stream) { set_stream(stream); if (size() != capacity()) { cuda_set_device_raii dev{_device}; // Invoke copy ctor on self which only copies `[0, size())` and swap it // with self. The temporary `device_buffer` will hold the old contents // which will then be destroyed auto tmp = device_buffer{*this, stream, _mr}; std::swap(tmp, *this); } } /** * @briefreturn{Const pointer to the device memory allocation} */ [[nodiscard]] void const* data() const noexcept { return _data; } /** * @briefreturn{Pointer to the device memory allocation} */ void* data() noexcept { return _data; } /** * @briefreturn{The number of bytes} */ [[nodiscard]] std::size_t size() const noexcept { return _size; } /** * @briefreturn{The signed number of bytes} */ [[nodiscard]] std::int64_t ssize() const noexcept { assert(size() < static_cast<std::size_t>(std::numeric_limits<int64_t>::max()) && "Size overflows signed integer"); return static_cast<int64_t>(size()); } /** * @briefreturn{Whether or not the buffer currently holds any data} * * If `is_empty() == true`, the `device_buffer` may still hold an allocation * if `capacity() > 0`. */ [[nodiscard]] bool is_empty() const noexcept { return 0 == size(); } /** * @brief Returns actual size in bytes of device memory allocation. * * The invariant `size() <= capacity()` holds. * * @return The actual size in bytes of the device memory allocation */ [[nodiscard]] std::size_t capacity() const noexcept { return _capacity; } /** * @briefreturn{The stream most recently specified for allocation/deallocation} */ [[nodiscard]] cuda_stream_view stream() const noexcept { return _stream; } /** * @brief Sets the stream to be used for deallocation * * If no other rmm::device_buffer method that allocates memory is called * after this call with a different stream argument, then @p stream * will be used for deallocation in the `rmm::device_uvector` destructor. * However, if either of `resize()` or `shrink_to_fit()` is called after this, * the later stream parameter will be stored and used in the destructor. 
* * @param stream The stream to use for deallocation */ void set_stream(cuda_stream_view stream) noexcept { _stream = stream; } /** * @briefreturn{The async_resource_ref used to allocate and deallocate} */ [[nodiscard]] async_resource_ref memory_resource() const noexcept { return _mr; } private: void* _data{nullptr}; ///< Pointer to device memory allocation std::size_t _size{}; ///< Requested size of the device memory allocation std::size_t _capacity{}; ///< The actual size of the device memory allocation cuda_stream_view _stream{}; ///< Stream to use for device memory deallocation async_resource_ref _mr{ rmm::mr::get_current_device_resource()}; ///< The memory resource used to ///< allocate/deallocate device memory cuda_device_id _device{get_current_cuda_device()}; /** * @brief Allocates the specified amount of memory and updates the size/capacity accordingly. * * Allocates on `stream()` using the memory resource passed to the constructor. * * If `bytes == 0`, sets `_data = nullptr`. * * @param bytes The amount of memory to allocate */ void allocate_async(std::size_t bytes) { _size = bytes; _capacity = bytes; _data = (bytes > 0) ? _mr.allocate_async(bytes, stream()) : nullptr; } /** * @brief Deallocate any memory held by this `device_buffer` and clear the * size/capacity/data members. * * If the buffer doesn't hold any memory, i.e., `capacity() == 0`, doesn't * call the resource deallocation. * * Deallocates on `stream()` using the memory resource passed to the constructor. */ void deallocate_async() noexcept { if (capacity() > 0) { _mr.deallocate_async(data(), capacity(), stream()); } _size = 0; _capacity = 0; _data = nullptr; } /** * @brief Copies the specified number of `bytes` from `source` into the * internal device allocation. * * `source` can point to either host or device memory. * * This function assumes `_data` already points to an allocation large enough * to hold `bytes` bytes. * * @param source The pointer to copy from * @param bytes The number of bytes to copy */ void copy_async(void const* source, std::size_t bytes) { if (bytes > 0) { RMM_EXPECTS(nullptr != source, "Invalid copy from nullptr."); RMM_EXPECTS(nullptr != _data, "Invalid copy to nullptr."); RMM_CUDA_TRY(cudaMemcpyAsync(_data, source, bytes, cudaMemcpyDefault, stream().value())); } } }; /** @} */ // end of group } // namespace rmm
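A minimal, illustrative sketch of typical `rmm::device_buffer` usage (not part of the header above); it assumes a valid CUDA device, that the RMM headers are on the include path, and that the default device memory resource is acceptable.

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/device_buffer.hpp>

int main()
{
  rmm::cuda_stream stream;  // owning stream; implicitly convertible to cuda_stream_view

  // Allocate 1 KiB of *uninitialized* device memory on `stream` using the
  // current device memory resource.
  rmm::device_buffer buf{1024, stream.view()};

  // Grow to 4 KiB: a new allocation is made and the old bytes are copied.
  buf.resize(4096, stream.view());

  // Shrink the logical size; capacity() is unchanged until shrink_to_fit().
  buf.resize(100, stream.view());
  buf.shrink_to_fit(stream.view());  // now capacity() == size() == 100

  stream.synchronize();  // wait for the queued allocations/copies to finish
  return buf.size() == 100 ? 0 : 1;
}
```

Note that, per the `set_stream()` semantics documented above, the stream most recently passed to `resize()`/`shrink_to_fit()` is also the stream used for deallocation in the destructor.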
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/thrust_rmm_allocator.h
/* * Copyright (c) 2018-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/device_vector.hpp> #include <rmm/mr/device/thrust_allocator_adaptor.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/execution_policy.h> namespace rmm { using par_t = decltype(thrust::cuda::par(*(new rmm::mr::thrust_allocator<char>()))); using deleter_t = std::function<void(par_t*)>; using exec_policy_t = std::unique_ptr<par_t, deleter_t>; /** * @brief Returns a unique_ptr to a Thrust CUDA execution policy that uses RMM * for temporary memory allocation. * * @param stream The stream that the allocator will use * * @return A Thrust execution policy that will use RMM for temporary memory * allocation. */ [[deprecated("Use new exec_policy in rmm/exec_policy.hpp")]] inline exec_policy_t exec_policy( cudaStream_t stream = nullptr) { // NOLINTNEXTLINE(cppcoreguidelines-owning-memory) auto* alloc = new rmm::mr::thrust_allocator<char>(cuda_stream_view{stream}); auto deleter = [alloc](par_t* pointer) { delete alloc; // NOLINT(cppcoreguidelines-owning-memory) delete pointer; // NOLINT(cppcoreguidelines-owning-memory) }; exec_policy_t policy{new par_t(*alloc), deleter}; return policy; } } // namespace rmm
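For context only, a sketch of the deprecated usage pattern this shim supported (the `legacy_sort` function is hypothetical); new code should use `rmm::exec_policy` from `rmm/exec_policy.hpp` instead.

```cpp
#include <rmm/thrust_rmm_allocator.h>  // deprecated shim

#include <thrust/device_vector.h>
#include <thrust/sort.h>

// Hypothetical example of the pre-exec_policy.hpp pattern.
void legacy_sort(thrust::device_vector<int>& vec, cudaStream_t stream)
{
  // exec_policy() returns a unique_ptr owning a Thrust policy whose temporary
  // allocations go through RMM; `->on(stream)` selects the stream to run on.
  auto policy = rmm::exec_policy(stream);
  thrust::sort(policy->on(stream), vec.begin(), vec.end());
}
```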
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/device_scalar.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/device_uvector.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> #include <type_traits> namespace rmm { /** * @addtogroup data_containers * @{ * @file */ /** * @brief Container for a single object of type `T` in device memory. * * `T` must be trivially copyable. * * @tparam T The object's type */ template <typename T> class device_scalar { public: static_assert(std::is_trivially_copyable<T>::value, "Scalar type must be trivially copyable"); using value_type = typename device_uvector<T>::value_type; ///< T, the type of the scalar element using reference = typename device_uvector<T>::reference; ///< value_type& using const_reference = typename device_uvector<T>::const_reference; ///< const value_type& using pointer = typename device_uvector<T>::pointer; ///< The type of the pointer returned by data() using const_pointer = typename device_uvector<T>::const_pointer; ///< The type of the iterator ///< returned by data() const RMM_EXEC_CHECK_DISABLE ~device_scalar() = default; RMM_EXEC_CHECK_DISABLE device_scalar(device_scalar&&) noexcept = default; ///< Default move constructor /** * @brief Default move assignment operator * * @return device_scalar& A reference to the assigned-to object */ device_scalar& operator=(device_scalar&&) noexcept = default; /** * @brief Copy ctor is deleted as it doesn't allow a stream argument */ device_scalar(device_scalar const&) = delete; /** * @brief Copy assignment is deleted as it doesn't allow a stream argument */ device_scalar& operator=(device_scalar const&) = delete; /** * @brief Default constructor is deleted as it doesn't allow a stream argument */ device_scalar() = delete; /** * @brief Construct a new uninitialized `device_scalar`. * * Does not synchronize the stream. * * @note This device_scalar is only safe to access in kernels and copies on the specified CUDA * stream, or on another stream only if a dependency is enforced (e.g. using * `cudaStreamWaitEvent()`). * * @throws rmm::bad_alloc if allocating the device memory fails. * * @param stream Stream on which to perform asynchronous allocation. * @param mr Optional, resource with which to allocate. */ explicit device_scalar( cuda_stream_view stream, rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource()) : _storage{1, stream, mr} { } /** * @brief Construct a new `device_scalar` with an initial value. * * Does not synchronize the stream. * * @note This device_scalar is only safe to access in kernels and copies on the specified CUDA * stream, or on another stream only if a dependency is enforced (e.g. using * `cudaStreamWaitEvent()`). * * @throws rmm::bad_alloc if allocating the device memory for `initial_value` fails. * @throws rmm::cuda_error if copying `initial_value` to device memory fails. * * @param initial_value The initial value of the object in device memory. 
* @param stream Optional, stream on which to perform allocation and copy. * @param mr Optional, resource with which to allocate. */ explicit device_scalar( value_type const& initial_value, cuda_stream_view stream, rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource()) : _storage{1, stream, mr} { set_value_async(initial_value, stream); } /** * @brief Construct a new `device_scalar` by deep copying the contents of * another `device_scalar`, using the specified stream and memory * resource. * * @throws rmm::bad_alloc If creating the new allocation fails. * @throws rmm::cuda_error if copying from `other` fails. * * @param other The `device_scalar` whose contents will be copied * @param stream The stream to use for the allocation and copy * @param mr The resource to use for allocating the new `device_scalar` */ device_scalar(device_scalar const& other, cuda_stream_view stream, rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource()) : _storage{other._storage, stream, mr} { } /** * @brief Copies the value from device to host, synchronizes, and returns the value. * * Synchronizes `stream` after copying the data from device to host. * * @note If the stream specified to this function is different from the stream specified * to the constructor, then an appropriate dependency must be inserted between the streams * (e.g. using `cudaStreamWaitEvent()` or `cudaStreamSynchronize()`) before calling this function, * otherwise there may be a race condition. * * @throws rmm::cuda_error If the copy fails. * @throws rmm::cuda_error If synchronizing `stream` fails. * * @return T The value of the scalar. * @param stream CUDA stream on which to perform the copy and synchronize. */ [[nodiscard]] value_type value(cuda_stream_view stream) const { return _storage.front_element(stream); } /** * @brief Sets the value of the `device_scalar` to the value of `v`. * * This specialization for fundamental types is optimized to use `cudaMemsetAsync` when * `v` is zero. * * @note If the stream specified to this function is different from the stream specified * to the constructor, then appropriate dependencies must be inserted between the streams * (e.g. using `cudaStreamWaitEvent()` or `cudaStreamSynchronize()`) before and after calling * this function, otherwise there may be a race condition. * * This function does not synchronize `stream` before returning. Therefore, the object * referenced by `v` should not be destroyed or modified until `stream` has been * synchronized. Otherwise, behavior is undefined. * * @note: This function incurs a host to device memcpy or device memset and should be used * carefully. * * Example: * \code{cpp} * rmm::device_scalar<int32_t> s; * * int v{42}; * * // Copies 42 to device storage on `stream`. Does _not_ synchronize * vec.set_value_async(v, stream); * ... * cudaStreamSynchronize(stream); * // Synchronization is required before `v` can be modified * v = 13; * \endcode * * @throws rmm::cuda_error if copying @p value to device memory fails. * * @param value The host value which will be copied to device * @param stream CUDA stream on which to perform the copy */ void set_value_async(value_type const& value, cuda_stream_view stream) { _storage.set_element_async(0, value, stream); } // Disallow passing literals to set_value to avoid race conditions where the memory holding the // literal can be freed before the async memcpy / memset executes. 
void set_value_async(value_type&&, cuda_stream_view) = delete; /** * @brief Sets the value of the `device_scalar` to zero on the specified stream. * * @note If the stream specified to this function is different from the stream specified * to the constructor, then appropriate dependencies must be inserted between the streams * (e.g. using `cudaStreamWaitEvent()` or `cudaStreamSynchronize()`) before and after calling * this function, otherwise there may be a race condition. * * This function does not synchronize `stream` before returning. * * @note: This function incurs a device memset and should be used carefully. * * @param stream CUDA stream on which to perform the copy */ void set_value_to_zero_async(cuda_stream_view stream) { _storage.set_element_to_zero_async(value_type{0}, stream); } /** * @brief Returns pointer to object in device memory. * * @note If the returned device pointer is used on a CUDA stream different from the stream * specified to the constructor, then appropriate dependencies must be inserted between the * streams (e.g. using `cudaStreamWaitEvent()` or `cudaStreamSynchronize()`), otherwise there may * be a race condition. * * @return Pointer to underlying device memory */ [[nodiscard]] pointer data() noexcept { return static_cast<pointer>(_storage.data()); } /** * @brief Returns const pointer to object in device memory. * * @note If the returned device pointer is used on a CUDA stream different from the stream * specified to the constructor, then appropriate dependencies must be inserted between the * streams (e.g. using `cudaStreamWaitEvent()` or `cudaStreamSynchronize()`), otherwise there may * be a race condition. * * @return Const pointer to underlying device memory */ [[nodiscard]] const_pointer data() const noexcept { return static_cast<const_pointer>(_storage.data()); } /** * @briefreturn{Stream associated with the device memory allocation} */ [[nodiscard]] cuda_stream_view stream() const noexcept { return _storage.stream(); } /** * @brief Sets the stream to be used for deallocation * * @param stream Stream to be used for deallocation */ void set_stream(cuda_stream_view stream) noexcept { _storage.set_stream(stream); } private: rmm::device_uvector<T> _storage; }; /** @} */ // end of group } // namespace rmm
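A minimal usage sketch for `rmm::device_scalar` (illustrative, not part of the header); it assumes a valid CUDA device and the default memory resource.

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/device_scalar.hpp>

int main()
{
  rmm::cuda_stream stream;

  // Allocate a single int on the device and asynchronously copy 42 into it.
  rmm::device_scalar<int> scalar{42, stream.view()};

  // Reset the device value to zero (asynchronous device memset).
  scalar.set_value_to_zero_async(stream.view());

  // Literals are rejected by set_value_async(); copy from a named host object
  // that stays alive until `stream` is synchronized.
  int host_value{13};
  scalar.set_value_async(host_value, stream.view());

  // value() copies device-to-host and synchronizes `stream` before returning.
  return scalar.value(stream.view()) == 13 ? 0 : 1;
}
```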
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/cuda_stream_pool.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <atomic> #include <cstddef> #include <vector> namespace rmm { /** * @addtogroup cuda_streams * @{ * @file */ /** * @brief A pool of CUDA streams. * * Provides efficient access to collection of CUDA stream objects. * * Successive calls may return a `cuda_stream_view` of identical streams. For example, a possible * implementation is to maintain a circular buffer of `cuda_stream` objects. */ class cuda_stream_pool { public: static constexpr std::size_t default_size{16}; ///< Default stream pool size /** * @brief Construct a new cuda stream pool object of the given non-zero size * * @throws logic_error if `pool_size` is zero * @param pool_size The number of streams in the pool */ explicit cuda_stream_pool(std::size_t pool_size = default_size) : streams_(pool_size) { RMM_EXPECTS(pool_size > 0, "Stream pool size must be greater than zero"); } ~cuda_stream_pool() = default; cuda_stream_pool(cuda_stream_pool&&) = delete; cuda_stream_pool(cuda_stream_pool const&) = delete; cuda_stream_pool& operator=(cuda_stream_pool&&) = delete; cuda_stream_pool& operator=(cuda_stream_pool const&) = delete; /** * @brief Get a `cuda_stream_view` of a stream in the pool. * * This function is thread safe with respect to other calls to the same function. * * @return rmm::cuda_stream_view */ rmm::cuda_stream_view get_stream() const noexcept { return streams_[(next_stream++) % streams_.size()].view(); } /** * @brief Get a `cuda_stream_view` of the stream associated with `stream_id`. * Equivalent values of `stream_id` return a stream_view to the same underlying stream. * * This function is thread safe with respect to other calls to the same function. * * @param stream_id Unique identifier for the desired stream * * @return rmm::cuda_stream_view */ rmm::cuda_stream_view get_stream(std::size_t stream_id) const { return streams_[stream_id % streams_.size()].view(); } /** * @brief Get the number of streams in the pool. * * This function is thread safe with respect to other calls to the same function. * * @return the number of streams in the pool */ std::size_t get_pool_size() const noexcept { return streams_.size(); } private: std::vector<rmm::cuda_stream> streams_; mutable std::atomic_size_t next_stream{}; }; /** @} */ // end of group } // namespace rmm
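An illustrative sketch of how the stream pool is typically used (not part of the header above):

```cpp
#include <rmm/cuda_stream_pool.hpp>
#include <rmm/cuda_stream_view.hpp>

#include <cstddef>

int main()
{
  // A fixed-size pool; get_stream() hands out streams round-robin, so
  // successive calls may return views of the same underlying stream.
  rmm::cuda_stream_pool pool{8};

  for (std::size_t i = 0; i < 4 * pool.get_pool_size(); ++i) {
    rmm::cuda_stream_view stream = pool.get_stream();
    (void)stream;  // ... enqueue independent work on `stream` ...
  }

  // Equal identifiers always map to the same underlying stream.
  rmm::cuda_stream_view worker = pool.get_stream(/*stream_id=*/3);
  worker.synchronize();
  return 0;
}
```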
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/device_uvector.hpp
/* * Copyright (c) 2020-2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <rmm/detail/exec_check_disable.hpp> #include <rmm/device_buffer.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> #include <cstddef> #include <vector> #include <cuda/memory_resource> namespace rmm { /** * @addtogroup data_containers * @{ * @file */ /** * @brief An *uninitialized* vector of elements in device memory. * * Similar to a `thrust::device_vector`, `device_uvector` is a random access container of elements * stored contiguously in device memory. However, unlike `thrust::device_vector`, `device_uvector` * does *not* default initialize the vector elements. * * If initialization is desired, this must be done explicitly by the caller, e.g., with * `thrust::uninitialized_fill`. * * Example: * @code{.cpp} * rmm::mr::device_memory_resource * mr = new my_custom_resource(); * rmm::cuda_stream_view s{}; * * // Allocates *uninitialized* device memory on stream `s` sufficient for 100 ints using the * // supplied resource `mr` * rmm::device_uvector<int> uv(100, s, mr); * * // Initializes all elements to 0 on stream `s` * thrust::uninitialized_fill(thrust::cuda::par.on(s), uv.begin(), uv.end(), 0); * @endcode * * Avoiding default initialization improves performance by eliminating the kernel launch required to * default initialize the elements. This initialization is often unnecessary, e.g., when the vector * is created to hold some output from some operation. * * However, this restricts the element type `T` to only trivially copyable types. In short, * trivially copyable types can be safely copied with `memcpy`. For more information, see * https://en.cppreference.com/w/cpp/types/is_trivially_copyable. * * Another key difference over `thrust::device_vector` is that all operations that invoke * allocation, kernels, or memcpys take a CUDA stream parameter to indicate on which stream the * operation will be performed. 
* * @tparam T Trivially copyable element type */ template <typename T> class device_uvector { using async_resource_ref = cuda::mr::async_resource_ref<cuda::mr::device_accessible>; static_assert(std::is_trivially_copyable<T>::value, "device_uvector only supports types that are trivially copyable."); public: using value_type = T; ///< T; stored value type using size_type = std::size_t; ///< The type used for the size of the vector using reference = value_type&; ///< value_type&; reference type returned by operator[](size_type) using const_reference = value_type const&; ///< value_type const&; constant reference type ///< returned by operator[](size_type) const using pointer = value_type*; ///< The type of the pointer returned by data() using const_pointer = value_type const*; ///< The type of the pointer returned by data() const using iterator = pointer; ///< The type of the iterator returned by begin() using const_iterator = const_pointer; ///< The type of the const iterator returned by cbegin() RMM_EXEC_CHECK_DISABLE ~device_uvector() = default; RMM_EXEC_CHECK_DISABLE device_uvector(device_uvector&&) noexcept = default; ///< @default_move_constructor device_uvector& operator=(device_uvector&&) noexcept = default; ///< @default_move_assignment{device_uvector} /** * @brief Copy ctor is deleted as it doesn't allow a stream argument */ device_uvector(device_uvector const&) = delete; /** * @brief Copy assignment is deleted as it doesn't allow a stream argument */ device_uvector& operator=(device_uvector const&) = delete; /** * @brief Default constructor is deleted as it doesn't allow a stream argument */ device_uvector() = delete; /** * @brief Construct a new `device_uvector` with sufficient uninitialized storage for `size` * elements. * * Elements are uninitialized. Reading an element before it is initialized results in undefined * behavior. * * @param size The number of elements to allocate storage for * @param stream The stream on which to perform the allocation * @param mr The resource used to allocate the device storage */ explicit device_uvector(std::size_t size, cuda_stream_view stream, async_resource_ref mr = rmm::mr::get_current_device_resource()) : _storage{elements_to_bytes(size), stream, mr} { } /** * @brief Construct a new device_uvector by deep copying the contents of another `device_uvector`. * * Elements are copied as if by `memcpy`, i.e., `T`'s copy constructor is not invoked. * * @param other The vector to copy from * @param stream The stream on which to perform the copy * @param mr The resource used to allocate device memory for the new vector */ explicit device_uvector(device_uvector const& other, cuda_stream_view stream, async_resource_ref mr = rmm::mr::get_current_device_resource()) : _storage{other._storage, stream, mr} { } /** * @brief Returns pointer to the specified element * * Behavior is undefined if `element_index >= size()`. * * @param element_index Index of the specified element. * @return T* Pointer to the desired element */ [[nodiscard]] pointer element_ptr(std::size_t element_index) noexcept { assert(element_index < size()); return data() + element_index; } /** * @brief Returns pointer to the specified element * * Behavior is undefined if `element_index >= size()`. * * @param element_index Index of the specified element. 
* @return T* Pointer to the desired element */ [[nodiscard]] const_pointer element_ptr(std::size_t element_index) const noexcept { assert(element_index < size()); return data() + element_index; } /** * @brief Performs an asynchronous copy of `v` to the specified element in device memory. * * This specialization for fundamental types is optimized to use `cudaMemsetAsync` when * `host_value` is zero. * * This function does not synchronize stream `s` before returning. Therefore, the object * referenced by `v` should not be destroyed or modified until `stream` has been synchronized. * Otherwise, behavior is undefined. * * @note This function incurs a host to device memcpy and should be used sparingly. * * @note Calling this function with a literal or other r-value reference for `v` is disallowed * to prevent the implementation from asynchronously copying from a literal or other implicit * temporary after it is deleted or goes out of scope. * * Example: * \code{cpp} * rmm::device_uvector<int32_t> vec(100, stream); * * int v{42}; * * // Copies 42 to element 0 on `stream`. Does _not_ synchronize * vec.set_element_async(0, v, stream); * ... * cudaStreamSynchronize(stream); * // Synchronization is required before `v` can be modified * v = 13; * \endcode * * @throws rmm::out_of_range exception if `element_index >= size()` * * @param element_index Index of the target element * @param value The value to copy to the specified element * @param stream The stream on which to perform the copy */ void set_element_async(std::size_t element_index, value_type const& value, cuda_stream_view stream) { RMM_EXPECTS( element_index < size(), "Attempt to access out of bounds element.", rmm::out_of_range); if constexpr (std::is_same<value_type, bool>::value) { RMM_CUDA_TRY( cudaMemsetAsync(element_ptr(element_index), value, sizeof(value), stream.value())); return; } if constexpr (std::is_fundamental<value_type>::value) { if (value == value_type{0}) { set_element_to_zero_async(element_index, stream); return; } } RMM_CUDA_TRY(cudaMemcpyAsync( element_ptr(element_index), &value, sizeof(value), cudaMemcpyDefault, stream.value())); } // We delete the r-value reference overload to prevent asynchronously copying from a literal or // implicit temporary value after it is deleted or goes out of scope. void set_element_async(std::size_t, value_type const&&, cuda_stream_view) = delete; /** * @brief Asynchronously sets the specified element to zero in device memory. * * This function does not synchronize stream `s` before returning * * @note This function incurs a device memset and should be used sparingly. * * Example: * \code{cpp} * rmm::device_uvector<int32_t> vec(100, stream); * * int v{42}; * * // Sets element at index 42 to 0 on `stream`. Does _not_ synchronize * vec.set_element_to_zero_async(42, stream); * \endcode * * @throws rmm::out_of_range exception if `element_index >= size()` * * @param element_index Index of the target element * @param stream The stream on which to perform the copy */ void set_element_to_zero_async(std::size_t element_index, cuda_stream_view stream) { RMM_EXPECTS( element_index < size(), "Attempt to access out of bounds element.", rmm::out_of_range); RMM_CUDA_TRY( cudaMemsetAsync(element_ptr(element_index), 0, sizeof(value_type), stream.value())); } /** * @brief Performs a synchronous copy of `v` to the specified element in device memory. * * Because this function synchronizes the stream `s`, it is safe to destroy or modify the object * referenced by `v` after this function has returned. 
* * @note This function incurs a host to device memcpy and should be used sparingly. * @note This function synchronizes `stream`. * * Example: * \code{cpp} * rmm::device_uvector<int32_t> vec(100, stream); * * int v{42}; * * // Copies 42 to element 0 on `stream` and synchronizes the stream * vec.set_element(0, v, stream); * * // It is safe to destroy or modify `v` * v = 13; * \endcode * * * @throws rmm::out_of_range exception if `element_index >= size()` * * @param element_index Index of the target element * @param value The value to copy to the specified element * @param stream The stream on which to perform the copy */ void set_element(std::size_t element_index, T const& value, cuda_stream_view stream) { set_element_async(element_index, value, stream); stream.synchronize_no_throw(); } /** * @brief Returns the specified element from device memory * * @note This function incurs a device to host memcpy and should be used sparingly. * @note This function synchronizes `stream`. * * @throws rmm::out_of_range exception if `element_index >= size()` * * @param element_index Index of the desired element * @param stream The stream on which to perform the copy * @return The value of the specified element */ [[nodiscard]] value_type element(std::size_t element_index, cuda_stream_view stream) const { RMM_EXPECTS( element_index < size(), "Attempt to access out of bounds element.", rmm::out_of_range); value_type value; RMM_CUDA_TRY(cudaMemcpyAsync( &value, element_ptr(element_index), sizeof(value), cudaMemcpyDefault, stream.value())); stream.synchronize(); return value; } /** * @brief Returns the first element. * * @note This function incurs a device-to-host memcpy and should be used sparingly. * @note This function synchronizes `stream`. * * @throws rmm::out_of_range exception if the vector is empty. * * @param stream The stream on which to perform the copy * @return The value of the first element */ [[nodiscard]] value_type front_element(cuda_stream_view stream) const { return element(0, stream); } /** * @brief Returns the last element. * * @note This function incurs a device-to-host memcpy and should be used sparingly. * @note This function synchronizes `stream`. * * @throws rmm::out_of_range exception if the vector is empty. * * @param stream The stream on which to perform the copy * @return The value of the last element */ [[nodiscard]] value_type back_element(cuda_stream_view stream) const { return element(size() - 1, stream); } /** * @brief Increases the capacity of the vector to `new_capacity` elements. * * If `new_capacity <= capacity()`, no action is taken. * * If `new_capacity > capacity()`, a new allocation of size `new_capacity` is created, and the * first `size()` elements from the current allocation are copied there as if by memcpy. Finally, * the old allocation is freed and replaced by the new allocation. * * @param new_capacity The desired capacity (number of elements) * @param stream The stream on which to perform the allocation/copy (if any) */ void reserve(std::size_t new_capacity, cuda_stream_view stream) { _storage.reserve(elements_to_bytes(new_capacity), stream); } /** * @brief Resizes the vector to contain `new_size` elements. * * If `new_size > size()`, the additional elements are uninitialized. * * If `new_size < capacity()`, no action is taken other than updating the value of `size()`. No * memory is allocated nor copied. `shrink_to_fit()` may be used to force deallocation of unused * memory. 
* * If `new_size > capacity()`, elements are copied as if by memcpy to a new allocation. * * The invariant `size() <= capacity()` holds. * * @param new_size The desired number of elements * @param stream The stream on which to perform the allocation/copy (if any) */ void resize(std::size_t new_size, cuda_stream_view stream) { _storage.resize(elements_to_bytes(new_size), stream); } /** * @brief Forces deallocation of unused device memory. * * If `capacity() > size()`, reallocates and copies vector contents to eliminate unused memory. * * @param stream Stream on which to perform allocation and copy */ void shrink_to_fit(cuda_stream_view stream) { _storage.shrink_to_fit(stream); } /** * @brief Release ownership of device memory storage. * * @return The `device_buffer` used to store the vector elements */ device_buffer release() noexcept { return std::move(_storage); } /** * @brief Returns the number of elements that can be held in currently allocated storage. * * @return std::size_t The number of elements that can be stored without requiring a new * allocation. */ [[nodiscard]] std::size_t capacity() const noexcept { return bytes_to_elements(_storage.capacity()); } /** * @brief Returns pointer to underlying device storage. * * @note If `size() == 0` it is undefined behavior to deference the returned pointer. Furthermore, * the returned pointer may or may not be equal to `nullptr`. * * @return Raw pointer to element storage in device memory. */ [[nodiscard]] pointer data() noexcept { return static_cast<pointer>(_storage.data()); } /** * @brief Returns const pointer to underlying device storage. * * @note If `size() == 0` it is undefined behavior to deference the returned pointer. Furthermore, * the returned pointer may or may not be equal to `nullptr`. * * @return const_pointer Raw const pointer to element storage in device memory. */ [[nodiscard]] const_pointer data() const noexcept { return static_cast<const_pointer>(_storage.data()); } /** * @brief Returns an iterator to the first element. * * If the vector is empty, then `begin() == end()`. * * @return Iterator to the first element. */ [[nodiscard]] iterator begin() noexcept { return data(); } /** * @brief Returns a const_iterator to the first element. * * If the vector is empty, then `cbegin() == cend()`. * * @return Immutable iterator to the first element. */ [[nodiscard]] const_iterator cbegin() const noexcept { return data(); } /** * @brief Returns a const_iterator to the first element. * * If the vector is empty, then `begin() == end()`. * * @return Immutable iterator to the first element. */ [[nodiscard]] const_iterator begin() const noexcept { return cbegin(); } /** * @brief Returns an iterator to the element following the last element of the vector. * * The element referenced by `end()` is a placeholder and dereferencing it results in undefined * behavior. * * @return Iterator to one past the last element. */ [[nodiscard]] iterator end() noexcept { return data() + size(); } /** * @brief Returns a const_iterator to the element following the last element of the vector. * * The element referenced by `end()` is a placeholder and dereferencing it results in undefined * behavior. * * @return Immutable iterator to one past the last element. */ [[nodiscard]] const_iterator cend() const noexcept { return data() + size(); } /** * @brief Returns an iterator to the element following the last element of the vector. * * The element referenced by `end()` is a placeholder and dereferencing it results in undefined * behavior. 
* * @return Immutable iterator to one past the last element. */ [[nodiscard]] const_iterator end() const noexcept { return cend(); } /** * @briefreturn{The number of elements in the vector} */ [[nodiscard]] std::size_t size() const noexcept { return bytes_to_elements(_storage.size()); } /** * @briefreturn{The signed number of elements in the vector} */ [[nodiscard]] std::int64_t ssize() const noexcept { assert(size() < static_cast<std::size_t>(std::numeric_limits<int64_t>::max()) && "Size overflows signed integer"); return static_cast<int64_t>(size()); } /** * @briefreturn{true if the vector contains no elements, i.e. `size() == 0`} */ [[nodiscard]] bool is_empty() const noexcept { return size() == 0; } /** * @briefreturn{The async_resource_ref used to allocate and deallocate the device storage} */ [[nodiscard]] async_resource_ref memory_resource() const noexcept { return _storage.memory_resource(); } /** * @briefreturn{Stream most recently specified for allocation/deallocation} */ [[nodiscard]] cuda_stream_view stream() const noexcept { return _storage.stream(); } /** * @brief Sets the stream to be used for deallocation * * If no other rmm::device_uvector method that allocates memory is called * after this call with a different stream argument, then @p stream * will be used for deallocation in the `rmm::device_uvector destructor. * However, if either of `resize()` or `shrink_to_fit()` is called after this, * the later stream parameter will be stored and used in the destructor. * * @param stream The stream to use for deallocation */ void set_stream(cuda_stream_view stream) noexcept { _storage.set_stream(stream); } private: device_buffer _storage{}; ///< Device memory storage for vector elements [[nodiscard]] std::size_t constexpr elements_to_bytes(std::size_t num_elements) const noexcept { return num_elements * sizeof(value_type); } [[nodiscard]] std::size_t constexpr bytes_to_elements(std::size_t num_bytes) const noexcept { return num_bytes / sizeof(value_type); } }; /** @} */ // end of group } // namespace rmm
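A minimal, illustrative sketch of `rmm::device_uvector` usage (not part of the header above); it assumes a valid CUDA device and the default memory resource.

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/device_uvector.hpp>

int main()
{
  rmm::cuda_stream stream;

  // 100 *uninitialized* ints in device memory; no fill kernel is launched here.
  rmm::device_uvector<int> vec(100, stream.view());

  int host_value{42};
  vec.set_element_async(0, host_value, stream.view());  // async H2D copy
  vec.set_element(1, host_value, stream.view());        // copies, then synchronizes
  vec.set_element_to_zero_async(2, stream.view());      // device memset

  // element()/front_element() copy back to the host and synchronize `stream`.
  int const first = vec.front_element(stream.view());

  vec.resize(50, stream.view());     // only size() changes when shrinking
  vec.shrink_to_fit(stream.view());  // release the unused capacity
  return first == 42 ? 0 : 1;
}
```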
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/cuda_stream.hpp
/* * Copyright (c) 2020, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <functional> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <rmm/detail/logging_assert.hpp> #include <cuda_runtime_api.h> #include <memory> namespace rmm { /** * @addtogroup cuda_streams * @{ * @file */ /** * @brief Owning wrapper for a CUDA stream. * * Provides RAII lifetime semantics for a CUDA stream. */ class cuda_stream { public: /** * @brief Move constructor (default) * * A moved-from cuda_stream is invalid and it is Undefined Behavior to call methods that access * the owned stream. */ cuda_stream(cuda_stream&&) = default; /** * @brief Move copy assignment operator (default) * * A moved-from cuda_stream is invalid and it is Undefined Behavior to call methods that access * the owned stream. * * @return A reference to this cuda_stream */ cuda_stream& operator=(cuda_stream&&) = default; ~cuda_stream() = default; cuda_stream(cuda_stream const&) = delete; // Copying disallowed: one stream one owner cuda_stream& operator=(cuda_stream&) = delete; /** * @brief Construct a new cuda stream object * * @throw rmm::cuda_error if stream creation fails */ cuda_stream() : stream_{[]() { auto* stream = new cudaStream_t; // NOLINT(cppcoreguidelines-owning-memory) RMM_CUDA_TRY(cudaStreamCreate(stream)); return stream; }(), [](cudaStream_t* stream) { RMM_ASSERT_CUDA_SUCCESS(cudaStreamDestroy(*stream)); delete stream; // NOLINT(cppcoreguidelines-owning-memory) }} { } /** * @brief Returns true if the owned stream is non-null * * @return true If the owned stream has not been explicitly moved and is therefore non-null. * @return false If the owned stream has been explicitly moved and is therefore null. */ [[nodiscard]] bool is_valid() const { return stream_ != nullptr; } /** * @brief Get the value of the wrapped CUDA stream. * * @return cudaStream_t The wrapped CUDA stream. */ [[nodiscard]] cudaStream_t value() const { RMM_LOGGING_ASSERT(is_valid()); return *stream_; } /** * @brief Explicit conversion to cudaStream_t. */ explicit operator cudaStream_t() const noexcept { return value(); } /** * @brief Creates an immutable, non-owning view of the wrapped CUDA stream. * * @return rmm::cuda_stream_view The view of the CUDA stream */ [[nodiscard]] cuda_stream_view view() const { return cuda_stream_view{value()}; } /** * @brief Implicit conversion to cuda_stream_view * * @return A view of the owned stream */ operator cuda_stream_view() const { return view(); } /** * @brief Synchronize the owned CUDA stream. * * Calls `cudaStreamSynchronize()`. * * @throw rmm::cuda_error if stream synchronization fails */ void synchronize() const { RMM_CUDA_TRY(cudaStreamSynchronize(value())); } /** * @brief Synchronize the owned CUDA stream. Does not throw if there is an error. * * Calls `cudaStreamSynchronize()` and asserts if there is an error. 
*/ void synchronize_no_throw() const noexcept { RMM_ASSERT_CUDA_SUCCESS(cudaStreamSynchronize(value())); } private: std::unique_ptr<cudaStream_t, std::function<void(cudaStream_t*)>> stream_; }; /** @} */ // end of group } // namespace rmm
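A short, illustrative sketch of the RAII stream wrapper (not part of the header above):

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/device_buffer.hpp>

int main()
{
  rmm::cuda_stream stream;  // creates and owns a new CUDA stream

  // Wherever a cuda_stream_view is expected, the owning stream converts
  // implicitly, so it can be passed to RMM containers directly.
  rmm::device_buffer buf{256, stream};

  stream.synchronize();  // throws rmm::cuda_error on failure
  return (stream.is_valid() && buf.size() == 256) ? 0 : 1;
  // The stream is destroyed here, after all users of its view are done.
}
```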
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/logger.hpp
/* * Copyright (c) 2020-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <fmt/format.h> #include <fmt/ostream.h> #include <spdlog/sinks/basic_file_sink.h> #include <spdlog/spdlog.h> #include <array> #include <iostream> #include <string> namespace rmm { namespace detail { /** * @brief Returns the default log filename for the RMM global logger. * * If the environment variable `RMM_DEBUG_LOG_FILE` is defined, its value is used as the path and * name of the log file. Otherwise, the file `rmm_log.txt` in the current working directory is used. * * @return std::string The default log file name. */ inline std::string default_log_filename() { auto* filename = std::getenv("RMM_DEBUG_LOG_FILE"); return (filename == nullptr) ? std::string{"rmm_log.txt"} : std::string{filename}; } /** * @brief Simple wrapper around a spdlog::logger that performs RMM-specific initialization */ struct logger_wrapper { spdlog::logger logger_; ///< The underlying logger logger_wrapper() : logger_{"RMM", std::make_shared<spdlog::sinks::basic_file_sink_mt>( default_log_filename(), true // truncate file )} { logger_.set_pattern("[%6t][%H:%M:%S:%f][%-6l] %v"); logger_.flush_on(spdlog::level::warn); #if SPDLOG_ACTIVE_LEVEL <= SPDLOG_LEVEL_INFO #ifdef CUDA_API_PER_THREAD_DEFAULT_STREAM logger_.info("----- RMM LOG BEGIN [PTDS ENABLED] -----"); #else logger_.info("----- RMM LOG BEGIN [PTDS DISABLED] -----"); #endif logger_.flush(); #endif } }; /** * @brief Represent a size in number of bytes. */ struct bytes { std::size_t value; ///< The size in bytes /** * @brief Construct a new bytes object * * @param os The output stream * @param value The size in bytes */ friend std::ostream& operator<<(std::ostream& os, bytes const& value) { static std::array units{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"}; int index = 0; auto size = static_cast<double>(value.value); while (size > 1024) { size /= 1024; index++; } return os << size << ' ' << units.at(index); } }; } // namespace detail /** * @brief Returns the global RMM logger * * @ingroup logging * * This is a spdlog logger. The easiest way to log messages is to use the `RMM_LOG_*` macros. * * @return spdlog::logger& The logger. */ inline spdlog::logger& logger() { static detail::logger_wrapper wrapped{}; return wrapped.logger_; } //! @cond Doxygen_Suppress // // The default is INFO, but it should be used sparingly, so that by default a log file is only // output if there is important information, warnings, errors, and critical failures // Log messages that require computation should only be used at level TRACE and DEBUG #define RMM_LOG_TRACE(...) SPDLOG_LOGGER_TRACE(&rmm::logger(), __VA_ARGS__) #define RMM_LOG_DEBUG(...) SPDLOG_LOGGER_DEBUG(&rmm::logger(), __VA_ARGS__) #define RMM_LOG_INFO(...) SPDLOG_LOGGER_INFO(&rmm::logger(), __VA_ARGS__) #define RMM_LOG_WARN(...) SPDLOG_LOGGER_WARN(&rmm::logger(), __VA_ARGS__) #define RMM_LOG_ERROR(...) SPDLOG_LOGGER_ERROR(&rmm::logger(), __VA_ARGS__) #define RMM_LOG_CRITICAL(...) 
SPDLOG_LOGGER_CRITICAL(&rmm::logger(), __VA_ARGS__) //! @endcond } // namespace rmm // Doxygen doesn't like this because we're overloading something from fmt //! @cond Doxygen_Suppress template <> struct fmt::formatter<rmm::detail::bytes> : fmt::ostream_formatter {}; //! @endcond
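An illustrative sketch of the logging macros (not part of the header above). Note that `rmm::detail::bytes` is an internal helper, and messages below the compile-time `SPDLOG_ACTIVE_LEVEL` are compiled out entirely.

```cpp
#include <rmm/logger.hpp>

int main()
{
  // Messages go to the file named by RMM_DEBUG_LOG_FILE, or rmm_log.txt.
  RMM_LOG_INFO("starting allocation benchmark");
  RMM_LOG_WARN("pool nearly exhausted: {} remaining", rmm::detail::bytes{1024});

  // The returned spdlog::logger can be adjusted directly, e.g. its runtime level.
  rmm::logger().set_level(spdlog::level::debug);
  RMM_LOG_DEBUG("debug logging enabled");
  return 0;
}
```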
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/cuda_device.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <cuda_runtime_api.h> namespace rmm { /** * @addtogroup cuda_device_management * @{ * @file */ /** * @brief Strong type for a CUDA device identifier. * */ struct cuda_device_id { using value_type = int; ///< Integer type used for device identifier /** * @brief Construct a `cuda_device_id` from the specified integer value. * * @param dev_id The device's integer identifier */ explicit constexpr cuda_device_id(value_type dev_id) noexcept : id_{dev_id} {} /// @briefreturn{The wrapped integer value} [[nodiscard]] constexpr value_type value() const noexcept { return id_; } // TODO re-add doxygen comment specifier /** for these hidden friend operators once this Breathe // bug is fixed: https://github.com/breathe-doc/breathe/issues/916 //! @cond Doxygen_Suppress /** * @brief Compare two `cuda_device_id`s for equality. * * @param lhs The first `cuda_device_id` to compare. * @param rhs The second `cuda_device_id` to compare. * @return true if the two `cuda_device_id`s wrap the same integer value, false otherwise. */ [[nodiscard]] constexpr friend bool operator==(cuda_device_id const& lhs, cuda_device_id const& rhs) noexcept { return lhs.value() == rhs.value(); } /** * @brief Compare two `cuda_device_id`s for inequality. * * @param lhs The first `cuda_device_id` to compare. * @param rhs The second `cuda_device_id` to compare. * @return true if the two `cuda_device_id`s wrap different integer values, false otherwise. */ [[nodiscard]] constexpr friend bool operator!=(cuda_device_id const& lhs, cuda_device_id const& rhs) noexcept { return lhs.value() != rhs.value(); } //! @endcond private: value_type id_; }; /** * @brief Returns a `cuda_device_id` for the current device * * The current device is the device on which the calling thread executes device code. * * @return `cuda_device_id` for the current device */ inline cuda_device_id get_current_cuda_device() { cuda_device_id::value_type dev_id{-1}; RMM_ASSERT_CUDA_SUCCESS(cudaGetDevice(&dev_id)); return cuda_device_id{dev_id}; } /** * @brief Returns the number of CUDA devices in the system * * @return Number of CUDA devices in the system */ inline int get_num_cuda_devices() { cuda_device_id::value_type num_dev{-1}; RMM_ASSERT_CUDA_SUCCESS(cudaGetDeviceCount(&num_dev)); return num_dev; } /** * @brief RAII class that sets the current CUDA device to the specified device on construction * and restores the previous device on destruction. 
*/ struct cuda_set_device_raii { /** * @brief Construct a new cuda_set_device_raii object and sets the current CUDA device to `dev_id` * * @param dev_id The device to set as the current CUDA device */ explicit cuda_set_device_raii(cuda_device_id dev_id) : old_device_{get_current_cuda_device()}, needs_reset_{dev_id.value() >= 0 && old_device_ != dev_id} { if (needs_reset_) { RMM_ASSERT_CUDA_SUCCESS(cudaSetDevice(dev_id.value())); } } /** * @brief Reactivates the previous CUDA device */ ~cuda_set_device_raii() noexcept { if (needs_reset_) { RMM_ASSERT_CUDA_SUCCESS(cudaSetDevice(old_device_.value())); } } cuda_set_device_raii(cuda_set_device_raii const&) = delete; cuda_set_device_raii& operator=(cuda_set_device_raii const&) = delete; cuda_set_device_raii(cuda_set_device_raii&&) = delete; cuda_set_device_raii& operator=(cuda_set_device_raii&&) = delete; private: cuda_device_id old_device_; bool needs_reset_; }; /** @} */ // end of group } // namespace rmm
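An illustrative sketch of the device-management helpers (not part of the header above); it assumes at least one CUDA device is present.

```cpp
#include <rmm/cuda_device.hpp>

int main()
{
  rmm::cuda_device_id const original = rmm::get_current_cuda_device();

  if (rmm::get_num_cuda_devices() > 1) {
    // Temporarily make device 1 current; the previous device is restored
    // when `raii` goes out of scope.
    rmm::cuda_set_device_raii raii{rmm::cuda_device_id{1}};
    // ... allocate memory or launch work on device 1 here ...
  }

  return rmm::get_current_cuda_device() == original ? 0 : 1;
}
```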
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/cuda_stream_view.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <cuda_runtime_api.h> #include <cuda/stream_ref> #include <atomic> #include <cstddef> #include <cstdint> namespace rmm { /** * @addtogroup cuda_streams * @{ * @file */ /** * @brief Strongly-typed non-owning wrapper for CUDA streams with default constructor. * * This wrapper is simply a "view": it does not own the lifetime of the stream it wraps. */ class cuda_stream_view { public: constexpr cuda_stream_view() = default; ~cuda_stream_view() = default; constexpr cuda_stream_view(cuda_stream_view const&) = default; ///< @default_copy_constructor constexpr cuda_stream_view(cuda_stream_view&&) = default; ///< @default_move_constructor constexpr cuda_stream_view& operator=(cuda_stream_view const&) = default; ///< @default_copy_assignment{cuda_stream_view} constexpr cuda_stream_view& operator=(cuda_stream_view&&) = default; ///< @default_move_assignment{cuda_stream_view} // Disable construction from literal 0 constexpr cuda_stream_view(int) = delete; //< Prevent cast from 0 constexpr cuda_stream_view(std::nullptr_t) = delete; //< Prevent cast from nullptr /** * @brief Constructor from a cudaStream_t * * @param stream The underlying stream for this view */ constexpr cuda_stream_view(cudaStream_t stream) noexcept : stream_{stream} {} /** * @brief Implicit conversion from stream_ref. * * @param stream The underlying stream for this view */ constexpr cuda_stream_view(cuda::stream_ref stream) noexcept : stream_{stream.get()} {} /** * @brief Get the wrapped stream. * * @return cudaStream_t The underlying stream referenced by this cuda_stream_view */ [[nodiscard]] constexpr cudaStream_t value() const noexcept { return stream_; } /** * @brief Implicit conversion to cudaStream_t. * * @return cudaStream_t The underlying stream referenced by this cuda_stream_view */ constexpr operator cudaStream_t() const noexcept { return value(); } /** * @brief Implicit conversion to stream_ref. * * @return stream_ref The underlying stream referenced by this cuda_stream_view */ constexpr operator cuda::stream_ref() const noexcept { return value(); } /** * @briefreturn{true if the wrapped stream is the CUDA per-thread default stream} */ [[nodiscard]] inline bool is_per_thread_default() const noexcept; /** * @briefreturn{true if the wrapped stream is explicitly the CUDA legacy default stream} */ [[nodiscard]] inline bool is_default() const noexcept; /** * @brief Synchronize the viewed CUDA stream. * * Calls `cudaStreamSynchronize()`. * * @throw rmm::cuda_error if stream synchronization fails */ void synchronize() const { RMM_CUDA_TRY(cudaStreamSynchronize(stream_)); } /** * @brief Synchronize the viewed CUDA stream. Does not throw if there is an error. * * Calls `cudaStreamSynchronize()` and asserts if there is an error. 
*/ void synchronize_no_throw() const noexcept { RMM_ASSERT_CUDA_SUCCESS(cudaStreamSynchronize(stream_)); } private: cudaStream_t stream_{}; }; /** * @brief Static cuda_stream_view of the default stream (stream 0), for convenience */ static constexpr cuda_stream_view cuda_stream_default{}; /** * @brief Static cuda_stream_view of cudaStreamLegacy, for convenience */ static const cuda_stream_view cuda_stream_legacy{ cudaStreamLegacy // NOLINT(cppcoreguidelines-pro-type-cstyle-cast) }; /** * @brief Static cuda_stream_view of cudaStreamPerThread, for convenience */ static const cuda_stream_view cuda_stream_per_thread{ cudaStreamPerThread // NOLINT(cppcoreguidelines-pro-type-cstyle-cast) }; // Need to avoid putting is_per_thread_default and is_default into the group twice. /** @} */ // end of group [[nodiscard]] inline bool cuda_stream_view::is_per_thread_default() const noexcept { #ifdef CUDA_API_PER_THREAD_DEFAULT_STREAM return value() == cuda_stream_per_thread || value() == nullptr; #else return value() == cuda_stream_per_thread; #endif } [[nodiscard]] inline bool cuda_stream_view::is_default() const noexcept { #ifdef CUDA_API_PER_THREAD_DEFAULT_STREAM return value() == cuda_stream_legacy; #else return value() == cuda_stream_legacy || value() == nullptr; #endif } /** * @addtogroup cuda_streams * @{ */ /** * @brief Equality comparison operator for streams * * @param lhs The first stream view to compare * @param rhs The second stream view to compare * @return true if equal, false if unequal */ inline bool operator==(cuda_stream_view lhs, cuda_stream_view rhs) { return lhs.value() == rhs.value(); } /** * @brief Inequality comparison operator for streams * * @param lhs The first stream view to compare * @param rhs The second stream view to compare * @return true if unequal, false if equal */ inline bool operator!=(cuda_stream_view lhs, cuda_stream_view rhs) { return not(lhs == rhs); } /** * @brief Output stream operator for printing / logging streams * * @param os The output ostream * @param stream The cuda_stream_view to output * @return std::ostream& The output ostream */ inline std::ostream& operator<<(std::ostream& os, cuda_stream_view stream) { os << stream.value(); return os; } /** @} */ // end of group } // namespace rmm
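A small, illustrative sketch of the non-owning view (not part of the header above):

```cpp
#include <rmm/cuda_stream_view.hpp>

#include <cuda_runtime_api.h>

int main()
{
  // Wrap an existing raw stream without taking ownership of it.
  cudaStream_t raw{};
  cudaStreamCreate(&raw);
  rmm::cuda_stream_view view{raw};

  view.synchronize();  // cudaStreamSynchronize underneath
  bool const explicit_stream = !view.is_default();

  cudaStreamDestroy(raw);  // the view never destroys the stream it wraps

  // Named views of the default streams are provided for convenience.
  return (rmm::cuda_stream_legacy.is_default() && explicit_stream) ? 0 : 1;
}
```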
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/exec_policy.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** @file exec_policy.hpp Thrust execution policy that uses RMM's Thrust Allocator Adaptor. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/mr/device/thrust_allocator_adaptor.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/system/cuda/execution_policy.h> #include <thrust/version.h> namespace rmm { /** * @addtogroup thrust_integrations * @{ * @file */ /** * @brief Synchronous execution policy for allocations using thrust */ using thrust_exec_policy_t = thrust::detail::execute_with_allocator<rmm::mr::thrust_allocator<char>, thrust::cuda_cub::execute_on_stream_base>; /** * @brief Helper class usable as a Thrust CUDA execution policy * that uses RMM for temporary memory allocation on the specified stream. */ class exec_policy : public thrust_exec_policy_t { public: /** * @brief Construct a new execution policy object * * @param stream The stream on which to allocate temporary memory * @param mr The resource to use for allocating temporary memory */ explicit exec_policy(cuda_stream_view stream = cuda_stream_default, rmm::mr::device_memory_resource* mr = mr::get_current_device_resource()) : thrust_exec_policy_t( thrust::cuda::par(rmm::mr::thrust_allocator<char>(stream, mr)).on(stream.value())) { } }; #if THRUST_VERSION >= 101600 /** * @brief Asynchronous execution policy for allocations using thrust */ using thrust_exec_policy_nosync_t = thrust::detail::execute_with_allocator<rmm::mr::thrust_allocator<char>, thrust::cuda_cub::execute_on_stream_nosync_base>; /** * @brief Helper class usable as a Thrust CUDA execution policy * that uses RMM for temporary memory allocation on the specified stream * and which allows the Thrust backend to skip stream synchronizations that * are not required for correctness. */ class exec_policy_nosync : public thrust_exec_policy_nosync_t { public: explicit exec_policy_nosync( cuda_stream_view stream = cuda_stream_default, rmm::mr::device_memory_resource* mr = mr::get_current_device_resource()) : thrust_exec_policy_nosync_t( thrust::cuda::par_nosync(rmm::mr::thrust_allocator<char>(stream, mr)).on(stream.value())) { } }; #else using thrust_exec_policy_nosync_t = thrust_exec_policy_t; ///< When used with Thrust < 1.16.0, thrust_exec_policy_nosync_t is an ///< alias for thrust_exec_policy_t using exec_policy_nosync = exec_policy; ///< When used with Thrust < 1.16.0, exec_policy_nosync is an alias for exec_policy #endif /** @} */ // end of group } // namespace rmm
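An illustrative sketch of running Thrust algorithms through RMM (not part of the header above); it assumes a Thrust version recent enough that `exec_policy_nosync` is the asynchronous variant (older versions alias it to `exec_policy`).

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/device_uvector.hpp>
#include <rmm/exec_policy.hpp>

#include <thrust/sequence.h>
#include <thrust/sort.h>

int main()
{
  rmm::cuda_stream stream;
  rmm::device_uvector<int> vec(1000, stream.view());

  // Thrust runs on `stream` and obtains its temporary storage through RMM.
  thrust::sequence(rmm::exec_policy(stream.view()), vec.begin(), vec.end());
  thrust::sort(rmm::exec_policy_nosync(stream.view()), vec.begin(), vec.end());

  stream.synchronize();
  return 0;
}
```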
rapidsai_public_repos/rmm/include
rapidsai_public_repos/rmm/include/rmm/device_vector.hpp
/* * Copyright (c) 2020, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/thrust_allocator_adaptor.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/device_vector.h> namespace rmm { /** * @addtogroup thrust_integrations * @{ * @file */ /** * @brief Alias for a thrust::device_vector that uses RMM for memory allocation. * */ template <typename T> using device_vector = thrust::device_vector<T, rmm::mr::thrust_allocator<T>>; /** @} */ // end of group } // namespace rmm
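A brief, illustrative sketch (not part of the header above). Unlike `device_uvector`, this alias keeps `thrust::device_vector` semantics, so elements are initialized on construction and allocation goes through the current RMM device resource.

```cpp
#include <rmm/device_vector.hpp>

#include <thrust/fill.h>

int main()
{
  // Allocates through the current RMM device memory resource.
  rmm::device_vector<float> vec(1000);

  thrust::fill(vec.begin(), vec.end(), 1.0f);

  float const first = vec[0];  // device-to-host copy via Thrust's proxy reference
  return first == 1.0f ? 0 : 1;
}
```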
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/logging_assert.hpp
/*
 * Copyright (c) 2020, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

// Only include <rmm/logger.hpp> if needed in RMM_LOGGING_ASSERT below. The
// logger can be extremely expensive to compile, so we want to avoid including
// it.
#if !defined(NDEBUG)
#include <cassert>

#include <rmm/detail/error.hpp>
#include <rmm/logger.hpp>
#endif

/**
 * @brief Assertion that logs a CRITICAL log message on failure.
 */
#ifdef NDEBUG
#define RMM_LOGGING_ASSERT(_expr) (void)0
#elif SPDLOG_ACTIVE_LEVEL < SPDLOG_LEVEL_OFF
#define RMM_LOGGING_ASSERT(_expr)                                                                  \
  do {                                                                                             \
    bool const success = (_expr);                                                                  \
    if (!success) {                                                                                \
      RMM_LOG_CRITICAL(                                                                            \
        "[" __FILE__ ":" RMM_STRINGIFY(__LINE__) "] Assertion " RMM_STRINGIFY(_expr) " failed.");  \
      rmm::logger().flush();                                                                       \
      /* NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-array-to-pointer-decay) */                    \
      assert(success);                                                                             \
    }                                                                                              \
  } while (0)
#else
#define RMM_LOGGING_ASSERT(_expr) assert((_expr));
#endif
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/exec_check_disable.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

/**
 * @brief Macro for suppressing __host__ / __device__ function markup
 * checks that the NVCC compiler does.
 *
 * At times it is useful to place rmm host only types inside containers
 * that work on both host and device. Doing so will generate warnings
 * of using a host only type inside a host / device type.
 *
 * This macro can be used to silence said warnings
 *
 */

// #pragma nv_exec_check_disable is only recognized by NVCC so verify
// that we have both the NVCC compiler and we are compiling a CUDA
// source
#if defined(__CUDACC__) && defined(__NVCC__)
#define RMM_EXEC_CHECK_DISABLE _Pragma("nv_exec_check_disable")
#else
#define RMM_EXEC_CHECK_DISABLE
#endif
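A hypothetical sketch of how the macro is typically placed: immediately before a `__host__ __device__` function whose body only touches host-only RMM types. The function shown is illustrative only.

```cpp
// Suppress the NVCC host/device markup warning for the function that follows.
RMM_EXEC_CHECK_DISABLE
template <typename T>
__host__ __device__ void inspect_host_only_member(T const& value)
{
  // ... body that is only ever executed on the host ...
}
```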
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/export.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

// Macros used for defining symbol visibility, only GLIBC is supported
#if (defined(__GNUC__) && !defined(__MINGW32__) && !defined(__MINGW64__))
#define RMM_EXPORT __attribute__((visibility("default")))
#define RMM_HIDDEN __attribute__((visibility("hidden")))
#else
#define RMM_EXPORT
#define RMM_HIDDEN
#endif
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/dynamic_load_runtime.hpp
/* * Copyright (c) 2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <cuda_runtime_api.h> #include <dlfcn.h> #include <memory> #include <optional> namespace rmm::detail { /** * @brief `dynamic_load_runtime` loads the cuda runtime library at runtime * * By loading the cudart library at runtime we can use functions that * are added in newer minor versions of the cuda runtime. */ struct dynamic_load_runtime { static void* get_cuda_runtime_handle() { auto close_cudart = [](void* handle) { ::dlclose(handle); }; auto open_cudart = []() { ::dlerror(); const int major = CUDART_VERSION / 1000; // In CUDA 12 the SONAME is correctly defined as libcudart.12, but for // CUDA<=11 it includes an extra 0 minor version e.g. libcudart.11.0. We // also allow finding the linker name. const std::string libname_ver_cuda_11 = "libcudart.so." + std::to_string(major) + ".0"; const std::string libname_ver_cuda_12 = "libcudart.so." + std::to_string(major); const std::string libname = "libcudart.so"; void* ptr = nullptr; for (auto&& name : {libname_ver_cuda_12, libname_ver_cuda_11, libname}) { ptr = dlopen(name.c_str(), RTLD_LAZY); if (ptr != nullptr) break; } if (ptr != nullptr) { return ptr; } RMM_FAIL("Unable to dlopen cudart"); }; static std::unique_ptr<void, decltype(close_cudart)> cudart_handle{open_cudart(), close_cudart}; return cudart_handle.get(); } template <typename... Args> using function_sig = std::add_pointer_t<cudaError_t(Args...)>; template <typename signature> static std::optional<signature> function(const char* func_name) { auto* runtime = get_cuda_runtime_handle(); auto* handle = ::dlsym(runtime, func_name); if (!handle) { return std::nullopt; } auto* function_ptr = reinterpret_cast<signature>(handle); return std::optional<signature>(function_ptr); } }; #if defined(RMM_STATIC_CUDART) // clang-format off #define RMM_CUDART_API_WRAPPER(name, signature) \ template <typename... Args> \ static cudaError_t name(Args... args) \ { \ _Pragma("GCC diagnostic push") \ _Pragma("GCC diagnostic ignored \"-Waddress\"") \ static_assert(static_cast<signature>(::name), \ "Failed to find #name function with arguments #signature"); \ _Pragma("GCC diagnostic pop") \ return ::name(args...); \ } // clang-format on #else #define RMM_CUDART_API_WRAPPER(name, signature) \ template <typename... Args> \ static cudaError_t name(Args... args) \ { \ static const auto func = dynamic_load_runtime::function<signature>(#name); \ if (func) { return (*func)(args...); } \ RMM_FAIL("Failed to find #name function in libcudart.so"); \ } #endif #if CUDART_VERSION >= 11020 // 11.2 introduced cudaMallocAsync /** * @brief Bind to the stream-ordered memory allocator functions * at runtime. * * This allows RMM users to compile/link against CUDA 11.2+ and run with * < CUDA 11.2 runtime as these functions are found at call time. 
*/ struct async_alloc { static bool is_supported() { #if defined(RMM_STATIC_CUDART) static bool runtime_supports_pool = (CUDART_VERSION >= 11020); #else static bool runtime_supports_pool = dynamic_load_runtime::function<dynamic_load_runtime::function_sig<void*, cudaStream_t>>( "cudaFreeAsync") .has_value(); #endif static auto driver_supports_pool{[] { int cuda_pool_supported{}; auto result = cudaDeviceGetAttribute(&cuda_pool_supported, cudaDevAttrMemoryPoolsSupported, rmm::get_current_cuda_device().value()); return result == cudaSuccess and cuda_pool_supported == 1; }()}; return runtime_supports_pool and driver_supports_pool; } /** * @brief Check whether the specified `cudaMemAllocationHandleType` is supported on the present * CUDA driver/runtime version. * * @note This query was introduced in CUDA 11.3 so on CUDA 11.2 this function will only return * true for `cudaMemHandleTypeNone`. * * @param handle_type An IPC export handle type to check for support. * @return true if supported * @return false if unsupported */ static bool is_export_handle_type_supported(cudaMemAllocationHandleType handle_type) { int supported_handle_types_bitmask{}; #if CUDART_VERSION >= 11030 // 11.3 introduced cudaDevAttrMemoryPoolSupportedHandleTypes if (cudaMemHandleTypeNone != handle_type) { auto const result = cudaDeviceGetAttribute(&supported_handle_types_bitmask, cudaDevAttrMemoryPoolSupportedHandleTypes, rmm::get_current_cuda_device().value()); // Don't throw on cudaErrorInvalidValue auto const unsupported_runtime = (result == cudaErrorInvalidValue); if (unsupported_runtime) return false; // throw any other error that may have occurred RMM_CUDA_TRY(result); } #endif return (supported_handle_types_bitmask & handle_type) == handle_type; } template <typename... Args> using cudart_sig = dynamic_load_runtime::function_sig<Args...>; using cudaMemPoolCreate_sig = cudart_sig<cudaMemPool_t*, const cudaMemPoolProps*>; RMM_CUDART_API_WRAPPER(cudaMemPoolCreate, cudaMemPoolCreate_sig); using cudaMemPoolSetAttribute_sig = cudart_sig<cudaMemPool_t, cudaMemPoolAttr, void*>; RMM_CUDART_API_WRAPPER(cudaMemPoolSetAttribute, cudaMemPoolSetAttribute_sig); using cudaMemPoolDestroy_sig = cudart_sig<cudaMemPool_t>; RMM_CUDART_API_WRAPPER(cudaMemPoolDestroy, cudaMemPoolDestroy_sig); using cudaMallocFromPoolAsync_sig = cudart_sig<void**, size_t, cudaMemPool_t, cudaStream_t>; RMM_CUDART_API_WRAPPER(cudaMallocFromPoolAsync, cudaMallocFromPoolAsync_sig); using cudaFreeAsync_sig = cudart_sig<void*, cudaStream_t>; RMM_CUDART_API_WRAPPER(cudaFreeAsync, cudaFreeAsync_sig); using cudaDeviceGetDefaultMemPool_sig = cudart_sig<cudaMemPool_t*, int>; RMM_CUDART_API_WRAPPER(cudaDeviceGetDefaultMemPool, cudaDeviceGetDefaultMemPool_sig); }; #endif #undef RMM_CUDART_API_WRAPPER } // namespace rmm::detail
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/aligned.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cassert> #include <cstddef> #include <cstdint> #include <memory> #include <new> namespace rmm::detail { /** * @brief Default alignment used for host memory allocated by RMM. * */ static constexpr std::size_t RMM_DEFAULT_HOST_ALIGNMENT{alignof(std::max_align_t)}; /** * @brief Default alignment used for CUDA memory allocation. * */ static constexpr std::size_t CUDA_ALLOCATION_ALIGNMENT{256}; /** * @brief Returns whether or not `n` is a power of 2. * */ constexpr bool is_pow2(std::size_t value) { return (0 == (value & (value - 1))); } /** * @brief Returns whether or not `alignment` is a valid memory alignment. * */ constexpr bool is_supported_alignment(std::size_t alignment) { return is_pow2(alignment); } /** * @brief Align up to nearest multiple of specified power of 2 * * @param[in] v value to align * @param[in] alignment amount, in bytes, must be a power of 2 * * @return Return the aligned value, as one would expect */ constexpr std::size_t align_up(std::size_t value, std::size_t alignment) noexcept { assert(is_supported_alignment(alignment)); return (value + (alignment - 1)) & ~(alignment - 1); } /** * @brief Align down to the nearest multiple of specified power of 2 * * @param[in] v value to align * @param[in] alignment amount, in bytes, must be a power of 2 * * @return Return the aligned value, as one would expect */ constexpr std::size_t align_down(std::size_t value, std::size_t alignment) noexcept { assert(is_supported_alignment(alignment)); return value & ~(alignment - 1); } /** * @brief Checks whether a value is aligned to a multiple of a specified power of 2 * * @param[in] v value to check for alignment * @param[in] alignment amount, in bytes, must be a power of 2 * * @return true if aligned */ constexpr bool is_aligned(std::size_t value, std::size_t alignment) noexcept { assert(is_supported_alignment(alignment)); return value == align_down(value, alignment); } inline bool is_pointer_aligned(void* ptr, std::size_t alignment = CUDA_ALLOCATION_ALIGNMENT) { // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast) return rmm::detail::is_aligned(reinterpret_cast<ptrdiff_t>(ptr), alignment); } /** * @brief Allocates sufficient memory to satisfy the requested size `bytes` with * alignment `alignment` using the unary callable `alloc` to allocate memory. * * Given a pointer `p` to an allocation of size `n` returned from the unary * callable `alloc`, the pointer `q` returned from `aligned_alloc` points to a * location within the `n` bytes with sufficient space for `bytes` that * satisfies `alignment`. * * In order to retrieve the original allocation pointer `p`, the offset * between `p` and `q` is stored at `q - sizeof(std::ptrdiff_t)`. 
* * Allocations returned from `aligned_allocate` *MUST* be freed by calling * `aligned_deallocate` with the same arguments for `bytes` and `alignment` with * a compatible unary `dealloc` callable capable of freeing the memory returned * from `alloc`. * * If `alignment` is not a power of 2, behavior is undefined. * * @param bytes The desired size of the allocation * @param alignment Desired alignment of allocation * @param alloc Unary callable given a size `n` will allocate at least `n` bytes * of host memory. * @tparam Alloc a unary callable type that allocates memory. * @return void* Pointer into allocation of at least `bytes` with desired * `alignment`. */ template <typename Alloc> void* aligned_allocate(std::size_t bytes, std::size_t alignment, Alloc alloc) { assert(is_pow2(alignment)); // allocate memory for bytes, plus potential alignment correction, // plus store of the correction offset std::size_t padded_allocation_size{bytes + alignment + sizeof(std::ptrdiff_t)}; char* const original = static_cast<char*>(alloc(padded_allocation_size)); // account for storage of offset immediately prior to the aligned pointer // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) void* aligned{original + sizeof(std::ptrdiff_t)}; // std::align modifies `aligned` to point to the first aligned location std::align(alignment, bytes, aligned, padded_allocation_size); // Compute the offset between the original and aligned pointers std::ptrdiff_t offset = static_cast<char*>(aligned) - original; // Store the offset immediately before the aligned pointer // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) *(static_cast<std::ptrdiff_t*>(aligned) - 1) = offset; return aligned; } /** * @brief Frees an allocation returned from `aligned_allocate`. * * Allocations returned from `aligned_allocate` *MUST* be freed by calling * `aligned_deallocate` with the same arguments for `bytes` and `alignment` * with a compatible unary `dealloc` callable capable of freeing the memory * returned from `alloc`. * * @param p The aligned pointer to deallocate * @param bytes The number of bytes requested from `aligned_allocate` * @param alignment The alignment required from `aligned_allocate` * @param dealloc A unary callable capable of freeing memory returned from * `alloc` in `aligned_allocate`. * @tparam Dealloc A unary callable type that deallocates memory. */ template <typename Dealloc> // NOLINTNEXTLINE(bugprone-easily-swappable-parameters) void aligned_deallocate(void* ptr, std::size_t bytes, std::size_t alignment, Dealloc dealloc) { (void)alignment; // Get offset from the location immediately prior to the aligned pointer // NOLINTNEXTLINE std::ptrdiff_t const offset = *(reinterpret_cast<std::ptrdiff_t*>(ptr) - 1); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) void* const original = static_cast<char*>(ptr) - offset; dealloc(original); } } // namespace rmm::detail
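These are internal (`rmm::detail`) helpers, but a small sketch clarifies the contract: the same `bytes`/`alignment` pair must be passed to `aligned_deallocate`, with a deallocation callable compatible with the allocation callable. `std::malloc`/`std::free` are used here purely for illustration.

```cpp
#include <rmm/detail/aligned.hpp>

#include <cstdlib>

void aligned_host_buffer_example()
{
  // Allocate 1000 bytes aligned to 256 bytes, backed by std::malloc.
  void* ptr = rmm::detail::aligned_allocate(
    1000, rmm::detail::CUDA_ALLOCATION_ALIGNMENT, [](std::size_t size) { return std::malloc(size); });

  // ... use ptr ...

  // Free with the same size and alignment, and a deallocator matching the allocator.
  rmm::detail::aligned_deallocate(
    ptr, 1000, rmm::detail::CUDA_ALLOCATION_ALIGNMENT, [](void* p) { std::free(p); });
}
```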
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/error.hpp
/* * Copyright (c) 2020, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cuda_runtime_api.h> #include <cassert> #include <iostream> #include <stdexcept> #include <string> namespace rmm { /** * @brief Exception thrown when logical precondition is violated. * * @ingroup errors * * This exception should not be thrown directly and is instead thrown by the * RMM_EXPECTS macro. * */ struct logic_error : public std::logic_error { using std::logic_error::logic_error; }; /** * @brief Exception thrown when a CUDA error is encountered. * * @ingroup errors * */ struct cuda_error : public std::runtime_error { using std::runtime_error::runtime_error; }; /** * @brief Exception thrown when an RMM allocation fails * * @ingroup errors * */ class bad_alloc : public std::bad_alloc { public: /** * @brief Constructs a bad_alloc with the error message. * * @param msg Message to be associated with the exception */ bad_alloc(const char* msg) : _what{std::string{std::bad_alloc::what()} + ": " + msg} {} /** * @brief Constructs a bad_alloc with the error message. * * @param msg Message to be associated with the exception */ bad_alloc(std::string const& msg) : bad_alloc{msg.c_str()} {} /** * @briefreturn{The explanatory string} */ [[nodiscard]] const char* what() const noexcept override { return _what.c_str(); } private: std::string _what; }; /** * @brief Exception thrown when RMM runs out of memory * * @ingroup errors * * This error should only be thrown when we know for sure a resource is out of memory. */ class out_of_memory : public bad_alloc { public: /** * @brief Constructs an out_of_memory with the error message. * * @param msg Message to be associated with the exception */ out_of_memory(const char* msg) : bad_alloc{std::string{"out_of_memory: "} + msg} {} /** * @brief Constructs an out_of_memory with the error message. * * @param msg Message to be associated with the exception */ out_of_memory(std::string const& msg) : out_of_memory{msg.c_str()} {} }; /** * @brief Exception thrown when attempting to access outside of a defined range * * @ingroup errors * */ class out_of_range : public std::out_of_range { using std::out_of_range::out_of_range; }; } // namespace rmm #define STRINGIFY_DETAIL(x) #x #define RMM_STRINGIFY(x) STRINGIFY_DETAIL(x) /** * @brief Macro for checking (pre-)conditions that throws an exception when * a condition is violated. * * Defaults to throwing rmm::logic_error, but a custom exception may also be * specified. * * Example usage: * ``` * // throws rmm::logic_error * RMM_EXPECTS(p != nullptr, "Unexpected null pointer"); * * // throws std::runtime_error * RMM_EXPECTS(p != nullptr, "Unexpected nullptr", std::runtime_error); * ``` * @param ... This macro accepts either two or three arguments: * - The first argument must be an expression that evaluates to true or * false, and is the condition being checked. * - The second argument is a string literal used to construct the `what` of * the exception. * - When given, the third argument is the exception to be thrown. 
When not * specified, defaults to `rmm::logic_error`. * @throw `_exception_type` if the condition evaluates to 0 (false). */ #define RMM_EXPECTS(...) \ GET_RMM_EXPECTS_MACRO(__VA_ARGS__, RMM_EXPECTS_3, RMM_EXPECTS_2) \ (__VA_ARGS__) #define GET_RMM_EXPECTS_MACRO(_1, _2, _3, NAME, ...) NAME #define RMM_EXPECTS_3(_condition, _reason, _exception_type) \ (!!(_condition)) ? static_cast<void>(0) \ : throw _exception_type /*NOLINT(bugprone-macro-parentheses)*/ \ { \ "RMM failure at: " __FILE__ ":" RMM_STRINGIFY(__LINE__) ": " _reason \ } #define RMM_EXPECTS_2(_condition, _reason) RMM_EXPECTS_3(_condition, _reason, rmm::logic_error) /** * @brief Indicates that an erroneous code path has been taken. * * Example usage: * ```c++ * // Throws rmm::logic_error * RMM_FAIL("Unsupported code path"); * * // Throws `std::runtime_error` * RMM_FAIL("Unsupported code path", std::runtime_error); * ``` */ #define RMM_FAIL(...) \ GET_RMM_FAIL_MACRO(__VA_ARGS__, RMM_FAIL_2, RMM_FAIL_1) \ (__VA_ARGS__) #define GET_RMM_FAIL_MACRO(_1, _2, NAME, ...) NAME #define RMM_FAIL_2(_what, _exception_type) \ /*NOLINTNEXTLINE(bugprone-macro-parentheses)*/ \ throw _exception_type{"RMM failure at:" __FILE__ ":" RMM_STRINGIFY(__LINE__) ": " _what}; #define RMM_FAIL_1(_what) RMM_FAIL_2(_what, rmm::logic_error) /** * @brief Error checking macro for CUDA runtime API functions. * * Invokes a CUDA runtime API function call. If the call does not return * `cudaSuccess`, invokes cudaGetLastError() to clear the error and throws an * exception detailing the CUDA error that occurred * * Defaults to throwing rmm::cuda_error, but a custom exception may also be * specified. * * Example: * ```c++ * * // Throws rmm::cuda_error if `cudaMalloc` fails * RMM_CUDA_TRY(cudaMalloc(&p, 100)); * * // Throws std::runtime_error if `cudaMalloc` fails * RMM_CUDA_TRY(cudaMalloc(&p, 100), std::runtime_error); * ``` * */ #define RMM_CUDA_TRY(...) \ GET_RMM_CUDA_TRY_MACRO(__VA_ARGS__, RMM_CUDA_TRY_2, RMM_CUDA_TRY_1) \ (__VA_ARGS__) #define GET_RMM_CUDA_TRY_MACRO(_1, _2, NAME, ...) NAME #define RMM_CUDA_TRY_2(_call, _exception_type) \ do { \ cudaError_t const error = (_call); \ if (cudaSuccess != error) { \ cudaGetLastError(); \ /*NOLINTNEXTLINE(bugprone-macro-parentheses)*/ \ throw _exception_type{std::string{"CUDA error at: "} + __FILE__ + ":" + \ RMM_STRINGIFY(__LINE__) + ": " + cudaGetErrorName(error) + " " + \ cudaGetErrorString(error)}; \ } \ } while (0) #define RMM_CUDA_TRY_1(_call) RMM_CUDA_TRY_2(_call, rmm::cuda_error) /** * @brief Error checking macro for CUDA memory allocation calls. * * Invokes a CUDA memory allocation function call. If the call does not return * `cudaSuccess`, invokes cudaGetLastError() to clear the error and throws an * exception detailing the CUDA error that occurred * * Defaults to throwing rmm::bad_alloc, but when `cudaErrorMemoryAllocation` is returned, * rmm::out_of_memory is thrown instead. 
*/ #define RMM_CUDA_TRY_ALLOC(_call) \ do { \ cudaError_t const error = (_call); \ if (cudaSuccess != error) { \ cudaGetLastError(); \ auto const msg = std::string{"CUDA error at: "} + __FILE__ + ":" + RMM_STRINGIFY(__LINE__) + \ ": " + cudaGetErrorName(error) + " " + cudaGetErrorString(error); \ if (cudaErrorMemoryAllocation == error) { \ throw rmm::out_of_memory{msg}; \ } else { \ throw rmm::bad_alloc{msg}; \ } \ } \ } while (0) /** * @brief Error checking macro similar to `assert` for CUDA runtime API calls * * This utility should be used in situations where extra error checking is desired in "Debug" * builds, or in situations where an error case cannot throw an exception (such as a class * destructor). * * In "Release" builds, simply invokes the `_call`. * * In "Debug" builds, invokes `_call` and uses `assert` to verify the returned `cudaError_t` is * equal to `cudaSuccess`. * * * Replaces usecases such as: * ``` * auto status = cudaRuntimeApi(...); * assert(status == cudaSuccess); * ``` * * Example: * ``` * RMM_ASSERT_CUDA_SUCCESS(cudaRuntimeApi(...)); * ``` * */ #ifdef NDEBUG #define RMM_ASSERT_CUDA_SUCCESS(_call) \ do { \ (_call); \ } while (0); #else #define RMM_ASSERT_CUDA_SUCCESS(_call) \ do { \ cudaError_t const status__ = (_call); \ if (status__ != cudaSuccess) { \ std::cerr << "CUDA Error detected. " << cudaGetErrorName(status__) << " " \ << cudaGetErrorString(status__) << std::endl; \ } \ /* NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-array-to-pointer-decay) */ \ assert(status__ == cudaSuccess); \ } while (0) #endif
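A short usage sketch of the macros above; the function itself is illustrative and not part of the header.

```cpp
#include <rmm/detail/error.hpp>

#include <cuda_runtime_api.h>

#include <cstddef>

void copy_to_device(void* dst, void const* src, std::size_t bytes)
{
  // Throws rmm::logic_error with file/line information if the condition is false.
  RMM_EXPECTS(dst != nullptr && src != nullptr, "Unexpected null pointer");

  // Throws rmm::cuda_error if the runtime call does not return cudaSuccess.
  RMM_CUDA_TRY(cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice));
}
```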
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/thrust_namespace.h
/*
 * Copyright (c) 2022, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>  // namespace macros

#ifdef THRUST_WRAPPED_NAMESPACE

// Ensure the namespace exist before we import it
// so that this include can occur before thrust includes
namespace THRUST_WRAPPED_NAMESPACE {
namespace thrust {
}
}  // namespace THRUST_WRAPPED_NAMESPACE

namespace rmm {
using namespace THRUST_WRAPPED_NAMESPACE;
}

#endif
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/stack_trace.hpp
/*
 * Copyright (c) 2020-2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

#include <rmm/detail/error.hpp>

// execinfo is a linux-only library, so stack traces will only be available on
// linux systems.
#if (defined(__GNUC__) && !defined(__MINGW32__) && !defined(__MINGW64__))
#define RMM_ENABLE_STACK_TRACES
#endif

#include <sstream>

#if defined(RMM_ENABLE_STACK_TRACES)
#include <cxxabi.h>
#include <dlfcn.h>
#include <execinfo.h>

#include <cstddef>
#include <memory>
#include <vector>
#endif

namespace rmm::detail {

/**
 * @brief stack_trace is a class that will capture a stack on instantiation for output later.
 * It can then be used in an output stream to display stack information.
 *
 * rmm::detail::stack_trace saved_stack;
 *
 * std::cout << "callstack: " << saved_stack;
 *
 */
class stack_trace {
 public:
  stack_trace()
  {
#if defined(RMM_ENABLE_STACK_TRACES)
    const int MaxStackDepth = 64;
    std::array<void*, MaxStackDepth> stack{};
    auto const depth = backtrace(stack.begin(), MaxStackDepth);
    stack_ptrs.insert(stack_ptrs.end(), stack.begin(), stack.begin() + depth);
#endif  // RMM_ENABLE_STACK_TRACES
  }

  friend std::ostream& operator<<(std::ostream& os, const stack_trace& trace)
  {
#if defined(RMM_ENABLE_STACK_TRACES)
    std::unique_ptr<char*, decltype(&::free)> strings(
      backtrace_symbols(trace.stack_ptrs.data(), static_cast<int>(trace.stack_ptrs.size())),
      &::free);

    RMM_EXPECTS(strings != nullptr, "Unexpected null stack trace symbols");
    // Iterate over the stack pointers converting to a string
    for (std::size_t i = 0; i < trace.stack_ptrs.size(); ++i) {
      // Leading index
      os << "#" << i << " in ";
      auto const str = [&] {
        Dl_info info;
        if (dladdr(trace.stack_ptrs[i], &info) != 0) {
          int status = -1;  // Demangle the name. This can occasionally fail
          std::unique_ptr<char, decltype(&::free)> demangled(
            abi::__cxa_demangle(info.dli_sname, nullptr, nullptr, &status), &::free);
          // If it fails, fallback to the dli_name.
          if (status == 0 or (info.dli_sname != nullptr)) {
            auto const* name = status == 0 ? demangled.get() : info.dli_sname;
            return name + std::string(" from ") + info.dli_fname;
          }
        }
        return std::string(strings.get()[i]);
      }();
      os << str << std::endl;
    }
#else
    os << "stack traces disabled" << std::endl;
#endif  // RMM_ENABLE_STACK_TRACES
    return os;
  };

#if defined(RMM_ENABLE_STACK_TRACES)
 private:
  std::vector<void*> stack_ptrs;
#endif  // RMM_ENABLE_STACK_TRACES
};

}  // namespace rmm::detail
0
rapidsai_public_repos/rmm/include/rmm
rapidsai_public_repos/rmm/include/rmm/detail/cuda_util.hpp
/*
 * Copyright (c) 2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

#include <rmm/detail/error.hpp>

namespace rmm::detail {

/// Gets the available and total device memory in bytes for the current device
inline std::pair<std::size_t, std::size_t> available_device_memory()
{
  std::size_t free{};
  std::size_t total{};
  RMM_CUDA_TRY(cudaMemGetInfo(&free, &total));
  return {free, total};
}

}  // namespace rmm::detail
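A small sketch of how this helper is typically used, for example to size an initial memory pool; the one-half factor is illustrative only.

```cpp
// Query free and total memory on the current CUDA device.
auto const [free_bytes, total_bytes] = rmm::detail::available_device_memory();

// e.g. choose an initial pool size of half the currently free memory
auto const initial_pool_size = free_bytes / 2;
```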
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/host/pinned_memory_resource.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/host/host_memory_resource.hpp> #include <cstddef> #include <utility> namespace rmm::mr { /** * @addtogroup host_memory_resources * @{ * @file */ /** * @brief A `host_memory_resource` that uses `cudaMallocHost` to allocate * pinned/page-locked host memory. * * See https://devblogs.nvidia.com/how-optimize-data-transfers-cuda-cc/ */ class pinned_memory_resource final : public host_memory_resource { public: pinned_memory_resource() = default; ~pinned_memory_resource() override = default; pinned_memory_resource(pinned_memory_resource const&) = default; ///< @default_copy_constructor pinned_memory_resource(pinned_memory_resource&&) = default; ///< @default_move_constructor pinned_memory_resource& operator=(pinned_memory_resource const&) = default; ///< @default_copy_assignment{pinned_memory_resource} pinned_memory_resource& operator=(pinned_memory_resource&&) = default; ///< @default_move_assignment{pinned_memory_resource} /** * @brief Query whether the pinned_memory_resource supports use of non-null CUDA streams for * allocation/deallocation. * * @returns bool false. */ [[nodiscard]] bool supports_streams() const noexcept { return false; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool false. */ [[nodiscard]] bool supports_get_mem_info() const noexcept { return false; } /** * @brief Queries the amount of free and total memory for the resource. * * @param stream the stream whose memory manager we want to retrieve * * @returns a pair containing the free memory in bytes in .first and total amount of memory in * .second */ [[nodiscard]] std::pair<std::size_t, std::size_t> get_mem_info(cuda_stream_view stream) const { return std::make_pair(0, 0); } /** * @brief Pretend to support the allocate_async interface, falling back to stream 0 * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified `stream`. * * @param bytes The size of the allocation * @param alignment The expected alignment of the allocation * @return void* Pointer to the newly allocated memory */ [[nodiscard]] void* allocate_async(std::size_t bytes, std::size_t alignment, cuda_stream_view) { return do_allocate(bytes, alignment); } /** * @brief Pretend to support the allocate_async interface, falling back to stream 0 * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified `stream`. * * @param bytes The size of the allocation * @return void* Pointer to the newly allocated memory */ [[nodiscard]] void* allocate_async(std::size_t bytes, cuda_stream_view) { return do_allocate(bytes); } /** * @brief Pretend to support the deallocate_async interface, falling back to stream 0 * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. 
This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param alignment The alignment that was passed to the `allocate` call that returned `p` */ void deallocate_async(void* ptr, std::size_t bytes, std::size_t alignment, cuda_stream_view) { do_deallocate(ptr, rmm::detail::align_up(bytes, alignment)); } /** * @brief Enables the `cuda::mr::device_accessible` property * * This property declares that a `pinned_memory_resource` provides device accessible memory */ friend void get_property(pinned_memory_resource const&, cuda::mr::device_accessible) noexcept {} private: /** * @brief Allocates pinned memory on the host of size at least `bytes` bytes. * * The returned storage is aligned to the specified `alignment` if supported, and to * `alignof(std::max_align_t)` otherwise. * * @throws std::bad_alloc When the requested `bytes` and `alignment` cannot be allocated. * * @param bytes The size of the allocation * @param alignment Alignment of the allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) override { // don't allocate anything if the user requested zero bytes if (0 == bytes) { return nullptr; } // If the requested alignment isn't supported, use default alignment = (rmm::detail::is_supported_alignment(alignment)) ? alignment : rmm::detail::RMM_DEFAULT_HOST_ALIGNMENT; return rmm::detail::aligned_allocate(bytes, alignment, [](std::size_t size) { void* ptr{nullptr}; auto status = cudaMallocHost(&ptr, size); if (cudaSuccess != status) { throw std::bad_alloc{}; } return ptr; }); } /** * @brief Deallocate memory pointed to by `ptr`. * * `ptr` must have been returned by a prior call to `allocate(bytes,alignment)` on a * `host_memory_resource` that compares equal to `*this`, and the storage it points to must not * yet have been deallocated, otherwise behavior is undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the value of `bytes` * that was passed to the `allocate` call that returned `ptr`. * @param alignment Alignment of the allocation. This must be equal to the value of `alignment` * that was passed to the `allocate` call that returned `ptr`. */ void do_deallocate(void* ptr, std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) override { if (nullptr == ptr) { return; } rmm::detail::aligned_deallocate( ptr, bytes, alignment, [](void* ptr) { RMM_ASSERT_CUDA_SUCCESS(cudaFreeHost(ptr)); }); } }; static_assert(cuda::mr::async_resource_with<pinned_memory_resource, cuda::mr::host_accessible, cuda::mr::device_accessible>); /** @} */ // end of group } // namespace rmm::mr
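A usage sketch (not from the header): allocate and release a page-locked staging buffer through the resource.

```cpp
#include <rmm/mr/host/pinned_memory_resource.hpp>

void pinned_staging_example()
{
  rmm::mr::pinned_memory_resource pinned_mr;

  // 1 MiB of page-locked host memory, suitable for fast async host/device copies.
  void* staging = pinned_mr.allocate(1u << 20);

  // ... fill the buffer, cudaMemcpyAsync to the device, etc. ...

  pinned_mr.deallocate(staging, 1u << 20);  // same size as the matching allocate call
}
```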
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/host/new_delete_resource.hpp
/*
 * Copyright (c) 2020-2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

#include <rmm/mr/host/host_memory_resource.hpp>

#include <rmm/detail/aligned.hpp>

#include <cstddef>
#include <utility>

namespace rmm::mr {
/**
 * @addtogroup host_memory_resources
 * @{
 * @file
 */

/**
 * @brief A `host_memory_resource` that uses the global `operator new` and `operator delete` to
 * allocate host memory.
 */
class new_delete_resource final : public host_memory_resource {
 public:
  new_delete_resource()                           = default;
  ~new_delete_resource() override                 = default;
  new_delete_resource(new_delete_resource const&) = default;  ///< @default_copy_constructor
  new_delete_resource(new_delete_resource&&)      = default;  ///< @default_move_constructor
  new_delete_resource& operator=(new_delete_resource const&) =
    default;  ///< @default_copy_assignment{new_delete_resource}
  new_delete_resource& operator=(new_delete_resource&&) =
    default;  ///< @default_move_assignment{new_delete_resource}

 private:
  /**
   * @brief Allocates memory on the host of size at least `bytes` bytes.
   *
   * The returned storage is aligned to the specified `alignment` if supported, and to
   * `alignof(std::max_align_t)` otherwise.
   *
   * @throws std::bad_alloc When the requested `bytes` and `alignment` cannot be allocated.
   *
   * @param bytes The size of the allocation
   * @param alignment Alignment of the allocation
   * @return Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes,
                    std::size_t alignment = rmm::detail::RMM_DEFAULT_HOST_ALIGNMENT) override
  {
    // If the requested alignment isn't supported, use default
    alignment = (rmm::detail::is_supported_alignment(alignment))
                  ? alignment
                  : rmm::detail::RMM_DEFAULT_HOST_ALIGNMENT;

    return rmm::detail::aligned_allocate(
      bytes, alignment, [](std::size_t size) { return ::operator new(size); });
  }

  /**
   * @brief Deallocate memory pointed to by `ptr`.
   *
   * `ptr` must have been returned by a prior call to `allocate(bytes,alignment)` on a
   * `host_memory_resource` that compares equal to `*this`, and the storage it points to must not
   * yet have been deallocated, otherwise behavior is undefined.
   *
   * @param ptr Pointer to be deallocated
   * @param bytes The size in bytes of the allocation. This must be equal to the value of `bytes`
   * that was passed to the `allocate` call that returned `ptr`.
   * @param alignment Alignment of the allocation. This must be equal to the value of `alignment`
   * that was passed to the `allocate` call that returned `ptr`.
   */
  void do_deallocate(void* ptr,
                     std::size_t bytes,
                     std::size_t alignment = rmm::detail::RMM_DEFAULT_HOST_ALIGNMENT) override
  {
    rmm::detail::aligned_deallocate(
      ptr, bytes, alignment, [](void* ptr) { ::operator delete(ptr); });
  }
};

/** @} */  // end of group
}  // namespace rmm::mr
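A usage sketch (illustrative): pageable host allocations routed through the global `operator new`/`operator delete`.

```cpp
#include <rmm/mr/host/new_delete_resource.hpp>

void host_buffer_example()
{
  rmm::mr::new_delete_resource host_mr;

  void* buf = host_mr.allocate(256, 64);  // 256 bytes, aligned to 64 bytes

  // ... use buf on the host ...

  host_mr.deallocate(buf, 256, 64);  // must pass the same size and alignment
}
```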
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/host/host_memory_resource.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <cuda/memory_resource> #include <cstddef> #include <utility> namespace rmm::mr { /** * @addtogroup host_memory_resources * @{ * @file */ /** * @brief Base class for host memory allocation. * * This is based on `std::pmr::memory_resource`: * https://en.cppreference.com/w/cpp/memory/memory_resource * * When C++17 is available for use in RMM, `rmm::host_memory_resource` should * inherit from `std::pmr::memory_resource`. * * This class serves as the interface that all host memory resource * implementations must satisfy. * * There are two private, pure virtual functions that all derived classes must * implement: `do_allocate` and `do_deallocate`. Optionally, derived classes may * also override `is_equal`. By default, `is_equal` simply performs an identity * comparison. * * The public, non-virtual functions `allocate`, `deallocate`, and `is_equal` * simply call the private virtual functions. The reason for this is to allow * implementing shared, default behavior in the base class. For example, the * base class' `allocate` function may log every allocation, no matter what * derived class implementation is used. * */ class host_memory_resource { public: host_memory_resource() = default; virtual ~host_memory_resource() = default; host_memory_resource(host_memory_resource const&) = default; ///< @default_copy_constructor host_memory_resource(host_memory_resource&&) noexcept = default; ///< @default_move_constructor host_memory_resource& operator=(host_memory_resource const&) = default; ///< @default_copy_assignment{host_memory_resource} host_memory_resource& operator=(host_memory_resource&&) noexcept = default; ///< @default_move_assignment{host_memory_resource} /** * @brief Allocates memory on the host of size at least `bytes` bytes. * * The returned storage is aligned to the specified `alignment` if supported, and to * `alignof(std::max_align_t)` otherwise. * * @throws std::bad_alloc When the requested `bytes` and `alignment` cannot be allocated. * * @param bytes The size of the allocation * @param alignment Alignment of the allocation * @return void* Pointer to the newly allocated memory */ void* allocate(std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) { return do_allocate(bytes, alignment); } /** * @brief Deallocate memory pointed to by `ptr`. * * `ptr` must have been returned by a prior call to `allocate(bytes,alignment)` on a * `host_memory_resource` that compares equal to `*this`, and the storage it points to must not * yet have been deallocated, otherwise behavior is undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the value of `bytes` * that was passed to the `allocate` call that returned `ptr`. * @param alignment Alignment of the allocation. This must be equal to the value of `alignment` * that was passed to the `allocate` call that returned `ptr`. 
*/ void deallocate(void* ptr, std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) { do_deallocate(ptr, bytes, alignment); } /** * @brief Compare this resource to another. * * Two `host_memory_resource`s compare equal if and only if memory allocated from one * `host_memory_resource` can be deallocated from the other and vice versa. * * By default, simply checks if \p *this and \p other refer to the same object, i.e., does not * check if they are two objects of the same class. * * @param other The other resource to compare to * @returns true if the two resources are equivalent */ [[nodiscard]] bool is_equal(host_memory_resource const& other) const noexcept { return do_is_equal(other); } /** * @brief Comparison operator with another device_memory_resource * * @param other The other resource to compare to * @return true If the two resources are equivalent * @return false If the two resources are not equivalent */ [[nodiscard]] bool operator==(host_memory_resource const& other) const noexcept { return do_is_equal(other); } /** * @brief Comparison operator with another device_memory_resource * * @param other The other resource to compare to * @return false If the two resources are equivalent * @return true If the two resources are not equivalent */ [[nodiscard]] bool operator!=(host_memory_resource const& other) const noexcept { return !do_is_equal(other); } /** * @brief Enables the `cuda::mr::host_accessible` property * * This property declares that a `host_memory_resource` provides host accessible memory */ friend void get_property(host_memory_resource const&, cuda::mr::host_accessible) noexcept {} private: /** * @brief Allocates memory on the host of size at least `bytes` bytes. * * The returned storage is aligned to the specified `alignment` if supported, and to * `alignof(std::max_align_t)` otherwise. * * @throws std::bad_alloc When the requested `bytes` and `alignment` cannot be allocated. * * @param bytes The size of the allocation * @param alignment Alignment of the allocation * @return void* Pointer to the newly allocated memory */ virtual void* do_allocate(std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) = 0; /** * @brief Deallocate memory pointed to by `ptr`. * * `ptr` must have been returned by a prior call to `allocate(bytes,alignment)` on a * `host_memory_resource` that compares equal to `*this`, and the storage it points to must not * yet have been deallocated, otherwise behavior is undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the value of `bytes` * that was passed to the `allocate` call that returned `ptr`. * @param alignment Alignment of the allocation. This must be equal to the value of `alignment` * that was passed to the `allocate` call that returned `ptr`. */ virtual void do_deallocate(void* ptr, std::size_t bytes, std::size_t alignment = alignof(std::max_align_t)) = 0; /** * @brief Compare this resource to another. * * Two host_memory_resources compare equal if and only if memory allocated from one * host_memory_resource can be deallocated from the other and vice versa. * * By default, simply checks if `*this` and `other` refer to the same object, i.e., does not check * whether they are two objects of the same class. 
* * @param other The other resource to compare to * @return true If the two resources are equivalent */ [[nodiscard]] virtual bool do_is_equal(host_memory_resource const& other) const noexcept { return this == &other; } }; static_assert(cuda::mr::resource_with<host_memory_resource, cuda::mr::host_accessible>); /** @} */ // end of group } // namespace rmm::mr
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/fixed_size_memory_resource.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/error.hpp> #include <rmm/detail/logging_assert.hpp> #include <rmm/mr/device/detail/fixed_size_free_list.hpp> #include <rmm/mr/device/detail/stream_ordered_memory_resource.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/iterator/counting_iterator.h> #include <thrust/iterator/transform_iterator.h> #include <cuda_runtime_api.h> #include <algorithm> #include <cstddef> #include <list> #include <map> #include <utility> #include <vector> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief A `device_memory_resource` which allocates memory blocks of a single fixed size. * * Supports only allocations of size smaller than the configured block_size. */ template <typename Upstream> class fixed_size_memory_resource : public detail::stream_ordered_memory_resource<fixed_size_memory_resource<Upstream>, detail::fixed_size_free_list> { public: friend class detail::stream_ordered_memory_resource<fixed_size_memory_resource<Upstream>, detail::fixed_size_free_list>; static constexpr std::size_t default_block_size = 1 << 20; ///< Default allocation block size /// The number of blocks that the pool starts out with, and also the number of /// blocks by which the pool grows when all of its current blocks are allocated static constexpr std::size_t default_blocks_to_preallocate = 128; /** * @brief Construct a new `fixed_size_memory_resource` that allocates memory from * `upstream_resource`. * * When the pool of blocks is all allocated, grows the pool by allocating * `blocks_to_preallocate` more blocks from `upstream_mr`. * * @param upstream_mr The memory_resource from which to allocate blocks for the pool. * @param block_size The size of blocks to allocate. * @param blocks_to_preallocate The number of blocks to allocate to initialize the pool. */ explicit fixed_size_memory_resource( Upstream* upstream_mr, std::size_t block_size = default_block_size, std::size_t blocks_to_preallocate = default_blocks_to_preallocate) : upstream_mr_{upstream_mr}, block_size_{rmm::detail::align_up(block_size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT)}, upstream_chunk_size_{block_size * blocks_to_preallocate} { // allocate initial blocks and insert into free list this->insert_blocks(std::move(blocks_from_upstream(cuda_stream_legacy)), cuda_stream_legacy); } /** * @brief Destroy the `fixed_size_memory_resource` and free all memory allocated from upstream. 
* */ ~fixed_size_memory_resource() override { release(); } fixed_size_memory_resource() = delete; fixed_size_memory_resource(fixed_size_memory_resource const&) = delete; fixed_size_memory_resource(fixed_size_memory_resource&&) = delete; fixed_size_memory_resource& operator=(fixed_size_memory_resource const&) = delete; fixed_size_memory_resource& operator=(fixed_size_memory_resource&&) = delete; /** * @brief Query whether the resource supports use of non-null streams for * allocation/deallocation. * * @returns true */ [[nodiscard]] bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return false */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; } /** * @brief Get the upstream memory_resource object. * * @return UpstreamResource* the upstream memory resource. */ Upstream* get_upstream() const noexcept { return upstream_mr_; } /** * @brief Get the size of blocks allocated by this memory resource. * * @return std::size_t size in bytes of allocated blocks. */ [[nodiscard]] std::size_t get_block_size() const noexcept { return block_size_; } protected: using free_list = detail::fixed_size_free_list; ///< The free list type using block_type = free_list::block_type; ///< The type of block managed by the free list using typename detail::stream_ordered_memory_resource<fixed_size_memory_resource<Upstream>, detail::fixed_size_free_list>::split_block; using lock_guard = std::lock_guard<std::mutex>; ///< Type of lock used to synchronize access /** * @brief Get the (fixed) size of allocations supported by this memory resource * * @return std::size_t The (fixed) maximum size of a single allocation supported by this memory * resource */ [[nodiscard]] std::size_t get_maximum_allocation_size() const { return get_block_size(); } /** * @brief Allocate a block from upstream to supply the suballocation pool. * * Note typically the allocated size will be larger than requested, and is based on the growth * strategy (see `size_to_grow()`). * * @param size The minimum size to allocate * @param blocks The set of blocks from which to allocate * @param stream The stream on which the memory is to be used. * @return block_type The allocated block */ block_type expand_pool(std::size_t size, free_list& blocks, cuda_stream_view stream) { blocks.insert(std::move(blocks_from_upstream(stream))); return blocks.get_block(size); } /** * @brief Allocate blocks from upstream to expand the suballocation pool. * * @param stream The stream on which the memory is to be used. * @return block_type The allocated block */ free_list blocks_from_upstream(cuda_stream_view stream) { void* ptr = get_upstream()->allocate(upstream_chunk_size_, stream); block_type block{ptr}; upstream_blocks_.push_back(block); auto num_blocks = upstream_chunk_size_ / block_size_; auto block_gen = [ptr, this](int index) { // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return block_type{static_cast<char*>(ptr) + index * block_size_}; }; auto first = thrust::make_transform_iterator(thrust::make_counting_iterator(std::size_t{0}), block_gen); return free_list(first, first + num_blocks); } /** * @brief Splits block if necessary to return a pointer to memory of `size` bytes. * * If the block is split, the remainder is returned to the pool. * * @param block The block to allocate from. * @param size The size in bytes of the requested allocation. 
* @return A pair comprising the allocated pointer and any unallocated remainder of the input * block. */ split_block allocate_from_block(block_type const& block, std::size_t size) { return {block, block_type{nullptr}}; } /** * @brief Finds, frees and returns the block associated with pointer. * * @param ptr The pointer to the memory to free. * @param size The size of the memory to free. Must be equal to the original allocation size. * @return The (now freed) block associated with `p`. The caller is expected to return the block * to the pool. */ block_type free_block(void* ptr, std::size_t size) noexcept { // Deallocating a fixed-size block just inserts it in the free list, which is // handled by the parent class RMM_LOGGING_ASSERT(rmm::detail::align_up(size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT) <= block_size_); return block_type{ptr}; } /** * @brief Get free and available memory for memory resource * * @throws std::runtime_error if we could not get free / total memory * * @param stream the stream being executed on * @return std::pair with available and free memory for resource */ [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info( [[maybe_unused]] cuda_stream_view stream) const override { return std::make_pair(0, 0); } /** * @brief free all memory allocated using the upstream resource. * */ void release() { lock_guard lock(this->get_mutex()); for (auto block : upstream_blocks_) { get_upstream()->deallocate(block.pointer(), upstream_chunk_size_); } upstream_blocks_.clear(); } #ifdef RMM_DEBUG_PRINT void print() { lock_guard lock(this->get_mutex()); auto const [free, total] = get_upstream()->get_mem_info(rmm::cuda_stream_default); std::cout << "GPU free memory: " << free << " total: " << total << "\n"; std::cout << "upstream_blocks: " << upstream_blocks_.size() << "\n"; std::size_t upstream_total{0}; for (auto blocks : upstream_blocks_) { blocks.print(); upstream_total += upstream_chunk_size_; } std::cout << "total upstream: " << upstream_total << " B\n"; this->print_free_blocks(); } #endif /** * @brief Get the largest available block size and total free size in the specified free list * * This is intended only for debugging * * @param blocks The free list from which to return the summary * @return std::pair<std::size_t, std::size_t> Pair of largest available block, total free size */ std::pair<std::size_t, std::size_t> free_list_summary(free_list const& blocks) { return blocks.is_empty() ? std::make_pair(std::size_t{0}, std::size_t{0}) : std::make_pair(block_size_, blocks.size() * block_size_); } private: Upstream* upstream_mr_; // The resource from which to allocate new blocks std::size_t const block_size_; // size of blocks this MR allocates std::size_t const upstream_chunk_size_; // size of chunks allocated from heap MR // blocks allocated from heap: so they can be easily freed std::vector<block_type> upstream_blocks_; }; /** @} */ // end of group } // namespace rmm::mr
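A usage sketch (not part of the header), assuming `rmm::mr::cuda_memory_resource` as the upstream: every allocation is served from a fixed-size block, so requests larger than the configured block size are not supported.

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/fixed_size_memory_resource.hpp>

void fixed_size_example(rmm::cuda_stream_view stream)
{
  rmm::mr::cuda_memory_resource upstream;

  // 1 MiB blocks, 128 blocks preallocated from `upstream`.
  rmm::mr::fixed_size_memory_resource<rmm::mr::cuda_memory_resource> fixed_mr{
    &upstream, 1 << 20, 128};

  void* ptr = fixed_mr.allocate(4096, stream);  // served from one 1 MiB block
  fixed_mr.deallocate(ptr, 4096, stream);
}
```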
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/thread_safe_resource_adaptor.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> #include <mutex> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that adapts `Upstream` memory resource adaptor to be thread safe. * * An instance of this resource can be constructured with an existing, upstream resource in order * to satisfy allocation requests. This adaptor wraps allocations and deallocations from Upstream * in a mutex lock. * * @tparam Upstream Type of the upstream resource used for allocation/deallocation. */ template <typename Upstream> class thread_safe_resource_adaptor final : public device_memory_resource { public: using lock_t = std::lock_guard<std::mutex>; ///< Type of lock used to synchronize access /** * @brief Construct a new thread safe resource adaptor using `upstream` to satisfy * allocation requests. * * All allocations and frees are protected by a mutex lock * * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory. */ thread_safe_resource_adaptor(Upstream* upstream) : upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); } thread_safe_resource_adaptor() = delete; ~thread_safe_resource_adaptor() override = default; thread_safe_resource_adaptor(thread_safe_resource_adaptor const&) = delete; thread_safe_resource_adaptor(thread_safe_resource_adaptor&&) = delete; thread_safe_resource_adaptor& operator=(thread_safe_resource_adaptor const&) = delete; thread_safe_resource_adaptor& operator=(thread_safe_resource_adaptor&&) = delete; /** * @brief Get the upstream memory resource. * * @return Upstream* pointer to a memory resource object. */ Upstream* get_upstream() const noexcept { return upstream_; } /** * @copydoc rmm::mr::device_memory_resource::supports_streams() */ bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } private: /** * @brief Allocates memory of size at least `bytes` using the upstream * resource with thread safety. * * @throws rmm::bad_alloc if the requested allocation could not be fulfilled * by the upstream resource. 
   *
   * @param bytes The size, in bytes, of the allocation
   * @param stream Stream on which to perform the allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, cuda_stream_view stream) override
  {
    lock_t lock(mtx);
    return upstream_->allocate(bytes, stream);
  }

  /**
   * @brief Free allocation of size `bytes` pointed to by `ptr`.
   *
   * @param ptr Pointer to be deallocated
   * @param bytes Size of the allocation
   * @param stream Stream on which to perform the deallocation
   */
  void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override
  {
    lock_t lock(mtx);
    upstream_->deallocate(ptr, bytes, stream);
  }

  /**
   * @brief Compare the upstream resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equivalent
   */
  bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto thread_safe_other = dynamic_cast<thread_safe_resource_adaptor<Upstream> const*>(&other);
    if (thread_safe_other != nullptr) {
      return upstream_->is_equal(*thread_safe_other->get_upstream());
    }
    return upstream_->is_equal(other);
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
   * @return std::pair containing free_size and total_size of memory
   */
  std::pair<std::size_t, std::size_t> do_get_mem_info(cuda_stream_view stream) const override
  {
    lock_t lock(mtx);
    return upstream_->get_mem_info(stream);
  }

  std::mutex mutable mtx;  // mutex for thread safe access to upstream
  Upstream* upstream_;     ///< The upstream resource used for satisfying allocation requests
};

/** @} */  // end of group
}  // namespace rmm::mr
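A minimal usage sketch for the adaptor above (hypothetical, not part of the header): several host threads share one `thread_safe_resource_adaptor`, which serializes all calls into the upstream resource. The plain `cuda_memory_resource` used as the upstream is only a stand-in; the adaptor is mainly useful around upstreams that are not thread safe on their own.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/thread_safe_resource_adaptor.hpp>

#include <thread>
#include <vector>

int main()
{
  rmm::mr::cuda_memory_resource upstream;
  rmm::mr::thread_safe_resource_adaptor<rmm::mr::cuda_memory_resource> mr{&upstream};

  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i) {
    workers.emplace_back([&mr] {
      // allocate/deallocate take the mutex, so concurrent calls are safe
      void* ptr = mr.allocate(1 << 20);
      mr.deallocate(ptr, 1 << 20);
    });
  }
  for (auto& worker : workers) { worker.join(); }
  return 0;
}
```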
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/per_device_resource.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <rmm/detail/export.hpp> #include <rmm/mr/device/cuda_memory_resource.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <map> #include <mutex> /** * @file per_device_resource.hpp * @brief Management of per-device `device_memory_resource`s * * One might wish to construct a `device_memory_resource` and use it for (de)allocation * without explicit dependency injection, i.e., passing a reference to that object to all places it * is to be used. Instead, one might want to set their resource as a "default" and have it be used * in all places where another resource has not been explicitly specified. In applications with * multiple GPUs in the same process, it may also be necessary to maintain independent default * resources for each device. To this end, the `set_per_device_resource` and * `get_per_device_resource` functions enable mapping a CUDA device id to a `device_memory_resource` * pointer. * * For example, given a pointer, `mr`, to a `device_memory_resource` object, calling * `set_per_device_resource(cuda_device_id{0}, mr)` will establish a mapping between CUDA device 0 * and `mr` such that all future calls to `get_per_device_resource(cuda_device_id{0})` will return * the same pointer, `mr`. In this way, all places that use the resource returned from * `get_per_device_resource` for (de)allocation will use the user provided resource, `mr`. * * @note `device_memory_resource`s make CUDA API calls without setting the current CUDA device. * Therefore a memory resource should only be used with the current CUDA device set to the device * that was active when the memory resource was created. Calling `set_per_device_resource(id, mr)` * is only valid if `id` refers to the CUDA device that was active when `mr` was created. * * If no resource was explicitly set for a given device specified by `id`, then * `get_per_device_resource(id)` will return a pointer to a `cuda_memory_resource`. * * To fetch and modify the resource for the current CUDA device, `get_current_device_resource()` and * `set_current_device_resource()` will automatically use the current CUDA device id from * `cudaGetDevice()`. * * Creating a device_memory_resource for each device requires care to set the current device * before creating each resource, and to maintain the lifetime of the resources as long as they * are set as per-device resources. Here is an example loop that creates `unique_ptr`s to * pool_memory_resource objects for each device and sets them as the per-device resource for that * device. 
 *
 * @code{.cpp}
 * rmm::mr::cuda_memory_resource cuda_mr;
 * std::vector<std::unique_ptr<rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>>>
 *   per_device_pools;
 * for (int i = 0; i < N; ++i) {
 *   cudaSetDevice(i);
 *   per_device_pools.push_back(
 *     std::make_unique<rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>>(&cuda_mr));
 *   set_per_device_resource(cuda_device_id{i}, per_device_pools.back().get());
 * }
 * @endcode
 */

namespace rmm::mr {
/**
 * @addtogroup memory_resources
 * @{
 */
namespace detail {
/**
 * @brief Returns a pointer to the initial resource.
 *
 * Returns a global instance of a `cuda_memory_resource` as a function local static.
 *
 * @return Pointer to the static cuda_memory_resource used as the initial, default resource
 */
inline device_memory_resource* initial_resource()
{
  static cuda_memory_resource mr{};
  return &mr;
}

/**
 * @briefreturn{Reference to the lock}
 */
inline std::mutex& map_lock()
{
  static std::mutex map_lock;
  return map_lock;
}

// This symbol must have default visibility, see: https://github.com/rapidsai/rmm/issues/826
/**
 * @briefreturn{Reference to the map from device id -> resource}
 */
RMM_EXPORT inline auto& get_map()
{
  static std::map<cuda_device_id::value_type, device_memory_resource*> device_id_to_resource;
  return device_id_to_resource;
}

}  // namespace detail

/**
 * @brief Get the resource for the specified device.
 *
 * Returns a pointer to the `device_memory_resource` for the specified device. The initial
 * resource is a `cuda_memory_resource`.
 *
 * `id.value()` must be in the range `[0, cudaGetDeviceCount())`, otherwise behavior is undefined.
 *
 * This function is thread-safe with respect to concurrent calls to `set_per_device_resource`,
 * `get_per_device_resource`, `get_current_device_resource`, and `set_current_device_resource`.
 * Concurrent calls to any of these functions will result in a valid state, but the order of
 * execution is undefined.
 *
 * @note The returned `device_memory_resource` should only be used when CUDA device `id` is the
 * current device (e.g. set using `cudaSetDevice()`). The behavior of a device_memory_resource is
 * undefined if used while the active CUDA device is a different device from the one that was
 * active when the device_memory_resource was created.
 *
 * @param device_id The id of the target device
 * @return Pointer to the current `device_memory_resource` for device `id`
 */
inline device_memory_resource* get_per_device_resource(cuda_device_id device_id)
{
  std::lock_guard<std::mutex> lock{detail::map_lock()};
  auto& map = detail::get_map();
  // If a resource was never set for `id`, set to the initial resource
  auto const found = map.find(device_id.value());
  return (found == map.end()) ? (map[device_id.value()] = detail::initial_resource())
                              : found->second;
}

/**
 * @brief Set the `device_memory_resource` for the specified device.
 *
 * If `new_mr` is not `nullptr`, sets the memory resource pointer for the device specified by `id`
 * to `new_mr`. Otherwise, resets `id`'s resource to the initial `cuda_memory_resource`.
 *
 * `id.value()` must be in the range `[0, cudaGetDeviceCount())`, otherwise behavior is undefined.
 *
 * The object pointed to by `new_mr` must outlive the last use of the resource, otherwise behavior
 * is undefined. It is the caller's responsibility to maintain the lifetime of the resource
 * object.
 *
 * This function is thread-safe with respect to concurrent calls to `set_per_device_resource`,
 * `get_per_device_resource`, `get_current_device_resource`, and `set_current_device_resource`.
 * Concurrent calls to any of these functions will result in a valid state, but the order of
 * execution is undefined.
* * @note The resource passed in `new_mr` must have been created when device `id` was the current * CUDA device (e.g. set using `cudaSetDevice()`). The behavior of a device_memory_resource is * undefined if used while the active CUDA device is a different device from the one that was active * when the device_memory_resource was created. * * @param device_id The id of the target device * @param new_mr If not `nullptr`, pointer to new `device_memory_resource` to use as new resource * for `id` * @return Pointer to the previous memory resource for `id` */ inline device_memory_resource* set_per_device_resource(cuda_device_id device_id, device_memory_resource* new_mr) { std::lock_guard<std::mutex> lock{detail::map_lock()}; auto& map = detail::get_map(); auto const old_itr = map.find(device_id.value()); // If a resource didn't previously exist for `id`, return pointer to initial_resource auto* old_mr = (old_itr == map.end()) ? detail::initial_resource() : old_itr->second; map[device_id.value()] = (new_mr == nullptr) ? detail::initial_resource() : new_mr; return old_mr; } /** * @brief Get the memory resource for the current device. * * Returns a pointer to the resource set for the current device. The initial resource is a * `cuda_memory_resource`. * * The "current device" is the device returned by `cudaGetDevice`. * * This function is thread-safe with respect to concurrent calls to `set_per_device_resource`, * `get_per_device_resource`, `get_current_device_resource`, and `set_current_device_resource`. * Concurrent calls to any of these functions will result in a valid state, but the order of * execution is undefined. * * @note The returned `device_memory_resource` should only be used with the current CUDA device. * Changing the current device (e.g. using `cudaSetDevice()`) and then using the returned resource * can result in undefined behavior. The behavior of a device_memory_resource is undefined if used * while the active CUDA device is a different device from the one that was active when the * device_memory_resource was created. * * @return Pointer to the resource for the current device */ inline device_memory_resource* get_current_device_resource() { return get_per_device_resource(rmm::get_current_cuda_device()); } /** * @brief Set the memory resource for the current device. * * If `new_mr` is not `nullptr`, sets the resource pointer for the current device to * `new_mr`. Otherwise, resets the resource to the initial `cuda_memory_resource`. * * The "current device" is the device returned by `cudaGetDevice`. * * The object pointed to by `new_mr` must outlive the last use of the resource, otherwise behavior * is undefined. It is the caller's responsibility to maintain the lifetime of the resource * object. * * This function is thread-safe with respect to concurrent calls to `set_per_device_resource`, * `get_per_device_resource`, `get_current_device_resource`, and `set_current_device_resource`. * Concurrent calls to any of these functions will result in a valid state, but the order of * execution is undefined. * * @note The resource passed in `new_mr` must have been created for the current CUDA device. The * behavior of a device_memory_resource is undefined if used while the active CUDA device is a * different device from the one that was active when the device_memory_resource was created. 
* * @param new_mr If not `nullptr`, pointer to new resource to use for the current device * @return Pointer to the previous resource for the current device */ inline device_memory_resource* set_current_device_resource(device_memory_resource* new_mr) { return set_per_device_resource(rmm::get_current_cuda_device(), new_mr); } /** @} */ // end of group } // namespace rmm::mr
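A short sketch of the per-device/current-device API described above (hypothetical example, not part of the header; it assumes `rmm::mr::pool_memory_resource` is available and that its constructor accepts an upstream pointer and an initial pool size, which varies slightly between RMM releases):

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;
  // A suballocating pool on top of the plain CUDA resource (64 MiB initial size).
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr, 1u << 26};

  // Install the pool as the default resource for the current device...
  auto* previous = rmm::mr::set_current_device_resource(&pool_mr);

  // ...so anything that asks for the current resource now allocates from the pool.
  void* ptr = rmm::mr::get_current_device_resource()->allocate(1 << 20);
  rmm::mr::get_current_device_resource()->deallocate(ptr, 1 << 20);

  // Restore the previous default before `pool_mr` goes out of scope.
  rmm::mr::set_current_device_resource(previous);
  return 0;
}
```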
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/arena_memory_resource.hpp
/* * Copyright (c) 2020-2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <rmm/detail/logging_assert.hpp> #include <rmm/logger.hpp> #include <rmm/mr/device/detail/arena.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cuda_runtime_api.h> #include <spdlog/common.h> #include <cstddef> #include <map> #include <shared_mutex> #include <thread> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief A suballocator that emphasizes fragmentation avoidance and scalable concurrency support. * * Allocation (do_allocate()) and deallocation (do_deallocate()) are thread-safe. Also, * this class is compatible with CUDA per-thread default stream. * * GPU memory is divided into a global arena, per-thread arenas for default streams, and per-stream * arenas for non-default streams. Each arena allocates memory from the global arena in chunks * called superblocks. * * Blocks in each arena are allocated using address-ordered first fit. When a block is freed, it is * coalesced with neighbouring free blocks if the addresses are contiguous. Free superblocks are * returned to the global arena. * * In real-world applications, allocation sizes tend to follow a power law distribution in which * large allocations are rare, but small ones quite common. By handling small allocations in the * per-thread arena, adequate performance can be achieved without introducing excessive memory * fragmentation under high concurrency. * * This design is inspired by several existing CPU memory allocators targeting multi-threaded * applications (glibc malloc, Hoard, jemalloc, TCMalloc), albeit in a simpler form. Possible future * improvements include using size classes, allocation caches, and more fine-grained locking or * lock-free approaches. * * \see Wilson, P. R., Johnstone, M. S., Neely, M., & Boles, D. (1995, September). Dynamic storage * allocation: A survey and critical review. In International Workshop on Memory Management (pp. * 1-116). Springer, Berlin, Heidelberg. * \see Berger, E. D., McKinley, K. S., Blumofe, R. D., & Wilson, P. R. (2000). Hoard: A scalable * memory allocator for multithreaded applications. ACM Sigplan Notices, 35(11), 117-128. * \see Evans, J. (2006, April). A scalable concurrent malloc (3) implementation for FreeBSD. In * Proc. of the bsdcan conference, ottawa, canada. * \see https://sourceware.org/glibc/wiki/MallocInternals * \see http://hoard.org/ * \see http://jemalloc.net/ * \see https://github.com/google/tcmalloc * * @tparam Upstream Memory resource to use for allocating memory for the global arena. Implements * rmm::mr::device_memory_resource interface. */ template <typename Upstream> class arena_memory_resource final : public device_memory_resource { public: /** * @brief Construct an `arena_memory_resource`. * * @throws rmm::logic_error if `upstream_mr == nullptr`. * * @param upstream_mr The memory resource from which to allocate blocks for the global arena. 
* @param arena_size Size in bytes of the global arena. Defaults to half of the available memory * on the current device. * @param dump_log_on_failure If true, dump memory log when running out of memory. */ explicit arena_memory_resource(Upstream* upstream_mr, std::optional<std::size_t> arena_size = std::nullopt, bool dump_log_on_failure = false) : global_arena_{upstream_mr, arena_size}, dump_log_on_failure_{dump_log_on_failure} { if (dump_log_on_failure_) { logger_ = spdlog::basic_logger_mt("arena_memory_dump", "rmm_arena_memory_dump.log"); // Set the level to `debug` for more detailed output. logger_->set_level(spdlog::level::info); } } ~arena_memory_resource() override = default; // Disable copy (and move) semantics. arena_memory_resource(arena_memory_resource const&) = delete; arena_memory_resource& operator=(arena_memory_resource const&) = delete; arena_memory_resource(arena_memory_resource&&) noexcept = delete; arena_memory_resource& operator=(arena_memory_resource&&) noexcept = delete; /** * @brief Queries whether the resource supports use of non-null CUDA streams for * allocation/deallocation. * * @returns bool true. */ bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool false. */ bool supports_get_mem_info() const noexcept override { return false; } private: using global_arena = rmm::mr::detail::arena::global_arena<Upstream>; using arena = rmm::mr::detail::arena::arena<Upstream>; /** * @brief Allocates memory of size at least `bytes`. * * The returned pointer has at least 256-byte alignment. * * @throws `rmm::out_of_memory` if no more memory is available for the requested size. * * @param bytes The size in bytes of the allocation. * @param stream The stream to associate this allocation with. * @return void* Pointer to the newly allocated memory. */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { if (bytes <= 0) { return nullptr; } #ifdef RMM_ARENA_USE_SIZE_CLASSES bytes = rmm::mr::detail::arena::align_to_size_class(bytes); #else bytes = rmm::detail::align_up(bytes, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); #endif auto& arena = get_arena(stream); { std::shared_lock lock(mtx_); void* pointer = arena.allocate(bytes); if (pointer != nullptr) { return pointer; } } { std::unique_lock lock(mtx_); defragment(); void* pointer = arena.allocate(bytes); if (pointer == nullptr) { if (dump_log_on_failure_) { dump_memory_log(bytes); } RMM_FAIL("Maximum pool size exceeded", rmm::out_of_memory); } return pointer; } } /** * @brief Defragment memory by returning all superblocks to the global arena. */ void defragment() { RMM_CUDA_TRY(cudaDeviceSynchronize()); for (auto& thread_arena : thread_arenas_) { thread_arena.second->clean(); } for (auto& stream_arena : stream_arenas_) { stream_arena.second.clean(); } } /** * @brief Deallocate memory pointed to by `ptr`. * * @param ptr Pointer to be deallocated. * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `ptr`. * @param stream Stream on which to perform deallocation. 
*/ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { if (ptr == nullptr || bytes <= 0) { return; } #ifdef RMM_ARENA_USE_SIZE_CLASSES bytes = rmm::mr::detail::arena::align_to_size_class(bytes); #else bytes = rmm::detail::align_up(bytes, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); #endif auto& arena = get_arena(stream); { std::shared_lock lock(mtx_); // If the memory being freed does not belong to the arena, the following will return false. if (arena.deallocate(ptr, bytes, stream)) { return; } } { // Since we are returning this memory to another stream, we need to make sure the current // stream is caught up. stream.synchronize_no_throw(); std::unique_lock lock(mtx_); deallocate_from_other_arena(ptr, bytes, stream); } } /** * @brief Deallocate memory pointed to by `ptr` that was allocated in a different arena. * * @param ptr Pointer to be deallocated. * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `ptr`. * @param stream Stream on which to perform deallocation. */ void deallocate_from_other_arena(void* ptr, std::size_t bytes, cuda_stream_view stream) { if (use_per_thread_arena(stream)) { for (auto const& thread_arena : thread_arenas_) { if (thread_arena.second->deallocate(ptr, bytes)) { return; } } } else { for (auto& stream_arena : stream_arenas_) { if (stream_arena.second.deallocate(ptr, bytes)) { return; } } } if (!global_arena_.deallocate(ptr, bytes)) { RMM_FAIL("allocation not found"); } } /** * @brief Get the arena associated with the current thread or the given stream. * * @param stream The stream associated with the arena. * @return arena& The arena associated with the current thread or the given stream. */ arena& get_arena(cuda_stream_view stream) { if (use_per_thread_arena(stream)) { return get_thread_arena(); } return get_stream_arena(stream); } /** * @brief Get the arena associated with the current thread. * * @return arena& The arena associated with the current thread. */ arena& get_thread_arena() { auto const thread_id = std::this_thread::get_id(); { std::shared_lock lock(map_mtx_); auto const iter = thread_arenas_.find(thread_id); if (iter != thread_arenas_.end()) { return *iter->second; } } { std::unique_lock lock(map_mtx_); auto thread_arena = std::make_shared<arena>(global_arena_); thread_arenas_.emplace(thread_id, thread_arena); thread_local detail::arena::arena_cleaner<Upstream> cleaner{thread_arena}; return *thread_arena; } } /** * @brief Get the arena associated with the given stream. * * @return arena& The arena associated with the given stream. */ arena& get_stream_arena(cuda_stream_view stream) { RMM_LOGGING_ASSERT(!use_per_thread_arena(stream)); { std::shared_lock lock(map_mtx_); auto const iter = stream_arenas_.find(stream.value()); if (iter != stream_arenas_.end()) { return iter->second; } } { std::unique_lock lock(map_mtx_); stream_arenas_.emplace(stream.value(), global_arena_); return stream_arenas_.at(stream.value()); } } /** * @brief Get free and available memory for memory resource. * * @param stream to execute on. * @return std::pair containing free_size and total_size of memory. */ std::pair<std::size_t, std::size_t> do_get_mem_info( [[maybe_unused]] cuda_stream_view stream) const override { return std::make_pair(0, 0); } /** * Dump memory to log. 
* * @param bytes the number of bytes requested for allocation */ void dump_memory_log(size_t bytes) { logger_->info("**************************************************"); logger_->info("Ran out of memory trying to allocate {}.", rmm::detail::bytes{bytes}); logger_->info("**************************************************"); logger_->info("Global arena:"); global_arena_.dump_memory_log(logger_); logger_->flush(); } /** * @brief Should a per-thread arena be used given the CUDA stream. * * @param stream to check. * @return true if per-thread arena should be used, false otherwise. */ static bool use_per_thread_arena(cuda_stream_view stream) { return stream.is_per_thread_default(); } /// The global arena to allocate superblocks from. global_arena global_arena_; /// Arenas for default streams, one per thread. /// Implementation note: for small sizes, map is more efficient than unordered_map. std::map<std::thread::id, std::shared_ptr<arena>> thread_arenas_; /// Arenas for non-default streams, one per stream. /// Implementation note: for small sizes, map is more efficient than unordered_map. std::map<cudaStream_t, arena> stream_arenas_; /// If true, dump memory information to log on allocation failure. bool dump_log_on_failure_{}; /// The logger for memory dump. std::shared_ptr<spdlog::logger> logger_{}; /// Mutex for read and write locks on arena maps. mutable std::shared_mutex map_mtx_; /// Mutex for shared and unique locks on the mr. mutable std::shared_mutex mtx_; }; /** @} */ // end of group } // namespace rmm::mr
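A minimal usage sketch for the arena resource (hypothetical, not part of the header; it assumes the constructor shown above, where the arena size defaults to half of the available device memory, and uses `rmm::cuda_stream` only to obtain a non-default stream):

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/mr/device/arena_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource upstream;

  // Global arena sized to the default (half of free device memory); pass an
  // explicit byte count as the second argument to override it.
  rmm::mr::arena_memory_resource<rmm::mr::cuda_memory_resource> mr{&upstream};

  rmm::cuda_stream stream;  // a non-default stream gets its own per-stream arena
  void* small_alloc = mr.allocate(256, stream.view());
  void* large_alloc = mr.allocate(64u << 20, stream.view());  // may be served by the global arena
  mr.deallocate(large_alloc, 64u << 20, stream.view());
  mr.deallocate(small_alloc, 256, stream.view());
  return 0;
}
```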
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/polymorphic_allocator.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> #include <cstddef> #include <memory> #include <type_traits> namespace rmm::mr { /** * @brief A stream ordered Allocator using a `rmm::mr::device_memory_resource` to satisfy * (de)allocations. * * Similar to `std::pmr::polymorphic_allocator`, uses the runtime polymorphism of * `device_memory_resource` to allow containers with `polymorphic_allocator` as their static * allocator type to be interoperable, but exhibit different behavior depending on resource used. * * Unlike STL allocators, `polymorphic_allocator`'s `allocate` and `deallocate` functions are stream * ordered. Use `stream_allocator_adaptor` to allow interoperability with interfaces that require * standard, non stream-ordered `Allocator` interfaces. * * @tparam T The allocators value type. */ template <typename T> class polymorphic_allocator { public: using value_type = T; ///< T, the value type of objects allocated by this allocator /** * @brief Construct a `polymorphic_allocator` using the return value of * `rmm::mr::get_current_device_resource()` as the underlying memory resource. * */ polymorphic_allocator() = default; /** * @brief Construct a `polymorphic_allocator` using the provided memory resource. * * This constructor provides an implicit conversion from `memory_resource*`. * * @param mr The `device_memory_resource` to use as the underlying resource. */ polymorphic_allocator(device_memory_resource* mr) : mr_{mr} {} /** * @brief Construct a `polymorphic_allocator` using `other.resource()` as the underlying memory * resource. * * @param other The `polymorphic_resource` whose `resource()` will be used as the underlying * resource of the new `polymorphic_allocator`. */ template <typename U> polymorphic_allocator(polymorphic_allocator<U> const& other) noexcept : mr_{other.resource()} { } /** * @brief Allocates storage for `num` objects of type `T` using the underlying memory resource. * * @param num The number of objects to allocate storage for * @param stream The stream on which to perform the allocation * @return Pointer to the allocated storage */ value_type* allocate(std::size_t num, cuda_stream_view stream) { return static_cast<value_type*>(resource()->allocate(num * sizeof(T), stream)); } /** * @brief Deallocates storage pointed to by `ptr`. * * `ptr` must have been allocated from a `rmm::mr::device_memory_resource` `r` that compares equal * to `*resource()` using `r.allocate(n * sizeof(T))`. * * @param ptr Pointer to memory to deallocate * @param num Number of objects originally allocated * @param stream Stream on which to perform the deallocation */ void deallocate(value_type* ptr, std::size_t num, cuda_stream_view stream) { resource()->deallocate(ptr, num * sizeof(T), stream); } /** * @brief Returns pointer to the underlying `rmm::mr::device_memory_resource`. 
* * @return Pointer to the underlying resource. */ [[nodiscard]] device_memory_resource* resource() const noexcept { return mr_; } private: device_memory_resource* mr_{ get_current_device_resource()}; ///< Underlying resource used for (de)allocation }; template <typename T, typename U> bool operator==(polymorphic_allocator<T> const& lhs, polymorphic_allocator<U> const& rhs) { return lhs.resource()->is_equal(*rhs.resource()); } template <typename T, typename U> bool operator!=(polymorphic_allocator<T> const& lhs, polymorphic_allocator<U> const& rhs) { return not(lhs == rhs); } /** * @brief Adapts a stream ordered allocator to provide a standard `Allocator` interface * * A stream-ordered allocator (i.e., `allocate/deallocate` use a `cuda_stream_view`) cannot be used * in an interface that expects a standard C++ `Allocator` interface. `stream_allocator_adaptor` * wraps a stream-ordered allocator and a stream to provide a standard `Allocator` interface. The * adaptor uses the wrapped stream in calls to the underlying allocator's `allocate` and *`deallocate` functions. * * Example: *\code{.cpp} * my_stream_ordered_allocator<int> a{...}; * cuda_stream_view s = // create stream; * * auto adapted = make_stream_allocator_adaptor(a, s); * * // Allocates storage for `n` int's on stream `s` * int * p = std::allocator_traits<decltype(adapted)>::allocate(adapted, n); *\endcode * * @tparam Allocator Stream ordered allocator type to adapt */ template <typename Allocator> class stream_allocator_adaptor { public: using value_type = typename std::allocator_traits<Allocator>::value_type; ///< The value type of objects allocated ///< by this allocator stream_allocator_adaptor() = delete; /** * @brief Construct a `stream_allocator_adaptor` using `a` as the underlying allocator. * * @note: The `stream` must not be destroyed before the `stream_allocator_adaptor`, otherwise * behavior is undefined. * * @param allocator The stream ordered allocator to use as the underlying allocator * @param stream The stream used with the underlying allocator */ stream_allocator_adaptor(Allocator const& allocator, cuda_stream_view stream) : alloc_{allocator}, stream_{stream} { } /** * @brief Construct a `stream_allocator_adaptor` using `other.underlying_allocator()` and * `other.stream()` as the underlying allocator and stream. * * @tparam OtherAllocator Type of `other`'s underlying allocator * @param other The other `stream_allocator_adaptor` whose underlying allocator and stream will be * copied */ template <typename OtherAllocator> stream_allocator_adaptor(stream_allocator_adaptor<OtherAllocator> const& other) : stream_allocator_adaptor{other.underlying_allocator(), other.stream()} { } /** * @brief Rebinds the allocator to the specified type. * * @tparam T The desired `value_type` of the rebound allocator type */ template <typename T> struct rebind { using other = stream_allocator_adaptor<typename std::allocator_traits< Allocator>::template rebind_alloc<T>>; ///< The type to bind to }; /** * @brief Allocates storage for `num` objects of type `T` using the underlying allocator on * `stream()`. * * @param num The number of objects to allocate storage for * @return Pointer to the allocated storage */ value_type* allocate(std::size_t num) { return alloc_.allocate(num, stream()); } /** * @brief Deallocates storage pointed to by `ptr` using the underlying allocator on `stream()`. * * `ptr` must have been allocated from by an allocator `a` that compares equal to * `underlying_allocator()` using `a.allocate(n)`. 
* * @param ptr Pointer to memory to deallocate * @param num Number of objects originally allocated */ void deallocate(value_type* ptr, std::size_t num) { alloc_.deallocate(ptr, num, stream()); } /** * @briefreturn{The stream on which calls to the underlying allocator are made} */ [[nodiscard]] cuda_stream_view stream() const noexcept { return stream_; } /** * @briefreturn{The underlying allocator} */ [[nodiscard]] Allocator underlying_allocator() const noexcept { return alloc_; } private: Allocator alloc_; ///< Underlying allocator used for (de)allocation cuda_stream_view stream_; ///< Stream on which (de)allocations are performed }; template <typename A, typename O> bool operator==(stream_allocator_adaptor<A> const& lhs, stream_allocator_adaptor<O> const& rhs) { return lhs.underlying_allocator() == rhs.underlying_allocator(); } template <typename A, typename O> bool operator!=(stream_allocator_adaptor<A> const& lhs, stream_allocator_adaptor<O> const& rhs) { return not(lhs == rhs); } /** * @brief Factory to construct a `stream_allocator_adaptor` from an existing stream-ordered * allocator. * * @tparam Allocator Type of the stream-ordered allocator * @param allocator The allocator to use as the underlying allocator of the * `stream_allocator_adaptor` * @param stream The stream on which the `stream_allocator_adaptor` will perform (de)allocations * @return A `stream_allocator_adaptor` wrapping `allocator` and `s` */ template <typename Allocator> auto make_stream_allocator_adaptor(Allocator const& allocator, cuda_stream_view stream) { return stream_allocator_adaptor<Allocator>{allocator, stream}; } } // namespace rmm::mr
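A short sketch of the two pieces above used together (hypothetical, not part of the header): `polymorphic_allocator` for stream-ordered allocation against the current device resource, and `stream_allocator_adaptor` to expose it through the standard Allocator interface.

```cpp
#include <rmm/cuda_stream.hpp>
#include <rmm/mr/device/polymorphic_allocator.hpp>

int main()
{
  rmm::cuda_stream stream;

  // Stream-ordered allocator backed by rmm::mr::get_current_device_resource().
  rmm::mr::polymorphic_allocator<int> alloc{};
  int* ptr = alloc.allocate(1024, stream.view());
  alloc.deallocate(ptr, 1024, stream.view());

  // Bind the stream so the allocator can be used where a plain C++ Allocator
  // (no stream parameter) is expected.
  auto adapted = rmm::mr::make_stream_allocator_adaptor(alloc, stream.view());
  int* ptr2 = adapted.allocate(1024);  // allocated on `stream` under the hood
  adapted.deallocate(ptr2, 1024);
  return 0;
}
```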
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/logging_resource_adaptor.hpp
/* * Copyright (c) 2020-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <fmt/core.h> #include <spdlog/common.h> #include <spdlog/sinks/basic_file_sink.h> #include <spdlog/sinks/ostream_sink.h> #include <spdlog/spdlog.h> #include <cstddef> #include <memory> #include <sstream> #include <string_view> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that uses `Upstream` to allocate memory and logs information * about the requested allocation/deallocations. * * An instance of this resource can be constructed with an existing, upstream * resource in order to satisfy allocation requests and log * allocation/deallocation activity. * * @tparam Upstream Type of the upstream resource used for * allocation/deallocation. */ template <typename Upstream> class logging_resource_adaptor final : public device_memory_resource { public: /** * @brief Construct a new logging resource adaptor using `upstream` to satisfy * allocation requests and logging information about each allocation/free to * the file specified by `filename`. * * The logfile will be written using CSV formatting. * * Clears the contents of `filename` if it already exists. * * Creating multiple `logging_resource_adaptor`s with the same `filename` will * result in undefined behavior. * * @throws rmm::logic_error if `upstream == nullptr` * @throws spdlog::spdlog_ex if opening `filename` failed * * @param upstream The resource used for allocating/deallocating device memory * @param filename Name of file to write log info. If not specified, retrieves * the file name from the environment variable "RMM_LOG_FILE". * @param auto_flush If true, flushes the log for every (de)allocation. Warning, this will degrade * performance. */ logging_resource_adaptor(Upstream* upstream, std::string const& filename = get_default_filename(), bool auto_flush = false) : logger_{make_logger(filename)}, upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); init_logger(auto_flush); } /** * @brief Construct a new logging resource adaptor using `upstream` to satisfy * allocation requests and logging information about each allocation/free to * the ostream specified by `stream`. * * The logfile will be written using CSV formatting. * * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory * @param stream The ostream to write log info. * @param auto_flush If true, flushes the log for every (de)allocation. Warning, this will degrade * performance. 
*/ logging_resource_adaptor(Upstream* upstream, std::ostream& stream, bool auto_flush = false) : logger_{make_logger(stream)}, upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); init_logger(auto_flush); } /** * @brief Construct a new logging resource adaptor using `upstream` to satisfy * allocation requests and logging information about each allocation/free to * the ostream specified by `stream`. * * The logfile will be written using CSV formatting. * * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory * @param sinks A list of logging sinks to which log output will be written. * @param auto_flush If true, flushes the log for every (de)allocation. Warning, this will degrade * performance. */ logging_resource_adaptor(Upstream* upstream, spdlog::sinks_init_list sinks, bool auto_flush = false) : logger_{make_logger(sinks)}, upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); init_logger(auto_flush); } logging_resource_adaptor() = delete; ~logging_resource_adaptor() override = default; logging_resource_adaptor(logging_resource_adaptor const&) = delete; logging_resource_adaptor& operator=(logging_resource_adaptor const&) = delete; logging_resource_adaptor(logging_resource_adaptor&&) noexcept = default; ///< @default_move_constructor logging_resource_adaptor& operator=(logging_resource_adaptor&&) noexcept = default; ///< @default_move_assignment{logging_resource_adaptor} /** * @brief Return pointer to the upstream resource. * * @return Upstream* Pointer to the upstream resource. */ [[nodiscard]] Upstream* get_upstream() const noexcept { return upstream_; } /** * @brief Checks whether the upstream resource supports streams. * * @return true The upstream resource supports streams * @return false The upstream resource does not support streams. */ [[nodiscard]] bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } /** * @brief Flush logger contents. */ void flush() { logger_->flush(); } /** * @brief Return the CSV header string * * @return CSV formatted header string of column names */ [[nodiscard]] std::string header() const { return std::string{"Thread,Time,Action,Pointer,Size,Stream"}; } /** * @brief Return the value of the environment variable RMM_LOG_FILE. * * @throws rmm::logic_error if `RMM_LOG_FILE` is not set. * * @return The value of RMM_LOG_FILE as `std::string`. */ static std::string get_default_filename() { auto* filename = std::getenv("RMM_LOG_FILE"); RMM_EXPECTS(filename != nullptr, "RMM logging requested without an explicit file name, but RMM_LOG_FILE is unset"); return std::string{filename}; } private: static auto make_logger(std::ostream& stream) { return std::make_shared<spdlog::logger>( "RMM", std::make_shared<spdlog::sinks::ostream_sink_mt>(stream)); } static auto make_logger(std::string const& filename) { return std::make_shared<spdlog::logger>( "RMM", std::make_shared<spdlog::sinks::basic_file_sink_mt>(filename, true /*truncate file*/)); } static auto make_logger(spdlog::sinks_init_list sinks) { return std::make_shared<spdlog::logger>("RMM", sinks); } /** * @brief Initialize the logger. 
*/ void init_logger(bool auto_flush) { if (auto_flush) { logger_->flush_on(spdlog::level::info); } logger_->set_pattern("%v"); logger_->info(header()); logger_->set_pattern("%t,%H:%M:%S.%f,%v"); } /** * @brief Allocates memory of size at least `bytes` using the upstream * resource and logs the allocation. * * If the upstream allocation is successful, logs the following CSV formatted * line to the file specified at construction: * ``` * thread_id,*TIMESTAMP*,"allocate",*pointer*,*bytes*,*stream* * ``` * * If the upstream allocation failed, logs the following CSV formatted line * to the file specified at construction: * ``` * thread_id,*TIMESTAMP*,"allocate failure",0x0,*bytes*,*stream* * ``` * * The returned pointer has at least 256B alignment. * * @throws rmm::bad_alloc if the requested allocation could not be fulfilled * by the upstream resource. * * @param bytes The size, in bytes, of the allocation * @param stream Stream on which to perform the allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { try { auto const ptr = upstream_->allocate(bytes, stream); logger_->info("allocate,{},{},{}", ptr, bytes, fmt::ptr(stream.value())); return ptr; } catch (...) { logger_->info("allocate failure,{},{},{}", nullptr, bytes, fmt::ptr(stream.value())); throw; } } /** * @brief Free allocation of size `bytes` pointed to by `ptr` and log the * deallocation. * * Every invocation of `logging_resource_adaptor::do_deallocate` will write * the following CSV formatted line to the file specified at construction: * ``` * thread_id,*TIMESTAMP*,"free",*bytes*,*stream* * ``` * * @param ptr Pointer to be deallocated * @param bytes Size of the allocation * @param stream Stream on which to perform the deallocation */ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { logger_->info("free,{},{},{}", ptr, bytes, fmt::ptr(stream.value())); upstream_->deallocate(ptr, bytes, stream); } /** * @brief Compare the upstream resource to another. * * @param other The other resource to compare to * @return true If the two resources are equivalent * @return false If the two resources are not equal */ [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override { if (this == &other) { return true; } auto const* cast = dynamic_cast<logging_resource_adaptor<Upstream> const*>(&other); if (cast != nullptr) { return upstream_->is_equal(*cast->get_upstream()); } return upstream_->is_equal(other); } /** * @brief Get free and available memory from upstream resource. * * @throws rmm::cuda_error if unable to retrieve memory info. * * @param stream Stream on which to get the mem info. * @return std::pair contaiing free_size and total_size of memory */ [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info( cuda_stream_view stream) const override { return upstream_->get_mem_info(stream); } // make_logging_adaptor needs access to private get_default_filename template <typename T> // NOLINTNEXTLINE(readability-redundant-declaration) friend logging_resource_adaptor<T> make_logging_adaptor(T* upstream, std::string const& filename, bool auto_flush); std::shared_ptr<spdlog::logger> logger_; ///< spdlog logger object Upstream* upstream_; ///< The upstream resource used for satisfying ///< allocation requests }; /** * @brief Convenience factory to return a `logging_resource_adaptor` around the * upstream resource `upstream`. * * @tparam Upstream Type of the upstream `device_memory_resource`. 
* @param upstream Pointer to the upstream resource * @param filename Name of the file to write log info. If not specified, * retrieves the log file name from the environment variable "RMM_LOG_FILE". * @param auto_flush If true, flushes the log for every (de)allocation. Warning, this will degrade * performance. * @return The new logging resource adaptor */ template <typename Upstream> logging_resource_adaptor<Upstream> make_logging_adaptor( Upstream* upstream, std::string const& filename = logging_resource_adaptor<Upstream>::get_default_filename(), bool auto_flush = false) { return logging_resource_adaptor<Upstream>{upstream, filename, auto_flush}; } /** * @brief Convenience factory to return a `logging_resource_adaptor` around the * upstream resource `upstream`. * * @tparam Upstream Type of the upstream `device_memory_resource`. * @param upstream Pointer to the upstream resource * @param stream The ostream to write log info. * @param auto_flush If true, flushes the log for every (de)allocation. Warning, this will degrade * performance. * @return The new logging resource adaptor */ template <typename Upstream> logging_resource_adaptor<Upstream> make_logging_adaptor(Upstream* upstream, std::ostream& stream, bool auto_flush = false) { return logging_resource_adaptor<Upstream>{upstream, stream, auto_flush}; } /** @} */ // end of group } // namespace rmm::mr
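A minimal usage sketch of the logging adaptor and its factory (hypothetical, not part of the header): log each allocation and free as a CSV line to stdout. Passing a filename instead of an ostream, or setting `RMM_LOG_FILE`, writes the same CSV to a file.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/logging_resource_adaptor.hpp>

#include <iostream>

int main()
{
  rmm::mr::cuda_memory_resource upstream;

  auto log_mr = rmm::mr::make_logging_adaptor(&upstream, std::cout, /*auto_flush=*/true);

  void* ptr = log_mr.allocate(1 << 20);  // CSV line: <thread>,<time>,allocate,<pointer>,1048576,<stream>
  log_mr.deallocate(ptr, 1 << 20);       // CSV line: <thread>,<time>,free,<pointer>,1048576,<stream>
  log_mr.flush();
  return 0;
}
```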
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/callback_memory_resource.hpp
/* * Copyright (c) 2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> #include <functional> #include <utility> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief Callback function type used by callback memory resource for allocation. * * The signature of the callback function is: * `void* allocate_callback_t(std::size_t bytes, cuda_stream_view stream, void* arg); * * * Returns a pointer to an allocation of at least `bytes` usable immediately on * `stream`. The stream-ordered behavior requirements are identical to * `device_memory_resource::allocate`. * * * This signature is compatible with `do_allocate` but adds the extra function * parameter `arg`. The `arg` is provided to the constructor of the * `callback_memory_resource` and will be forwarded along to every invocation * of the callback function. */ using allocate_callback_t = std::function<void*(std::size_t, cuda_stream_view, void*)>; /** * @brief Callback function type used by callback_memory_resource for deallocation. * * The signature of the callback function is: * `void deallocate_callback_t(void* ptr, std::size_t bytes, cuda_stream_view stream, void* arg); * * * Deallocates memory pointed to by `ptr`. `bytes` specifies the size of the allocation * in bytes, and must equal the value of `bytes` that was passed to the allocate callback * function. The stream-ordered behavior requirements are identical to * `device_memory_resource::deallocate`. * * * This signature is compatible with `do_deallocate` but adds the extra function * parameter `arg`. The `arg` is provided to the constructor of the * `callback_memory_resource` and will be forwarded along to every invocation * of the callback function. */ using deallocate_callback_t = std::function<void(void*, std::size_t, cuda_stream_view, void*)>; /** * @brief A device memory resource that uses the provided callbacks for memory allocation * and deallocation. */ class callback_memory_resource final : public device_memory_resource { public: /** * @brief Construct a new callback memory resource. * * Constructs a callback memory resource that uses the user-provided callbacks * `allocate_callback` for allocation and `deallocate_callback` for deallocation. * * @param allocate_callback The callback function used for allocation * @param deallocate_callback The callback function used for deallocation * @param allocate_callback_arg Additional context passed to `allocate_callback`. * It is the caller's responsibility to maintain the lifetime of the pointed-to data * for the duration of the lifetime of the `callback_memory_resource`. * @param deallocate_callback_arg Additional context passed to `deallocate_callback`. * It is the caller's responsibility to maintain the lifetime of the pointed-to data * for the duration of the lifetime of the `callback_memory_resource`. 
*/ callback_memory_resource(allocate_callback_t allocate_callback, deallocate_callback_t deallocate_callback, void* allocate_callback_arg = nullptr, void* deallocate_callback_arg = nullptr) noexcept : allocate_callback_(allocate_callback), deallocate_callback_(deallocate_callback), allocate_callback_arg_(allocate_callback_arg), deallocate_callback_arg_(deallocate_callback_arg) { } callback_memory_resource() = delete; ~callback_memory_resource() override = default; callback_memory_resource(callback_memory_resource const&) = delete; callback_memory_resource& operator=(callback_memory_resource const&) = delete; callback_memory_resource(callback_memory_resource&&) noexcept = default; ///< @default_move_constructor callback_memory_resource& operator=(callback_memory_resource&&) noexcept = default; ///< @default_move_assignment{callback_memory_resource} private: /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * If supported by the callback, this operation may optionally be executed on * a stream. Otherwise, the stream is ignored and the null stream is used. * * @param bytes The size of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { return allocate_callback_(bytes, stream, allocate_callback_arg_); } /** * @brief Deallocate memory pointed to by \p p. * * If supported by the callback, this operation may optionally be executed on * a stream. Otherwise, the stream is ignored and the null stream is used. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation */ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { deallocate_callback_(ptr, bytes, stream, deallocate_callback_arg_); } [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(cuda_stream_view) const override { throw std::runtime_error("cannot get free / total memory"); } [[nodiscard]] bool supports_streams() const noexcept override { return false; } [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; } allocate_callback_t allocate_callback_; deallocate_callback_t deallocate_callback_; void* allocate_callback_arg_; void* deallocate_callback_arg_; }; /** @} */ // end of group } // namespace rmm::mr
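A minimal usage sketch for the callback resource (hypothetical, not part of the header): the callbacks simply forward to a `cuda_memory_resource`, with the upstream pointer passed through the `arg` parameters.

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/callback_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>

#include <cstddef>
#include <iostream>

int main()
{
  rmm::mr::cuda_memory_resource upstream;

  auto allocate_cb = [](std::size_t bytes, rmm::cuda_stream_view stream, void* arg) {
    std::cout << "allocating " << bytes << " bytes\n";
    return static_cast<rmm::mr::cuda_memory_resource*>(arg)->allocate(bytes, stream);
  };
  auto deallocate_cb = [](void* ptr, std::size_t bytes, rmm::cuda_stream_view stream, void* arg) {
    std::cout << "freeing " << bytes << " bytes\n";
    static_cast<rmm::mr::cuda_memory_resource*>(arg)->deallocate(ptr, bytes, stream);
  };

  // The third and fourth arguments are forwarded as `arg` to the respective callbacks.
  rmm::mr::callback_memory_resource mr{allocate_cb, deallocate_cb, &upstream, &upstream};

  void* ptr = mr.allocate(256);
  mr.deallocate(ptr, 256);
  return 0;
}
```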
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/tracking_resource_adaptor.hpp
/* * Copyright (c) 2020-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <rmm/detail/stack_trace.hpp> #include <rmm/logger.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <fmt/core.h> #include <cstddef> #include <map> #include <mutex> #include <shared_mutex> #include <sstream> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that uses `Upstream` to allocate memory and tracks allocations. * * An instance of this resource can be constructed with an existing, upstream * resource in order to satisfy allocation requests, but any existing allocations * will be untracked. Tracking stores a size and pointer for every allocation, and a stack * frame if `capture_stacks` is true, so it can add significant overhead. * `tracking_resource_adaptor` is intended as a debug adaptor and shouldn't be used in * performance-sensitive code. Note that callstacks may not contain all symbols unless * the project is linked with `-rdynamic`. This can be accomplished with * `add_link_options(-rdynamic)` in cmake. * * @tparam Upstream Type of the upstream resource used for * allocation/deallocation. */ template <typename Upstream> class tracking_resource_adaptor final : public device_memory_resource { public: // can be a std::shared_mutex once C++17 is adopted using read_lock_t = std::shared_lock<std::shared_timed_mutex>; ///< Type of lock used to synchronize read access using write_lock_t = std::unique_lock<std::shared_timed_mutex>; ///< Type of lock used to synchronize write access /** * @brief Information stored about an allocation. Includes the size * and a stack trace if the `tracking_resource_adaptor` was initialized * to capture stacks. * */ struct allocation_info { std::unique_ptr<rmm::detail::stack_trace> strace; ///< Stack trace of the allocation std::size_t allocation_size; ///< Size of the allocation allocation_info() = delete; /** * @brief Construct a new allocation info object * * @param size Size of the allocation * @param capture_stack If true, capture the stack trace for the allocation */ allocation_info(std::size_t size, bool capture_stack) : strace{[&]() { return capture_stack ? std::make_unique<rmm::detail::stack_trace>() : nullptr; }()}, allocation_size{size} {}; }; /** * @brief Construct a new tracking resource adaptor using `upstream` to satisfy * allocation requests. 
* * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory * @param capture_stacks If true, capture stacks for allocation calls */ tracking_resource_adaptor(Upstream* upstream, bool capture_stacks = false) : capture_stacks_{capture_stacks}, allocated_bytes_{0}, upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); } tracking_resource_adaptor() = delete; ~tracking_resource_adaptor() override = default; tracking_resource_adaptor(tracking_resource_adaptor const&) = delete; tracking_resource_adaptor(tracking_resource_adaptor&&) noexcept = default; ///< @default_move_constructor tracking_resource_adaptor& operator=(tracking_resource_adaptor const&) = delete; tracking_resource_adaptor& operator=(tracking_resource_adaptor&&) noexcept = default; ///< @default_move_assignment{tracking_resource_adaptor} /** * @briefreturn{Pointer to the upstream resource} */ Upstream* get_upstream() const noexcept { return upstream_; } /** * @brief Checks whether the upstream resource supports streams. * * @return true The upstream resource supports streams * @return false The upstream resource does not support streams. */ bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } /** * @brief Get the outstanding allocations map * * @return std::map<void*, allocation_info> const& of a map of allocations. The key * is the allocated memory pointer and the data is the allocation_info structure, which * contains size and, potentially, stack traces. */ std::map<void*, allocation_info> const& get_outstanding_allocations() const noexcept { return allocations_; } /** * @brief Query the number of bytes that have been allocated. Note that * this can not be used to know how large of an allocation is possible due * to both possible fragmentation and also internal page sizes and alignment * that is not tracked by this allocator. * * @return std::size_t number of bytes that have been allocated through this * allocator. */ std::size_t get_allocated_bytes() const noexcept { return allocated_bytes_; } /** * @brief Gets a string containing the outstanding allocation pointers, their * size, and optionally the stack trace for when each pointer was allocated. * * Stack traces are only included if this resource adaptor was created with * `capture_stack == true`. Otherwise, outstanding allocation pointers will be * shown with their size and empty stack traces. * * @return std::string Containing the outstanding allocation pointers. 
   */
  std::string get_outstanding_allocations_str() const
  {
    read_lock_t lock(mtx_);

    std::ostringstream oss;

    if (!allocations_.empty()) {
      for (auto const& alloc : allocations_) {
        oss << alloc.first << ": " << alloc.second.allocation_size << " B";
        if (alloc.second.strace != nullptr) {
          oss << " : callstack:" << std::endl << *alloc.second.strace;
        }
        oss << std::endl;
      }
    }

    return oss.str();
  }

  /**
   * @brief Log any outstanding allocations via RMM_LOG_DEBUG
   *
   */
  void log_outstanding_allocations() const
  {
#if SPDLOG_ACTIVE_LEVEL <= SPDLOG_LEVEL_DEBUG
    RMM_LOG_DEBUG("Outstanding Allocations: {}", get_outstanding_allocations_str());
#endif  // SPDLOG_ACTIVE_LEVEL <= SPDLOG_LEVEL_DEBUG
  }

 private:
  /**
   * @brief Allocates memory of size at least `bytes` using the upstream
   * resource and tracks the allocation.
   *
   * The returned pointer has at least 256B alignment.
   *
   * @throws rmm::bad_alloc if the requested allocation could not be fulfilled
   * by the upstream resource.
   *
   * @param bytes The size, in bytes, of the allocation
   * @param stream Stream on which to perform the allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, cuda_stream_view stream) override
  {
    void* ptr = upstream_->allocate(bytes, stream);
    // track it.
    {
      write_lock_t lock(mtx_);
      allocations_.emplace(ptr, allocation_info{bytes, capture_stacks_});
    }
    allocated_bytes_ += bytes;

    return ptr;
  }

  /**
   * @brief Free allocation of size `bytes` pointed to by `ptr`
   *
   * @param ptr Pointer to be deallocated
   * @param bytes Size of the allocation
   * @param stream Stream on which to perform the deallocation
   */
  void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override
  {
    upstream_->deallocate(ptr, bytes, stream);
    {
      write_lock_t lock(mtx_);
      const auto found = allocations_.find(ptr);

      // Ensure the allocation is found and the number of bytes match
      if (found == allocations_.end()) {
        // Don't throw but log an error. Throwing in a destructor (or any noexcept) will call
        // std::terminate
        RMM_LOG_ERROR(
          "Deallocating a pointer that was not tracked. Ptr: {:p} [{}B], Current Num. Allocations: "
          "{}",
          fmt::ptr(ptr),
          bytes,
          this->allocations_.size());
      } else {
        // Read the tracked size before erasing; the iterator is invalidated by erase().
        auto allocated_bytes = found->second.allocation_size;
        allocations_.erase(found);

        if (allocated_bytes != bytes) {
          // Don't throw but log an error. Throwing in a destructor (or any noexcept) will call
          // std::terminate
          RMM_LOG_ERROR(
            "Alloc bytes ({}) and Dealloc bytes ({}) do not match", allocated_bytes, bytes);
          bytes = allocated_bytes;
        }
      }
    }
    allocated_bytes_ -= bytes;
  }

  /**
   * @brief Compare the upstream resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto cast = dynamic_cast<tracking_resource_adaptor<Upstream> const*>(&other);
    return cast != nullptr ? upstream_->is_equal(*cast->get_upstream())
                           : upstream_->is_equal(other);
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
* @return std::pair contaiing free_size and total_size of memory */ std::pair<std::size_t, std::size_t> do_get_mem_info(cuda_stream_view stream) const override { return upstream_->get_mem_info(stream); } bool capture_stacks_; // whether or not to capture call stacks std::map<void*, allocation_info> allocations_; // map of active allocations std::atomic<std::size_t> allocated_bytes_; // number of bytes currently allocated std::shared_timed_mutex mutable mtx_; // mutex for thread safe access to allocations_ Upstream* upstream_; // the upstream resource used for satisfying allocation requests }; /** * @brief Convenience factory to return a `tracking_resource_adaptor` around the * upstream resource `upstream`. * * @tparam Upstream Type of the upstream `device_memory_resource`. * @param upstream Pointer to the upstream resource * @return The new tracking resource adaptor */ template <typename Upstream> tracking_resource_adaptor<Upstream> make_tracking_adaptor(Upstream* upstream) { return tracking_resource_adaptor<Upstream>{upstream}; } /** @} */ // end of group } // namespace rmm::mr
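Usage note: a minimal sketch of exercising the tracking adaptor above. It assumes only the constructor, `get_allocated_bytes`, and `get_outstanding_allocations_str` shown in this header, with `rmm::mr::cuda_memory_resource` as the upstream; the 1 KiB size and variable names are illustrative.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/tracking_resource_adaptor.hpp>

#include <iostream>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;

  // Wrap the upstream resource; `true` also records a call stack per allocation.
  rmm::mr::tracking_resource_adaptor<rmm::mr::cuda_memory_resource> tracking_mr{&cuda_mr, true};

  void* ptr = tracking_mr.allocate(1024);
  std::cout << "currently allocated bytes: " << tracking_mr.get_allocated_bytes() << "\n";

  // Every allocation not yet freed is listed here, with its size (and stack trace if captured).
  std::cout << tracking_mr.get_outstanding_allocations_str();

  tracking_mr.deallocate(ptr, 1024);
  return 0;
}
```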
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/thrust_allocator_adaptor.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/per_device_resource.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/device_malloc_allocator.h> #include <thrust/device_ptr.h> #include <thrust/memory.h> #include <cuda/memory_resource> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief An `allocator` compatible with Thrust containers and algorithms using * a `device_memory_resource` for memory (de)allocation. * * Unlike a `device_memory_resource`, `thrust_allocator` is typed and bound to * allocate objects of a specific type `T`, but can be freely rebound to other * types. * * @tparam T The type of the objects that will be allocated by this allocator */ template <typename T> class thrust_allocator : public thrust::device_malloc_allocator<T> { using async_resource_ref = cuda::mr::async_resource_ref<cuda::mr::device_accessible>; public: using Base = thrust::device_malloc_allocator<T>; ///< The base type of this allocator using pointer = typename Base::pointer; ///< The pointer type using size_type = typename Base::size_type; ///< The size type /** * @brief Provides the type of a `thrust_allocator` instantiated with another * type. * * @tparam U the other type to use for instantiation */ template <typename U> struct rebind { using other = thrust_allocator<U>; ///< The type to bind to }; /** * @brief Default constructor creates an allocator using the default memory * resource and default stream. */ thrust_allocator() = default; /** * @brief Constructs a `thrust_allocator` using the default device memory * resource and specified stream. * * @param stream The stream to be used for device memory (de)allocation */ explicit thrust_allocator(cuda_stream_view stream) : _stream{stream} {} /** * @brief Constructs a `thrust_allocator` using a device memory resource and * stream. * * @param mr The resource to be used for device memory allocation * @param stream The stream to be used for device memory (de)allocation */ thrust_allocator(cuda_stream_view stream, async_resource_ref mr) : _stream{stream}, _mr(mr) {} /** * @brief Copy constructor. Copies the resource pointer and stream. 
* * @param other The `thrust_allocator` to copy */ template <typename U> thrust_allocator(thrust_allocator<U> const& other) : _mr(other.resource()), _stream{other.stream()} { } /** * @brief Allocate objects of type `T` * * @param num The number of elements of type `T` to allocate * @return pointer Pointer to the newly allocated storage */ pointer allocate(size_type num) { return thrust::device_pointer_cast( static_cast<T*>(_mr.allocate_async(num * sizeof(T), _stream))); } /** * @brief Deallocates objects of type `T` * * @param ptr Pointer returned by a previous call to `allocate` * @param num number of elements, *must* be equal to the argument passed to the * prior `allocate` call that produced `p` */ void deallocate(pointer ptr, size_type num) { return _mr.deallocate_async(thrust::raw_pointer_cast(ptr), num * sizeof(T), _stream); } /** * @briefreturn{The async_resource_ref used to allocate and deallocate} */ [[nodiscard]] async_resource_ref memory_resource() const noexcept { return _mr; } /** * @briefreturn{The stream used by this allocator} */ [[nodiscard]] cuda_stream_view stream() const noexcept { return _stream; } /** * @brief Enables the `cuda::mr::device_accessible` property * * This property declares that a `thrust_allocator` provides device accessible memory */ friend void get_property(thrust_allocator const&, cuda::mr::device_accessible) noexcept {} private: cuda_stream_view _stream{}; async_resource_ref _mr{rmm::mr::get_current_device_resource()}; }; /** @} */ // end of group } // namespace rmm::mr
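Usage note: a minimal sketch of driving the allocator above directly through `allocate`/`deallocate`; the default-constructed allocator uses the current device resource and the default stream, as shown in this header. In practice it is more commonly supplied as the allocator template argument of a Thrust container or used indirectly through RMM's Thrust execution policy helpers.

```cpp
#include <rmm/mr/device/thrust_allocator_adaptor.hpp>

int main()
{
  // Default construction: current device resource, default stream.
  rmm::mr::thrust_allocator<int> alloc{};

  // allocate() returns a thrust::device_ptr<int> that Thrust algorithms can consume.
  auto d_ptr = alloc.allocate(1000);

  // ... run Thrust algorithms on alloc.stream() using d_ptr ...

  // The element count must match the one passed to allocate().
  alloc.deallocate(d_ptr, 1000);
  return 0;
}
```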
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/cuda_memory_resource.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <cstddef> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief `device_memory_resource` derived class that uses cudaMalloc/Free for * allocation/deallocation. */ class cuda_memory_resource final : public device_memory_resource { public: cuda_memory_resource() = default; ~cuda_memory_resource() override = default; cuda_memory_resource(cuda_memory_resource const&) = default; ///< @default_copy_constructor cuda_memory_resource(cuda_memory_resource&&) = default; ///< @default_move_constructor cuda_memory_resource& operator=(cuda_memory_resource const&) = default; ///< @default_copy_assignment{cuda_memory_resource} cuda_memory_resource& operator=(cuda_memory_resource&&) = default; ///< @default_move_assignment{cuda_memory_resource} /** * @brief Query whether the resource supports use of non-null CUDA streams for * allocation/deallocation. `cuda_memory_resource` does not support streams. * * @returns bool false */ [[nodiscard]] bool supports_streams() const noexcept override { return false; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return true */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return true; } private: /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * The stream argument is ignored. * * @param bytes The size of the allocation * @param stream This argument is ignored * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, [[maybe_unused]] cuda_stream_view stream) override { void* ptr{nullptr}; RMM_CUDA_TRY_ALLOC(cudaMalloc(&ptr, bytes)); return ptr; } /** * @brief Deallocate memory pointed to by \p p. * * The stream argument is ignored. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream This argument is ignored. */ void do_deallocate(void* ptr, [[maybe_unused]] std::size_t bytes, [[maybe_unused]] cuda_stream_view stream) override { RMM_ASSERT_CUDA_SUCCESS(cudaFree(ptr)); } /** * @brief Compare this resource to another. * * Two cuda_memory_resources always compare equal, because they can each * deallocate memory allocated by the other. * * @param other The other resource to compare to * @return true If the two resources are equivalent * @return false If the two resources are not equal */ [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override { return dynamic_cast<cuda_memory_resource const*>(&other) != nullptr; } /** * @brief Get free and available memory for memory resource * * @throws rmm::cuda_error if unable to retrieve memory info. 
   *
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(cuda_stream_view) const override
  {
    std::size_t free_size{};
    std::size_t total_size{};
    RMM_CUDA_TRY(cudaMemGetInfo(&free_size, &total_size));
    return std::make_pair(free_size, total_size);
  }
};

/** @} */  // end of group
}  // namespace rmm::mr
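Usage note: a minimal sketch of direct use of `cuda_memory_resource`; the 4 KiB size is arbitrary.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource mr;

  // The stream argument is ignored by this resource: cudaMalloc/cudaFree are synchronous.
  void* ptr = mr.allocate(4096);

  // `bytes` must equal the size passed to the matching allocate() call.
  mr.deallocate(ptr, 4096);

  // Any two cuda_memory_resource instances compare equal.
  rmm::mr::cuda_memory_resource other;
  bool same = mr.is_equal(other);  // true
  (void)same;
  return 0;
}
```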
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/managed_memory_resource.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/error.hpp> #include <cstddef> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief `device_memory_resource` derived class that uses * cudaMallocManaged/Free for allocation/deallocation. */ class managed_memory_resource final : public device_memory_resource { public: managed_memory_resource() = default; ~managed_memory_resource() override = default; managed_memory_resource(managed_memory_resource const&) = default; ///< @default_copy_constructor managed_memory_resource(managed_memory_resource&&) = default; ///< @default_move_constructor managed_memory_resource& operator=(managed_memory_resource const&) = default; ///< @default_copy_assignment{managed_memory_resource} managed_memory_resource& operator=(managed_memory_resource&&) = default; ///< @default_move_assignment{managed_memory_resource} /** * @brief Query whether the resource supports use of non-null streams for * allocation/deallocation. * * @returns false */ [[nodiscard]] bool supports_streams() const noexcept override { return false; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return true */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return true; } private: /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * The stream is ignored. * * @param bytes The size of the allocation * @param stream This argument is ignored * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, [[maybe_unused]] cuda_stream_view stream) override { // FIXME: Unlike cudaMalloc, cudaMallocManaged will throw an error for 0 // size allocations. if (bytes == 0) { return nullptr; } void* ptr{nullptr}; RMM_CUDA_TRY_ALLOC(cudaMallocManaged(&ptr, bytes)); return ptr; } /** * @brief Deallocate memory pointed to by \p p. * * The stream is ignored. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream This argument is ignored */ void do_deallocate(void* ptr, [[maybe_unused]] std::size_t bytes, [[maybe_unused]] cuda_stream_view stream) override { RMM_ASSERT_CUDA_SUCCESS(cudaFree(ptr)); } /** * @brief Compare this resource to another. * * Two `managed_memory_resources` always compare equal, because they can each * deallocate memory allocated by the other. 
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    return dynamic_cast<managed_memory_resource const*>(&other) != nullptr;
  }

  /**
   * @brief Get free and available memory for memory resource
   *
   * @throws rmm::cuda_error if unable to retrieve memory info
   *
   * @param stream to execute on
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    [[maybe_unused]] cuda_stream_view stream) const override
  {
    std::size_t free_size{};
    std::size_t total_size{};
    RMM_CUDA_TRY(cudaMemGetInfo(&free_size, &total_size));
    return std::make_pair(free_size, total_size);
  }
};

/** @} */  // end of group
}  // namespace rmm::mr
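Usage note: a minimal sketch of `managed_memory_resource`, assuming a system where unified memory is host-accessible in this way; the size is arbitrary.

```cpp
#include <rmm/mr/device/managed_memory_resource.hpp>

int main()
{
  rmm::mr::managed_memory_resource mr;

  // Managed (unified) memory is addressable from both host and device.
  auto* data = static_cast<int*>(mr.allocate(100 * sizeof(int)));
  data[0] = 42;  // direct host access, no explicit cudaMemcpy required

  mr.deallocate(data, 100 * sizeof(int));
  return 0;
}
```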
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/statistics_resource_adaptor.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> #include <mutex> #include <shared_mutex> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that uses `Upstream` to allocate memory and tracks statistics * on memory allocations. * * An instance of this resource can be constructed with an existing, upstream * resource in order to satisfy allocation requests, but any existing * allocations will be untracked. Tracking statistics stores the current, peak * and total memory allocations for both the number of bytes and number of calls * to the memory resource. `statistics_resource_adaptor` is intended as a debug * adaptor and shouldn't be used in performance-sensitive code. * * @tparam Upstream Type of the upstream resource used for * allocation/deallocation. */ template <typename Upstream> class statistics_resource_adaptor final : public device_memory_resource { public: // can be a std::shared_mutex once C++17 is adopted using read_lock_t = std::shared_lock<std::shared_timed_mutex>; ///< Type of lock used to synchronize read access using write_lock_t = std::unique_lock<std::shared_timed_mutex>; ///< Type of lock used to synchronize write access /** * @brief Utility struct for counting the current, peak, and total value of a number */ struct counter { int64_t value{0}; ///< Current value int64_t peak{0}; ///< Max value of `value` int64_t total{0}; ///< Sum of all added values /** * @brief Add `val` to the current value and update the peak value if necessary * * @param val Value to add * @return Reference to this object */ counter& operator+=(int64_t val) { value += val; total += val; peak = std::max(value, peak); return *this; } /** * @brief Subtract `val` from the current value and update the peak value if necessary * * @param val Value to subtract * @return Reference to this object */ counter& operator-=(int64_t val) { value -= val; return *this; } }; /** * @brief Construct a new statistics resource adaptor using `upstream` to satisfy * allocation requests. 
* * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory */ statistics_resource_adaptor(Upstream* upstream) : upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); } statistics_resource_adaptor() = delete; ~statistics_resource_adaptor() override = default; statistics_resource_adaptor(statistics_resource_adaptor const&) = delete; statistics_resource_adaptor& operator=(statistics_resource_adaptor const&) = delete; statistics_resource_adaptor(statistics_resource_adaptor&&) noexcept = default; ///< @default_move_constructor statistics_resource_adaptor& operator=(statistics_resource_adaptor&&) noexcept = default; ///< @default_move_assignment{statistics_resource_adaptor} /** * @briefreturn{Pointer to the upstream resource} */ Upstream* get_upstream() const noexcept { return upstream_; } /** * @brief Checks whether the upstream resource supports streams. * * @return true The upstream resource supports streams * @return false The upstream resource does not support streams. */ bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } /** * @brief Returns a `counter` struct for this adaptor containing the current, * peak, and total number of allocated bytes for this * adaptor since it was created. * * @return counter struct containing bytes count */ counter get_bytes_counter() const noexcept { read_lock_t lock(mtx_); return bytes_; } /** * @brief Returns a `counter` struct for this adaptor containing the current, * peak, and total number of allocation counts for this adaptor since it was * created. * * @return counter struct containing allocations count */ counter get_allocations_counter() const noexcept { read_lock_t lock(mtx_); return allocations_; } private: /** * @brief Allocates memory of size at least `bytes` using the upstream * resource as long as it fits inside the allocation limit. * * The returned pointer has at least 256B alignment. * * @throws rmm::bad_alloc if the requested allocation could not be fulfilled * by the upstream resource. * * @param bytes The size, in bytes, of the allocation * @param stream Stream on which to perform the allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { void* ptr = upstream_->allocate(bytes, stream); // increment the stats { write_lock_t lock(mtx_); // Increment the allocation_count_ while we have the lock bytes_ += bytes; allocations_ += 1; } return ptr; } /** * @brief Free allocation of size `bytes` pointed to by `ptr` * * @param ptr Pointer to be deallocated * @param bytes Size of the allocation * @param stream Stream on which to perform the deallocation */ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { upstream_->deallocate(ptr, bytes, stream); { write_lock_t lock(mtx_); // Decrement the current allocated counts. bytes_ -= bytes; allocations_ -= 1; } } /** * @brief Compare the upstream resource to another. 
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto cast = dynamic_cast<statistics_resource_adaptor<Upstream> const*>(&other);
    return cast != nullptr ? upstream_->is_equal(*cast->get_upstream())
                           : upstream_->is_equal(other);
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
   * @return std::pair containing free_size and total_size of memory
   */
  std::pair<std::size_t, std::size_t> do_get_mem_info(cuda_stream_view stream) const override
  {
    return upstream_->get_mem_info(stream);
  }

  counter bytes_;                        // peak, current and total allocated bytes
  counter allocations_;                  // peak, current and total allocation count
  std::shared_timed_mutex mutable mtx_;  // mutex for thread safe access to allocations_
  Upstream* upstream_;  // the upstream resource used for satisfying allocation requests
};

/**
 * @brief Convenience factory to return a `statistics_resource_adaptor` around the
 * upstream resource `upstream`.
 *
 * @tparam Upstream Type of the upstream `device_memory_resource`.
 * @param upstream Pointer to the upstream resource
 * @return The new statistics resource adaptor
 */
template <typename Upstream>
statistics_resource_adaptor<Upstream> make_statistics_adaptor(Upstream* upstream)
{
  return statistics_resource_adaptor<Upstream>{upstream};
}

/** @} */  // end of group
}  // namespace rmm::mr
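Usage note: a minimal sketch of the statistics adaptor, using only `make_statistics_adaptor` and `get_bytes_counter` from this header with `cuda_memory_resource` as upstream; the sizes and expected counter values are illustrative.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/statistics_resource_adaptor.hpp>

#include <iostream>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;
  auto stats_mr = rmm::mr::make_statistics_adaptor(&cuda_mr);

  void* ptr1 = stats_mr.allocate(1024);
  void* ptr2 = stats_mr.allocate(2048);
  stats_mr.deallocate(ptr1, 1024);

  // value = currently allocated, peak = high-water mark, total = running sum of all allocations.
  auto const bytes = stats_mr.get_bytes_counter();
  std::cout << "current: " << bytes.value << " peak: " << bytes.peak << " total: " << bytes.total
            << "\n";  // expected: current: 2048 peak: 3072 total: 3072

  stats_mr.deallocate(ptr2, 2048);
  return 0;
}
```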
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/binning_memory_resource.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/aligned.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/mr/device/fixed_size_memory_resource.hpp> #include <cuda_runtime_api.h> #include <algorithm> #include <cassert> #include <map> #include <memory> #include <vector> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief Allocates memory from upstream resources associated with bin sizes. * * @tparam UpstreamResource memory_resource to use for allocations that don't fall within any * configured bin size. Implements rmm::mr::device_memory_resource interface. */ template <typename Upstream> class binning_memory_resource final : public device_memory_resource { public: /** * @brief Construct a new binning memory resource object. * * Initially has no bins, so simply uses the upstream_resource until bin resources are added * with `add_bin`. * * @throws rmm::logic_error if size_base is not a power of two. * * @param upstream_resource The upstream memory resource used to allocate bin pools. */ explicit binning_memory_resource(Upstream* upstream_resource) : upstream_mr_{[upstream_resource]() { RMM_EXPECTS(nullptr != upstream_resource, "Unexpected null upstream pointer."); return upstream_resource; }()} { } /** * @brief Construct a new binning memory resource object with a range of initial bins. * * Constructs a new binning memory resource and adds bins backed by `fixed_size_memory_resource` * in the range [2^min_size_exponent, 2^max_size_exponent]. For example if `min_size_exponent==18` * and `max_size_exponent==22`, creates bins of sizes 256KiB, 512KiB, 1024KiB, 2048KiB and * 4096KiB. * * @param upstream_resource The upstream memory resource used to allocate bin pools. * @param min_size_exponent The minimum base-2 exponent bin size. * @param max_size_exponent The maximum base-2 exponent bin size. */ binning_memory_resource(Upstream* upstream_resource, int8_t min_size_exponent, // NOLINT(bugprone-easily-swappable-parameters) int8_t max_size_exponent) : upstream_mr_{[upstream_resource]() { RMM_EXPECTS(nullptr != upstream_resource, "Unexpected null upstream pointer."); return upstream_resource; }()} { for (auto i = min_size_exponent; i <= max_size_exponent; i++) { add_bin(1 << i); } } /** * @brief Destroy the binning_memory_resource and free all memory allocated from the upstream * resource. */ ~binning_memory_resource() override = default; binning_memory_resource() = delete; binning_memory_resource(binning_memory_resource const&) = delete; binning_memory_resource(binning_memory_resource&&) = delete; binning_memory_resource& operator=(binning_memory_resource const&) = delete; binning_memory_resource& operator=(binning_memory_resource&&) = delete; /** * @brief Query whether the resource supports use of non-null streams for * allocation/deallocation. 
* * @returns true */ [[nodiscard]] bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return false */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; } /** * @brief Get the upstream memory_resource object. * * @return UpstreamResource* the upstream memory resource. */ [[nodiscard]] Upstream* get_upstream() const noexcept { return upstream_mr_; } /** * @brief Add a bin allocator to this resource * * Adds `bin_resource` if it is not null; otherwise constructs and adds a * fixed_size_memory_resource. * * This bin will be used for any allocation smaller than `allocation_size` that is larger than * the next smaller bin's allocation size. * * If there is already a bin of the specified size nothing is changed. * * This function is not thread safe. * * @param allocation_size The maximum size that this bin allocates * @param bin_resource The memory resource for the bin */ void add_bin(std::size_t allocation_size, device_memory_resource* bin_resource = nullptr) { allocation_size = rmm::detail::align_up(allocation_size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); if (nullptr != bin_resource) { resource_bins_.insert({allocation_size, bin_resource}); } else if (resource_bins_.count(allocation_size) == 0) { // do nothing if bin already exists owned_bin_resources_.push_back( std::make_unique<fixed_size_memory_resource<Upstream>>(upstream_mr_, allocation_size)); resource_bins_.insert({allocation_size, owned_bin_resources_.back().get()}); } } private: /** * @brief Get the memory resource for the requested size * * Chooses a memory_resource that allocates the smallest blocks at least as large as `bytes`. * * @param bytes Requested allocation size in bytes * @return rmm::mr::device_memory_resource& memory_resource that can allocate the requested size. */ device_memory_resource* get_resource(std::size_t bytes) { auto iter = resource_bins_.lower_bound(bytes); return (iter != resource_bins_.cend()) ? iter->second : static_cast<device_memory_resource*>(get_upstream()); } /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * @param bytes The size of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { if (bytes <= 0) { return nullptr; } return get_resource(bytes)->allocate(bytes, stream); } /** * @brief Deallocate memory pointed to by \p p. * * @throws nothing * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation */ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { auto res = get_resource(bytes); if (res != nullptr) { res->deallocate(ptr, bytes, stream); } } /** * @brief Get free and available memory for memory resource * * @throws std::runtime_error if we could not get free / total memory * * @param stream the stream being executed on * @return std::pair with available and free memory for resource */ [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info( [[maybe_unused]] cuda_stream_view stream) const override { return std::make_pair(0, 0); } Upstream* upstream_mr_; // The upstream memory_resource from which to allocate blocks. 
std::vector<std::unique_ptr<fixed_size_memory_resource<Upstream>>> owned_bin_resources_; std::map<std::size_t, device_memory_resource*> resource_bins_; }; /** @} */ // end of group } // namespace rmm::mr
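Usage note: a minimal sketch of the binning resource, using the exponent-range constructor and `add_bin` from this header; the bin range and request sizes are illustrative.

```cpp
#include <rmm/mr/device/binning_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;

  // Fixed-size bins of 256 KiB .. 4 MiB (2^18 .. 2^22); each bin is a fixed_size_memory_resource.
  rmm::mr::binning_memory_resource<rmm::mr::cuda_memory_resource> mr{&cuda_mr, 18, 22};

  // An additional bin can be added explicitly (8 MiB here).
  mr.add_bin(std::size_t{8} << 20);

  void* small_alloc = mr.allocate(100 * 1024);             // served by the 256 KiB bin
  void* large_alloc = mr.allocate(std::size_t{64} << 20);  // larger than any bin: falls through to upstream

  mr.deallocate(small_alloc, 100 * 1024);
  mr.deallocate(large_alloc, std::size_t{64} << 20);
  return 0;
}
```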
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/limiting_resource_adaptor.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/aligned.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that uses `Upstream` to allocate memory and limits the total * allocations possible. * * An instance of this resource can be constructed with an existing, upstream * resource in order to satisfy allocation requests, but any existing allocations * will be untracked. Atomics are used to make this thread-safe, but note that * the `get_allocated_bytes` may not include in-flight allocations. * * @tparam Upstream Type of the upstream resource used for * allocation/deallocation. */ template <typename Upstream> class limiting_resource_adaptor final : public device_memory_resource { public: /** * @brief Construct a new limiting resource adaptor using `upstream` to satisfy * allocation requests and limiting the total allocation amount possible. * * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory * @param allocation_limit Maximum memory allowed for this allocator * @param alignment Alignment in bytes for the start of each allocated buffer */ limiting_resource_adaptor(Upstream* upstream, std::size_t allocation_limit, std::size_t alignment = rmm::detail::CUDA_ALLOCATION_ALIGNMENT) : allocation_limit_{allocation_limit}, allocated_bytes_(0), alignment_(alignment), upstream_{upstream} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); } limiting_resource_adaptor() = delete; ~limiting_resource_adaptor() override = default; limiting_resource_adaptor(limiting_resource_adaptor const&) = delete; limiting_resource_adaptor(limiting_resource_adaptor&&) noexcept = default; ///< @default_move_constructor limiting_resource_adaptor& operator=(limiting_resource_adaptor const&) = delete; limiting_resource_adaptor& operator=(limiting_resource_adaptor&&) noexcept = default; ///< @default_move_assignment{limiting_resource_adaptor} /** * @briefreturn{Pointer to the upstream resource} */ [[nodiscard]] Upstream* get_upstream() const noexcept { return upstream_; } /** * @brief Checks whether the upstream resource supports streams. * * @return true The upstream resource supports streams * @return false The upstream resource does not support streams. */ [[nodiscard]] bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } /** * @brief Query the number of bytes that have been allocated. 
Note that
   * this cannot be used to know how large of an allocation is possible due
   * to both possible fragmentation and also internal page sizes and alignment
   * that is not tracked by this allocator.
   *
   * @return std::size_t number of bytes that have been allocated through this
   * allocator.
   */
  [[nodiscard]] std::size_t get_allocated_bytes() const { return allocated_bytes_; }

  /**
   * @brief Query the maximum number of bytes that this allocator is allowed
   * to allocate. This is the limit on the allocator and not a representation of
   * the underlying device. The device may not be able to support this limit.
   *
   * @return std::size_t max number of bytes allowed for this allocator
   */
  [[nodiscard]] std::size_t get_allocation_limit() const { return allocation_limit_; }

 private:
  /**
   * @brief Allocates memory of size at least `bytes` using the upstream
   * resource as long as it fits inside the allocation limit.
   *
   * The returned pointer has at least 256B alignment.
   *
   * @throws rmm::bad_alloc if the requested allocation could not be fulfilled
   * by the upstream resource.
   *
   * @param bytes The size, in bytes, of the allocation
   * @param stream Stream on which to perform the allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, cuda_stream_view stream) override
  {
    auto const proposed_size = rmm::detail::align_up(bytes, alignment_);
    auto const old           = allocated_bytes_.fetch_add(proposed_size);
    if (old + proposed_size <= allocation_limit_) {
      try {
        return upstream_->allocate(bytes, stream);
      } catch (...) {
        allocated_bytes_ -= proposed_size;
        throw;
      }
    }

    allocated_bytes_ -= proposed_size;
    RMM_FAIL("Exceeded memory limit", rmm::out_of_memory);
  }

  /**
   * @brief Free allocation of size `bytes` pointed to by `ptr`
   *
   * @param ptr Pointer to be deallocated
   * @param bytes Size of the allocation
   * @param stream Stream on which to perform the deallocation
   */
  void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override
  {
    std::size_t allocated_size = rmm::detail::align_up(bytes, alignment_);
    upstream_->deallocate(ptr, bytes, stream);
    allocated_bytes_ -= allocated_size;
  }

  /**
   * @brief Compare the upstream resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto const* cast = dynamic_cast<limiting_resource_adaptor<Upstream> const*>(&other);
    if (cast != nullptr) { return upstream_->is_equal(*cast->get_upstream()); }
    return upstream_->is_equal(other);
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    [[maybe_unused]] cuda_stream_view stream) const override
  {
    return {allocation_limit_ - allocated_bytes_, allocation_limit_};
  }

  // maximum bytes this allocator is allowed to allocate.
  std::size_t allocation_limit_;

  // number of currently-allocated bytes
  std::atomic<std::size_t> allocated_bytes_;

  // todo: should be some way to ask the upstream...
std::size_t alignment_; Upstream* upstream_; ///< The upstream resource used for satisfying ///< allocation requests }; /** * @brief Convenience factory to return a `limiting_resource_adaptor` around the * upstream resource `upstream`. * * @tparam Upstream Type of the upstream `device_memory_resource`. * @param upstream Pointer to the upstream resource * @param allocation_limit Maximum amount of memory to allocate * @return The new limiting resource adaptor */ template <typename Upstream> limiting_resource_adaptor<Upstream> make_limiting_adaptor(Upstream* upstream, std::size_t allocation_limit) { return limiting_resource_adaptor<Upstream>{upstream, allocation_limit}; } /** @} */ // end of group } // namespace rmm::mr
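Usage note: a minimal sketch of the limiting adaptor, using `make_limiting_adaptor` and relying on the `rmm::out_of_memory` exception raised in `do_allocate` above; the 1 GiB limit and the request sizes are illustrative.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/limiting_resource_adaptor.hpp>

#include <iostream>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;

  // Cap allocations made through this adaptor at 1 GiB.
  auto limited_mr = rmm::mr::make_limiting_adaptor(&cuda_mr, std::size_t{1} << 30);

  void* ptr = limited_mr.allocate(std::size_t{256} << 20);  // 256 MiB, within the limit

  try {
    // A request that would exceed the remaining budget throws rmm::out_of_memory
    // without touching the upstream resource.
    void* too_big = limited_mr.allocate(std::size_t{2} << 30);
    limited_mr.deallocate(too_big, std::size_t{2} << 30);
  } catch (rmm::out_of_memory const& err) {
    std::cout << "allocation rejected: " << err.what() << "\n";
  }

  limited_mr.deallocate(ptr, std::size_t{256} << 20);
  return 0;
}
```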
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/aligned_resource_adaptor.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> #include <mutex> #include <optional> #include <unordered_map> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource that adapts `Upstream` memory resource to allocate memory in a specified * alignment size. * * An instance of this resource can be constructed with an existing, upstream resource in order * to satisfy allocation requests. This adaptor wraps allocations and deallocations from Upstream * using the given alignment size. * * By default, any address returned by one of the memory allocation routines from the CUDA driver or * runtime API is always aligned to at least 256 bytes. For some use cases, such as GPUDirect * Storage (GDS), allocations need to be aligned to a larger size (4 KiB for GDS) in order to avoid * additional copies to bounce buffers. * * Since a larger alignment size has some additional overhead, the user can specify a threshold * size. If an allocation's size falls below the threshold, it is aligned to the default size. Only * allocations with a size above the threshold are aligned to the custom alignment size. * * @tparam Upstream Type of the upstream resource used for allocation/deallocation. */ template <typename Upstream> class aligned_resource_adaptor final : public device_memory_resource { public: /** * @brief Construct an aligned resource adaptor using `upstream` to satisfy allocation requests. * * @throws rmm::logic_error if `upstream == nullptr` * @throws rmm::logic_error if `allocation_alignment` is not a power of 2 * * @param upstream The resource used for allocating/deallocating device memory. * @param alignment The size used for allocation alignment. * @param alignment_threshold Only allocations with a size larger than or equal to this threshold * are aligned. */ explicit aligned_resource_adaptor(Upstream* upstream, std::size_t alignment = rmm::detail::CUDA_ALLOCATION_ALIGNMENT, std::size_t alignment_threshold = default_alignment_threshold) : upstream_{upstream}, alignment_{alignment}, alignment_threshold_{alignment_threshold} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); RMM_EXPECTS(rmm::detail::is_supported_alignment(alignment), "Allocation alignment is not a power of 2."); } aligned_resource_adaptor() = delete; ~aligned_resource_adaptor() override = default; aligned_resource_adaptor(aligned_resource_adaptor const&) = delete; aligned_resource_adaptor(aligned_resource_adaptor&&) = delete; aligned_resource_adaptor& operator=(aligned_resource_adaptor const&) = delete; aligned_resource_adaptor& operator=(aligned_resource_adaptor&&) = delete; /** * @brief Get the upstream memory resource. * * @return Upstream* pointer to a memory resource object. 
   */
  Upstream* get_upstream() const noexcept { return upstream_; }

  /**
   * @copydoc rmm::mr::device_memory_resource::supports_streams()
   */
  [[nodiscard]] bool supports_streams() const noexcept override
  {
    return upstream_->supports_streams();
  }

  /**
   * @brief Query whether the resource supports the get_mem_info API.
   *
   * @return bool true if the upstream resource supports get_mem_info, false otherwise.
   */
  [[nodiscard]] bool supports_get_mem_info() const noexcept override
  {
    return upstream_->supports_get_mem_info();
  }

  /**
   * @brief The default alignment threshold used by the adaptor.
   */
  static constexpr std::size_t default_alignment_threshold = 0;

 private:
  using lock_guard = std::lock_guard<std::mutex>;

  /**
   * @brief Allocates memory of size at least `bytes` using the upstream resource with the
   * specified alignment.
   *
   * @throws rmm::bad_alloc if the requested allocation could not be fulfilled
   * by the upstream resource.
   *
   * @param bytes The size, in bytes, of the allocation
   * @param stream Stream on which to perform the allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, cuda_stream_view stream) override
  {
    if (alignment_ == rmm::detail::CUDA_ALLOCATION_ALIGNMENT || bytes < alignment_threshold_) {
      return upstream_->allocate(bytes, stream);
    }
    auto const size = upstream_allocation_size(bytes);
    void* pointer   = upstream_->allocate(size, stream);
    // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast)
    auto const address         = reinterpret_cast<std::size_t>(pointer);
    auto const aligned_address = rmm::detail::align_up(address, alignment_);
    // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast,performance-no-int-to-ptr)
    void* aligned_pointer = reinterpret_cast<void*>(aligned_address);
    if (pointer != aligned_pointer) {
      lock_guard lock(mtx_);
      pointers_.emplace(aligned_pointer, pointer);
    }
    return aligned_pointer;
  }

  /**
   * @brief Free allocation of size `bytes` pointed to by `ptr`.
   *
   * @param ptr Pointer to be deallocated
   * @param bytes Size of the allocation
   * @param stream Stream on which to perform the deallocation
   */
  void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override
  {
    if (alignment_ == rmm::detail::CUDA_ALLOCATION_ALIGNMENT || bytes < alignment_threshold_) {
      upstream_->deallocate(ptr, bytes, stream);
    } else {
      {
        lock_guard lock(mtx_);
        auto const iter = pointers_.find(ptr);
        if (iter != pointers_.end()) {
          ptr = iter->second;
          pointers_.erase(iter);
        }
      }
      upstream_->deallocate(ptr, upstream_allocation_size(bytes), stream);
    }
  }

  /**
   * @brief Compare this resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equivalent
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto cast = dynamic_cast<aligned_resource_adaptor<Upstream> const*>(&other);
    return cast != nullptr && upstream_->is_equal(*cast->get_upstream()) &&
           alignment_ == cast->alignment_ && alignment_threshold_ == cast->alignment_threshold_;
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * The free size may not be fully allocatable because of alignment requirements.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
* @return std::pair containing free_size and total_size of memory */ [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info( cuda_stream_view stream) const override { return upstream_->get_mem_info(stream); } /** * @brief Calculate the allocation size needed from upstream to account for alignments of both the * size and the base pointer. * * @param bytes The requested allocation size. * @return Allocation size needed from upstream to align both the size and the base pointer. */ std::size_t upstream_allocation_size(std::size_t bytes) const { auto const aligned_size = rmm::detail::align_up(bytes, alignment_); return aligned_size + alignment_ - rmm::detail::CUDA_ALLOCATION_ALIGNMENT; } Upstream* upstream_; ///< The upstream resource used for satisfying allocation requests std::unordered_map<void*, void*> pointers_; ///< Map of aligned pointers to upstream pointers. std::size_t alignment_; ///< The size used for allocation alignment std::size_t alignment_threshold_; ///< The size above which allocations should be aligned mutable std::mutex mtx_; ///< Mutex for exclusive lock. }; /** @} */ // end of group } // namespace rmm::mr
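Usage note: a minimal sketch of the aligned adaptor configured for 4 KiB alignment (the alignment GPUDirect Storage prefers) with a 64 KiB threshold; both values are illustrative.

```cpp
#include <rmm/mr/device/aligned_resource_adaptor.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource cuda_mr;

  // Align allocations of 64 KiB or more to 4 KiB boundaries (useful e.g. for GPUDirect Storage);
  // smaller allocations keep the default 256-byte CUDA alignment.
  rmm::mr::aligned_resource_adaptor<rmm::mr::cuda_memory_resource> aligned_mr{
    &cuda_mr, 4096, 64 * 1024};

  void* ptr = aligned_mr.allocate(std::size_t{1} << 20);  // returned pointer is 4 KiB aligned
  aligned_mr.deallocate(ptr, std::size_t{1} << 20);
  return 0;
}
```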
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/cuda_async_view_memory_resource.hpp
/* * Copyright (c) 2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/cuda_util.hpp> #include <rmm/detail/dynamic_load_runtime.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/optional.h> #include <cuda_runtime_api.h> #include <cstddef> #include <limits> #if CUDART_VERSION >= 11020 // 11.2 introduced cudaMallocAsync #define RMM_CUDA_MALLOC_ASYNC_SUPPORT #endif namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief `device_memory_resource` derived class that uses `cudaMallocAsync`/`cudaFreeAsync` for * allocation/deallocation. */ class cuda_async_view_memory_resource final : public device_memory_resource { public: #ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT /** * @brief Constructs a cuda_async_view_memory_resource which uses an existing CUDA memory pool. * The provided pool is not owned by cuda_async_view_memory_resource and must remain valid * during the lifetime of the memory resource. * * @throws rmm::runtime_error if the CUDA version does not support `cudaMallocAsync` * * @param valid_pool_handle Handle to a CUDA memory pool which will be used to * serve allocation requests. */ cuda_async_view_memory_resource(cudaMemPool_t valid_pool_handle) : cuda_pool_handle_{[valid_pool_handle]() { RMM_EXPECTS(nullptr != valid_pool_handle, "Unexpected null pool handle."); return valid_pool_handle; }()} { // Check if cudaMallocAsync Memory pool supported auto const device = rmm::get_current_cuda_device(); int cuda_pool_supported{}; auto result = cudaDeviceGetAttribute(&cuda_pool_supported, cudaDevAttrMemoryPoolsSupported, device.value()); RMM_EXPECTS(result == cudaSuccess && cuda_pool_supported, "cudaMallocAsync not supported with this CUDA driver/runtime version"); } #endif #ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT /** * @brief Returns the underlying native handle to the CUDA pool * */ [[nodiscard]] cudaMemPool_t pool_handle() const noexcept { return cuda_pool_handle_; } #endif cuda_async_view_memory_resource() = default; cuda_async_view_memory_resource(cuda_async_view_memory_resource const&) = default; ///< @default_copy_constructor cuda_async_view_memory_resource(cuda_async_view_memory_resource&&) = default; ///< @default_move_constructor cuda_async_view_memory_resource& operator=(cuda_async_view_memory_resource const&) = default; ///< @default_copy_assignment{cuda_async_view_memory_resource} cuda_async_view_memory_resource& operator=(cuda_async_view_memory_resource&&) = default; ///< @default_move_assignment{cuda_async_view_memory_resource} /** * @brief Query whether the resource supports use of non-null CUDA streams for * allocation/deallocation. `cuda_memory_resource` does not support streams. * * @returns bool true */ [[nodiscard]] bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. 
   *
   * @return false
   */
  [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; }

 private:
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
  cudaMemPool_t cuda_pool_handle_{};
#endif

  /**
   * @brief Allocates memory of size at least \p bytes.
   *
   * The returned pointer will have at minimum 256 byte alignment.
   *
   * @param bytes The size of the allocation
   * @param stream Stream on which to perform allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    void* ptr{nullptr};
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
    if (bytes > 0) {
      RMM_CUDA_TRY_ALLOC(rmm::detail::async_alloc::cudaMallocFromPoolAsync(
        &ptr, bytes, pool_handle(), stream.value()));
    }
#else
    (void)bytes;
    (void)stream;
#endif
    return ptr;
  }

  /**
   * @brief Deallocate memory pointed to by \p p.
   *
   * @param ptr Pointer to be deallocated
   * @param bytes The size in bytes of the allocation. This must be equal to the
   * value of `bytes` that was passed to the `allocate` call that returned `p`.
   * @param stream Stream on which to perform deallocation
   */
  void do_deallocate(void* ptr,
                     [[maybe_unused]] std::size_t bytes,
                     rmm::cuda_stream_view stream) override
  {
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
    if (ptr != nullptr) {
      RMM_ASSERT_CUDA_SUCCESS(rmm::detail::async_alloc::cudaFreeAsync(ptr, stream.value()));
    }
#else
    (void)ptr;
    (void)bytes;
    (void)stream;
#endif
  }

  /**
   * @brief Compare this resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    return dynamic_cast<cuda_async_view_memory_resource const*>(&other) != nullptr;
  }

  /**
   * @brief Get free and available memory for memory resource
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    rmm::cuda_stream_view) const override
  {
    return std::make_pair(0, 0);
  }
};

/** @} */  // end of group
}  // namespace rmm::mr
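Usage note: a minimal sketch of viewing an existing CUDA memory pool with the resource above. It assumes CUDA 11.2 or newer (so `RMM_CUDA_MALLOC_ASYNC_SUPPORT` is defined) and obtains a pool handle via the CUDA runtime's `cudaDeviceGetDefaultMemPool`; device 0 and the 1 MiB size are illustrative.

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/cuda_async_view_memory_resource.hpp>

#include <cuda_runtime_api.h>

int main()
{
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
  // View an existing pool (here the device's default pool); the resource does not own it.
  cudaMemPool_t pool{};
  cudaDeviceGetDefaultMemPool(&pool, /*device=*/0);

  rmm::mr::cuda_async_view_memory_resource mr{pool};

  rmm::cuda_stream_view stream{};  // allocations are stream-ordered via cudaMallocFromPoolAsync
  void* ptr = mr.allocate(std::size_t{1} << 20, stream);
  mr.deallocate(ptr, std::size_t{1} << 20, stream);
#endif
  return 0;
}
```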
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/pool_memory_resource.hpp
/* * Copyright (c) 2020-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/cuda_util.hpp> #include <rmm/detail/error.hpp> #include <rmm/detail/logging_assert.hpp> #include <rmm/logger.hpp> #include <rmm/mr/device/detail/coalescing_free_list.hpp> #include <rmm/mr/device/detail/stream_ordered_memory_resource.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/iterator/counting_iterator.h> #include <thrust/iterator/transform_iterator.h> #include <thrust/optional.h> #include <fmt/core.h> #include <cuda_runtime_api.h> #include <algorithm> #include <cstddef> #include <iostream> #include <map> #include <mutex> #include <numeric> #include <set> #include <thread> #include <unordered_map> #include <vector> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ namespace detail { /** * @brief A helper class to remove the device_accessible property * * We want to be able to use the pool_memory_resource with an upstream that may not * be device accessible. To avoid rewriting the world, we allow conditionally removing * the cuda::mr::device_accessible property. * * @tparam PoolResource the pool_memory_resource class * @tparam Upstream memory_resource to use for allocating the pool. * @tparam Property The property we want to potentially remove. */ template <class PoolResource, class Upstream, class Property, class = void> struct maybe_remove_property {}; /** * @brief Specialization of maybe_remove_property to not propagate nonexistent properties */ template <class PoolResource, class Upstream, class Property> struct maybe_remove_property<PoolResource, Upstream, Property, cuda::std::enable_if_t<!cuda::has_property<Upstream, Property>>> { #ifdef __GNUC__ // GCC warns about compatibility issues with pre ISO C++ code #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wnon-template-friend" #endif // __GNUC__ /** * @brief Explicit removal of the friend function so we do not pretend to provide device * accessible memory */ friend void get_property(const PoolResource&, Property) = delete; #ifdef __GNUC__ #pragma GCC diagnostic pop #endif // __GNUC__ }; } // namespace detail /** * @brief A coalescing best-fit suballocator which uses a pool of memory allocated from * an upstream memory_resource. * * Allocation (do_allocate()) and deallocation (do_deallocate()) are thread-safe. Also, * this class is compatible with CUDA per-thread default stream. * * @tparam UpstreamResource memory_resource to use for allocating the pool. Implements * rmm::mr::device_memory_resource interface. 
*/ template <typename Upstream> class pool_memory_resource final : public detail:: maybe_remove_property<pool_memory_resource<Upstream>, Upstream, cuda::mr::device_accessible>, public detail::stream_ordered_memory_resource<pool_memory_resource<Upstream>, detail::coalescing_free_list>, public cuda::forward_property<pool_memory_resource<Upstream>, Upstream> { public: friend class detail::stream_ordered_memory_resource<pool_memory_resource<Upstream>, detail::coalescing_free_list>; /** * @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using * `upstream_mr`. * * @throws rmm::logic_error if `upstream_mr == nullptr` * @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a * multiple of pool_memory_resource::allocation_alignment bytes. * @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a * multiple of pool_memory_resource::allocation_alignment bytes. * * @param upstream_mr The memory_resource from which to allocate blocks for the pool. * @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to half of the * available memory on the current device. * @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all * of the available memory on the current device. */ explicit pool_memory_resource(Upstream* upstream_mr, thrust::optional<std::size_t> initial_pool_size = thrust::nullopt, thrust::optional<std::size_t> maximum_pool_size = thrust::nullopt) : upstream_mr_{[upstream_mr]() { RMM_EXPECTS(nullptr != upstream_mr, "Unexpected null upstream pointer."); return upstream_mr; }()} { RMM_EXPECTS(rmm::detail::is_aligned(initial_pool_size.value_or(0), rmm::detail::CUDA_ALLOCATION_ALIGNMENT), "Error, Initial pool size required to be a multiple of 256 bytes"); RMM_EXPECTS(rmm::detail::is_aligned(maximum_pool_size.value_or(0), rmm::detail::CUDA_ALLOCATION_ALIGNMENT), "Error, Maximum pool size required to be a multiple of 256 bytes"); initialize_pool(initial_pool_size, maximum_pool_size); } /** * @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using * `upstream_mr`. * * @throws rmm::logic_error if `upstream_mr == nullptr` * @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a * multiple of pool_memory_resource::allocation_alignment bytes. * @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a * multiple of pool_memory_resource::allocation_alignment bytes. * * @param upstream_mr The memory_resource from which to allocate blocks for the pool. * @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to half of the * available memory on the current device. * @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all * of the available memory on the current device. */ template <typename Upstream2 = Upstream, cuda::std::enable_if_t<cuda::mr::async_resource<Upstream2>, int> = 0> explicit pool_memory_resource(Upstream2& upstream_mr, thrust::optional<std::size_t> initial_pool_size = thrust::nullopt, thrust::optional<std::size_t> maximum_pool_size = thrust::nullopt) : pool_memory_resource(cuda::std::addressof(upstream_mr), initial_pool_size, maximum_pool_size) { } /** * @brief Destroy the `pool_memory_resource` and deallocate all memory it allocated using * the upstream resource. 
*/ ~pool_memory_resource() override { release(); } pool_memory_resource() = delete; pool_memory_resource(pool_memory_resource const&) = delete; pool_memory_resource(pool_memory_resource&&) = delete; pool_memory_resource& operator=(pool_memory_resource const&) = delete; pool_memory_resource& operator=(pool_memory_resource&&) = delete; /** * @brief Queries whether the resource supports use of non-null CUDA streams for * allocation/deallocation. * * @returns bool true. */ [[nodiscard]] bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool false */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; } /** * @brief Get the upstream memory_resource object. * * @return const reference to the upstream memory resource. */ [[nodiscard]] const Upstream& upstream_resource() const noexcept { return *upstream_mr_; } /** * @brief Get the upstream memory_resource object. * * @return UpstreamResource* the upstream memory resource. */ Upstream* get_upstream() const noexcept { return upstream_mr_; } /** * @brief Computes the size of the current pool * * Includes allocated as well as free memory. * * @return std::size_t The total size of the currently allocated pool. */ [[nodiscard]] std::size_t pool_size() const noexcept { return current_pool_size_; } protected: using free_list = detail::coalescing_free_list; ///< The free list implementation using block_type = free_list::block_type; ///< The type of block returned by the free list using typename detail::stream_ordered_memory_resource<pool_memory_resource<Upstream>, detail::coalescing_free_list>::split_block; using lock_guard = std::lock_guard<std::mutex>; ///< Type of lock used to synchronize access /** * @brief Get the maximum size of allocations supported by this memory resource * * Note this does not depend on the memory size of the device. It simply returns the maximum * value of `std::size_t` * * @return std::size_t The maximum size of a single allocation supported by this memory resource */ [[nodiscard]] std::size_t get_maximum_allocation_size() const { return std::numeric_limits<std::size_t>::max(); } /** * @brief Try to expand the pool by allocating a block of at least `min_size` bytes from * upstream * * Attempts to allocate `try_size` bytes from upstream. If it fails, it iteratively reduces the * attempted size by half until `min_size`, returning the allocated block once it succeeds. * * @throws rmm::bad_alloc if `min_size` bytes cannot be allocated from upstream or maximum pool * size is exceeded. * * @param try_size The initial requested size to try allocating. * @param min_size The minimum requested size to try allocating. * @param stream The stream on which the memory is to be used. 
* @return block_type a block of at least `min_size` bytes */ block_type try_to_expand(std::size_t try_size, std::size_t min_size, cuda_stream_view stream) { while (try_size >= min_size) { auto block = block_from_upstream(try_size, stream); if (block.has_value()) { current_pool_size_ += block.value().size(); return block.value(); } if (try_size == min_size) { break; // only try `size` once } try_size = std::max(min_size, try_size / 2); } RMM_LOG_ERROR("[A][Stream {}][Upstream {}B][FAILURE maximum pool size exceeded]", fmt::ptr(stream.value()), min_size); RMM_FAIL("Maximum pool size exceeded", rmm::out_of_memory); } /** * @brief Allocate initial memory for the pool * * If initial_size is unset, then queries the upstream memory resource for available memory if * upstream supports `get_mem_info`, or queries the device (using CUDA API) for available memory * if not. Then attempts to initialize to half the available memory. * * If initial_size is set, then tries to initialize the pool to that size. * * @param initial_size The optional initial size for the pool * @param maximum_size The optional maximum size for the pool */ // NOLINTNEXTLINE(bugprone-easily-swappable-parameters) void initialize_pool(thrust::optional<std::size_t> initial_size, thrust::optional<std::size_t> maximum_size) { auto const try_size = [&]() { if (not initial_size.has_value()) { auto const [free, total] = (get_upstream()->supports_get_mem_info()) ? get_upstream()->get_mem_info(cuda_stream_legacy) : rmm::detail::available_device_memory(); return rmm::detail::align_up(std::min(free, total / 2), rmm::detail::CUDA_ALLOCATION_ALIGNMENT); } return initial_size.value(); }(); current_pool_size_ = 0; // try_to_expand will set this if it succeeds maximum_pool_size_ = maximum_size; RMM_EXPECTS(try_size <= maximum_pool_size_.value_or(std::numeric_limits<std::size_t>::max()), "Initial pool size exceeds the maximum pool size!"); if (try_size > 0) { auto const block = try_to_expand(try_size, try_size, cuda_stream_legacy); this->insert_block(block, cuda_stream_legacy); } } /** * @brief Allocate space from upstream to supply the suballocation pool and return * a sufficiently sized block. * * @param size The minimum size to allocate * @param blocks The free list (ignored in this implementation) * @param stream The stream on which the memory is to be used. * @return block_type a block of at least `size` bytes */ block_type expand_pool(std::size_t size, free_list& blocks, cuda_stream_view stream) { // Strategy: If maximum_pool_size_ is set, then grow geometrically, e.g. by halfway to the // limit each time. If it is not set, grow exponentially, e.g. by doubling the pool size each // time. Upon failure, attempt to back off exponentially, e.g. by half the attempted size, // until either success or the attempt is less than the requested size. return try_to_expand(size_to_grow(size), size, stream); } /** * @brief Given a minimum size, computes an appropriate size to grow the pool. * * Strategy is to try to grow the pool by half the difference between the configured maximum * pool size and the current pool size, if the maximum pool size is set. If it is not set, try * to double the current pool size. * * Returns 0 if the requested size cannot be satisfied. * * @param size The size of the minimum allocation immediately needed * @return std::size_t The computed size to grow the pool. 
*/ [[nodiscard]] std::size_t size_to_grow(std::size_t size) const { if (maximum_pool_size_.has_value()) { auto const unaligned_remaining = maximum_pool_size_.value() - pool_size(); using rmm::detail::align_up; auto const remaining = align_up(unaligned_remaining, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); auto const aligned_size = align_up(size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); return (aligned_size <= remaining) ? std::max(aligned_size, remaining / 2) : 0; } return std::max(size, pool_size()); }; /** * @brief Allocate a block from upstream to expand the suballocation pool. * * @param size The size in bytes to allocate from the upstream resource * @param stream The stream on which the memory is to be used. * @return block_type The allocated block */ thrust::optional<block_type> block_from_upstream(std::size_t size, cuda_stream_view stream) { RMM_LOG_DEBUG("[A][Stream {}][Upstream {}B]", fmt::ptr(stream.value()), size); if (size == 0) { return {}; } try { void* ptr = get_upstream()->allocate_async(size, stream); return thrust::optional<block_type>{ *upstream_blocks_.emplace(static_cast<char*>(ptr), size, true).first}; } catch (std::exception const& e) { return thrust::nullopt; } } /** * @brief Splits `block` if necessary to return a pointer to memory of `size` bytes. * * If the block is split, the remainder is returned to the pool. * * @param block The block to allocate from. * @param size The size in bytes of the requested allocation. * @return A pair comprising the allocated pointer and any unallocated remainder of the input * block. */ split_block allocate_from_block(block_type const& block, std::size_t size) { block_type const alloc{block.pointer(), size, block.is_head()}; #ifdef RMM_POOL_TRACK_ALLOCATIONS allocated_blocks_.insert(alloc); #endif auto rest = (block.size() > size) // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) ? block_type{block.pointer() + size, block.size() - size, false} : block_type{}; return {alloc, rest}; } /** * @brief Finds, frees and returns the block associated with pointer `ptr`. * * @param ptr The pointer to the memory to free. * @param size The size of the memory to free. Must be equal to the original allocation size. * @return The (now freed) block associated with `p`. The caller is expected to return the block * to the pool. */ block_type free_block(void* ptr, std::size_t size) noexcept { #ifdef RMM_POOL_TRACK_ALLOCATIONS if (ptr == nullptr) return block_type{}; auto const iter = allocated_blocks_.find(static_cast<char*>(ptr)); RMM_LOGGING_ASSERT(iter != allocated_blocks_.end()); auto block = *iter; RMM_LOGGING_ASSERT(block.size() == rmm::detail::align_up(size, allocation_alignment)); allocated_blocks_.erase(iter); return block; #else auto const iter = upstream_blocks_.find(static_cast<char*>(ptr)); return block_type{static_cast<char*>(ptr), size, (iter != upstream_blocks_.end())}; #endif } /** * @brief Free all memory allocated from the upstream memory_resource. * */ void release() { lock_guard lock(this->get_mutex()); for (auto block : upstream_blocks_) { get_upstream()->deallocate(block.pointer(), block.size()); } upstream_blocks_.clear(); #ifdef RMM_POOL_TRACK_ALLOCATIONS allocated_blocks_.clear(); #endif current_pool_size_ = 0; } #ifdef RMM_DEBUG_PRINT /** * @brief Print debugging information about all blocks in the pool. * * @note This function is intended only for use in debugging. 
   *
   */
  void print()
  {
    lock_guard lock(this->get_mutex());

    auto const [free, total] = upstream_mr_->get_mem_info(rmm::cuda_stream_default);
    std::cout << "GPU free memory: " << free << " total: " << total << "\n";

    std::cout << "upstream_blocks: " << upstream_blocks_.size() << "\n";
    std::size_t upstream_total{0};

    for (auto blocks : upstream_blocks_) {
      blocks.print();
      upstream_total += blocks.size();
    }
    std::cout << "total upstream: " << upstream_total << " B\n";

#ifdef RMM_POOL_TRACK_ALLOCATIONS
    std::cout << "allocated_blocks: " << allocated_blocks_.size() << "\n";
    for (auto block : allocated_blocks_)
      block.print();
#endif

    this->print_free_blocks();
  }
#endif

  /**
   * @brief Get the largest available block size and total free size in the specified free list
   *
   * This is intended only for debugging
   *
   * @param blocks The free list from which to return the summary
   * @return std::pair<std::size_t, std::size_t> Pair of largest available block, total free size
   */
  std::pair<std::size_t, std::size_t> free_list_summary(free_list const& blocks)
  {
    std::size_t largest{};
    std::size_t total{};
    std::for_each(blocks.cbegin(), blocks.cend(), [&largest, &total](auto const& block) {
      total += block.size();
      largest = std::max(largest, block.size());
    });
    return {largest, total};
  }

  /**
   * @brief Get free and available memory for memory resource
   *
   * @throws nothing
   *
   * @param stream to execute on
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    cuda_stream_view stream) const override
  {
    // TODO implement this
    return {0, 0};
  }

 private:
  Upstream* upstream_mr_;  // The "heap" to allocate the pool from
  std::size_t current_pool_size_{};
  thrust::optional<std::size_t> maximum_pool_size_{};

#ifdef RMM_POOL_TRACK_ALLOCATIONS
  std::set<block_type, rmm::mr::detail::compare_blocks<block_type>> allocated_blocks_;
#endif

  // blocks allocated from upstream
  std::set<block_type, rmm::mr::detail::compare_blocks<block_type>> upstream_blocks_;
};

/** @} */  // end of group

}  // namespace rmm::mr
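For reference, a minimal usage sketch (not part of the header above): it builds a `pool_memory_resource` on top of a `cuda_memory_resource`, installs it as the current device resource, and performs one stream-ordered allocation. The pool sizes are illustrative values only, not recommendations.

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

int main()
{
  rmm::mr::cuda_memory_resource upstream;  // allocates with cudaMalloc/cudaFree
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool{
    &upstream, 1u << 28 /* 256 MiB initial pool (illustrative) */, 1u << 30 /* 1 GiB maximum */};

  rmm::mr::set_current_device_resource(&pool);  // route RMM allocations through the pool

  // Stream-ordered sub-allocation from the pool and matching deallocation.
  void* ptr = pool.allocate(1024, rmm::cuda_stream_default);
  pool.deallocate(ptr, 1024, rmm::cuda_stream_default);
  return 0;
}
```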
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/owning_wrapper.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/mr/device/device_memory_resource.hpp> #include <functional> #include <iostream> #include <memory> #include <utility> namespace rmm::mr { namespace detail { /** * @brief Converts a tuple into a parameter pack. * * This helper function for make_resource allows passing the upstreams as a * list of arguments to the Resource's constructor. * * @tparam Resource The resource type to create * @tparam UpstreamTuple A tuple of shared pointers of the types of the upstream resources * @tparam Args The types of the arguments to the resource's constructor * @param upstreams Tuple of `std::shared_ptr`s to the upstreams used by the wrapped resource, in * the same order as expected by `Resource`s constructor. * @param args Function parameter pack of arguments to forward to the Resource's * constructor * @return std::unique_ptr<Resource> A unique pointer to the created resource. */ template <typename Resource, typename UpstreamTuple, std::size_t... Indices, typename... Args> auto make_resource_impl(UpstreamTuple const& upstreams, std::index_sequence<Indices...>, Args&&... args) { return std::make_unique<Resource>(std::get<Indices>(upstreams).get()..., std::forward<Args>(args)...); } /** * @brief Create a `std::unique_ptr` to a `Resource` with the given upstreams and arguments * * @tparam Resource The resource type to create * @tparam Upstreams The types of the upstream resources * @tparam Args The types of the arguments to the resource's constructor * @param upstreams Tuple of `std::shared_ptr`s to the upstreams used by the wrapped resource, in * the same order as expected by `Resource`s constructor. * @param args Function parameter pack of arguments to forward to the wrapped resource's * constructor * @return std::unique_ptr<Resource> A unique pointer to the created resource */ template <typename Resource, typename... Upstreams, typename... Args> auto make_resource(std::tuple<std::shared_ptr<Upstreams>...> const& upstreams, Args&&... args) { return make_resource_impl<Resource>( upstreams, std::index_sequence_for<Upstreams...>{}, std::forward<Args>(args)...); } } // namespace detail /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Resource adaptor that maintains the lifetime of upstream resources. * * Many `device_memory_resource` derived types allocate memory from another "upstream" resource. * E.g., `pool_memory_resource` allocates its pool from an upstream resource. Typically, a resource * does not own its upstream, and therefore it is the user's responsibility to maintain the lifetime * of the upstream resource. This can be inconvenient and error prone, especially for resources with * complex upstreams that may themselves also have an upstream. * * `owning_wrapper` simplifies lifetime management of a resource, `wrapped`, by taking shared * ownership of all upstream resources via a `std::shared_ptr`. 
* * For convenience, it is recommended to use the `make_owning_wrapper` factory instead of * constructing an `owning_wrapper` directly. * * Example: * \code{.cpp} * auto cuda = std::make_shared<rmm::mr::cuda_memory_resource>(); * auto pool = rmm::mr::make_owning_wrapper<rmm::mr::pool_memory_resource>(cuda,initial_pool_size, * max_pool_size); * // The `cuda` resource will be kept alive for the lifetime of `pool` and automatically be * // destroyed after `pool` is destroyed * \endcode * * @tparam Resource Type of the wrapped resource * @tparam Upstreams Template parameter pack of the types of the upstream resources used by * `Resource` */ template <typename Resource, typename... Upstreams> class owning_wrapper : public device_memory_resource { public: using upstream_tuple = std::tuple<std::shared_ptr<Upstreams>...>; ///< Tuple of upstream memory resources /** * @brief Constructs the wrapped resource using the provided upstreams and any additional * arguments forwarded to the wrapped resources constructor. * * `Resource` is required to have a constructor whose first argument(s) are raw pointers to its * upstream resources in the same order as `upstreams`, followed by any additional arguments in * the same order as `args`. * * Example: * \code{.cpp} * template <typename Upstream1, typename Upstream2> * class example_resource{ * example_resource(Upstream1 * u1, Upstream2 * u2, int n, float f); * }; * * using cuda = rmm::mr::cuda_memory_resource; * using example = example_resource<cuda,cuda>; * using wrapped_example = rmm::mr::owning_wrapper<example, cuda, cuda>; * auto cuda_mr = std::make_shared<cuda>(); * * // Constructs an `example_resource` wrapped by an `owning_wrapper` taking shared ownership of * //`cuda_mr` and using it as both of `example_resource`s upstream resources. Forwards the * // arguments `42` and `3.14` to the additional `n` and `f` arguments of `example_resources` * // constructor. * wrapped_example w{std::make_tuple(cuda_mr,cuda_mr), 42, 3.14}; * \endcode * * @tparam Args Template parameter pack to forward to the wrapped resource's constructor * @param upstreams Tuple of `std::shared_ptr`s to the upstreams used by the wrapped resource, in * the same order as expected by `Resource`s constructor. * @param args Function parameter pack of arguments to forward to the wrapped resource's * constructor */ template <typename... Args> owning_wrapper(upstream_tuple upstreams, Args&&... args) : upstreams_{std::move(upstreams)}, wrapped_{detail::make_resource<Resource>(upstreams_, std::forward<Args>(args)...)} { } /** * @briefreturn{A constant reference to the wrapped resource} */ [[nodiscard]] Resource const& wrapped() const noexcept { return *wrapped_; } /** * @briefreturn{A reference to the wrapped resource} */ [[nodiscard]] Resource& wrapped() noexcept { return *wrapped_; } /** * @copydoc rmm::mr::device_memory_resource::supports_streams() */ [[nodiscard]] bool supports_streams() const noexcept override { return wrapped().supports_streams(); } /** * @briefreturn{true if the wrapped resource supports get_mem_info, false otherwise} */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return wrapped().supports_get_mem_info(); } private: /** * @brief Allocates memory using the wrapped resource. * * @throws rmm::bad_alloc if the requested allocation could not be fulfilled by the wrapped * resource. 
   *
   * @param bytes The size, in bytes, of the allocation
   * @param stream Stream on which to perform the allocation
   * @return void* Pointer to the memory allocated by the wrapped resource
   */
  void* do_allocate(std::size_t bytes, cuda_stream_view stream) override
  {
    return wrapped().allocate(bytes, stream);
  }

  /**
   * @brief Returns an allocation to the wrapped resource.
   *
   * `ptr` must have been returned from a prior call to `do_allocate(bytes)`.
   *
   * @param ptr Pointer to the allocation to free.
   * @param bytes Size of the allocation
   * @param stream Stream on which to deallocate the memory
   */
  void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override
  {
    wrapped().deallocate(ptr, bytes, stream);
  }

  /**
   * @brief Compare if this resource is equal to another.
   *
   * Two resources are equal if memory allocated by one resource can be freed by the other.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equal
   * @return false If the two resources are not equal
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    if (this == &other) { return true; }
    auto casted = dynamic_cast<owning_wrapper<Resource, Upstreams...> const*>(&other);
    if (nullptr != casted) { return wrapped().is_equal(casted->wrapped()); }
    return wrapped().is_equal(other);
  }

  /**
   * @brief Get free and available memory from upstream resource.
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @param stream Stream on which to get the mem info.
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    cuda_stream_view stream) const override
  {
    return wrapped().get_mem_info(stream);
  }

  upstream_tuple upstreams_;           ///< The owned upstream resources
  std::unique_ptr<Resource> wrapped_;  ///< The wrapped resource that uses the upstreams
};

/**
 * @brief Constructs a resource of type `Resource` wrapped in an `owning_wrapper` using `upstreams`
 * as the upstream resources and `args` as the additional parameters for the constructor of
 * `Resource`.
 *
 * \code{.cpp}
 * template <typename Upstream1, typename Upstream2>
 * class example_resource{
 *   example_resource(Upstream1 * u1, Upstream2 * u2, int n, float f);
 * };
 *
 * auto cuda_mr = std::make_shared<rmm::mr::cuda_memory_resource>();
 * auto cuda_upstreams = std::make_tuple(cuda_mr, cuda_mr);
 *
 * // Constructs an `example_resource<rmm::mr::cuda_memory_resource, rmm::mr::cuda_memory_resource>`
 * // wrapped by an `owning_wrapper` taking shared ownership of `cuda_mr` and using it as both of
 * // `example_resource`s upstream resources. Forwards the arguments `42` and `3.14` to the
 * // additional `n` and `f` arguments of `example_resource` constructor.
 * auto wrapped_example = rmm::mr::make_owning_wrapper<example_resource>(cuda_upstreams, 42, 3.14);
 * \endcode
 *
 * @tparam Resource Template template parameter specifying the type of the wrapped resource to
 * construct
 * @tparam Upstreams Types of the upstream resources
 * @tparam Args Types of the arguments used in `Resource`s constructor
 * @param upstreams Tuple of `std::shared_ptr`s to the upstreams used by the wrapped resource, in
 * the same order as expected by `Resource`s constructor.
 * @param args Function parameter pack of arguments to forward to the wrapped resource's
 * constructor
 * @return An `owning_wrapper` wrapping a newly constructed `Resource<Upstreams...>` and
 * `upstreams`.
 */
template <template <typename...> class Resource, typename... Upstreams, typename... Args>
auto make_owning_wrapper(std::tuple<std::shared_ptr<Upstreams>...> upstreams, Args&&... args)
{
  return std::make_shared<owning_wrapper<Resource<Upstreams...>, Upstreams...>>(
    std::move(upstreams), std::forward<Args>(args)...);
}

/**
 * @brief Additional convenience factory for `owning_wrapper` when `Resource` has only a single
 * upstream resource.
 *
 * When a resource has only a single upstream, it can be inconvenient to construct a `std::tuple`
 * of the upstream resource. This factory allows specifying the single upstream as just a
 * `std::shared_ptr`.
 *
 * @tparam Resource Type of the wrapped resource to construct
 * @tparam Upstream Type of the single upstream resource
 * @tparam Args Types of the arguments used in `Resource`s constructor
 * @param upstream `std::shared_ptr` to the upstream resource
 * @param args Function parameter pack of arguments to forward to the wrapped resource's
 * constructor
 * @return An `owning_wrapper` wrapping a newly constructed `Resource<Upstream>` and `upstream`.
 */
template <template <typename> class Resource, typename Upstream, typename... Args>
auto make_owning_wrapper(std::shared_ptr<Upstream> upstream, Args&&... args)
{
  return make_owning_wrapper<Resource>(std::make_tuple(std::move(upstream)),
                                       std::forward<Args>(args)...);
}

/** @} */  // end of group
}  // namespace rmm::mr
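A compile-oriented sketch of the single-upstream factory, complementing the examples already given in the class documentation; the pool size is an illustrative placeholder and the snippet assumes the wrapper outlives its use as the current device resource.

```cpp
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/owning_wrapper.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

#include <memory>

int main()
{
  auto cuda = std::make_shared<rmm::mr::cuda_memory_resource>();

  // `pool` keeps `cuda` alive for as long as the wrapper itself exists.
  auto pool = rmm::mr::make_owning_wrapper<rmm::mr::pool_memory_resource>(cuda, 1u << 28);

  // The owning_wrapper is itself a device_memory_resource, so it can be installed directly.
  rmm::mr::set_current_device_resource(pool.get());
  return 0;
}
```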
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/device_memory_resource.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <cuda/memory_resource> #include <cstddef> #include <utility> namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief Base class for all libcudf device memory allocation. * * This class serves as the interface that all custom device memory * implementations must satisfy. * * There are two private, pure virtual functions that all derived classes must implement: *`do_allocate` and `do_deallocate`. Optionally, derived classes may also override `is_equal`. By * default, `is_equal` simply performs an identity comparison. * * The public, non-virtual functions `allocate`, `deallocate`, and `is_equal` simply call the * private virtual functions. The reason for this is to allow implementing shared, default behavior * in the base class. For example, the base class' `allocate` function may log every allocation, no * matter what derived class implementation is used. * * The `allocate` and `deallocate` APIs and implementations provide stream-ordered memory * allocation. This allows optimizations such as re-using memory deallocated on the same stream * without the overhead of stream synchronization. * * A call to `allocate(bytes, stream_a)` (on any derived class) returns a pointer that is valid to * use on `stream_a`. Using the memory on a different stream (say `stream_b`) is Undefined Behavior * unless the two streams are first synchronized, for example by using * `cudaStreamSynchronize(stream_a)` or by recording a CUDA event on `stream_a` and then * calling `cudaStreamWaitEvent(stream_b, event)`. * * The stream specified to deallocate() should be a stream on which it is valid to use the * deallocated memory immediately for another allocation. Typically this is the stream on which the * allocation was *last* used before the call to deallocate(). The passed stream may be used * internally by a device_memory_resource for managing available memory with minimal * synchronization, and it may also be synchronized at a later time, for example using a call to * `cudaStreamSynchronize()`. * * For this reason, it is Undefined Behavior to destroy a CUDA stream that is passed to * deallocate(). If the stream on which the allocation was last used has been destroyed before * calling deallocate() or it is known that it will be destroyed, it is likely better to synchronize * the stream (before destroying it) and then pass a different stream to deallocate() (e.g. the * default stream). * * A device_memory_resource should only be used when the active CUDA device is the same device * that was active when the device_memory_resource was created. Otherwise behavior is undefined. * * Creating a device_memory_resource for each device requires care to set the current device * before creating each resource, and to maintain the lifetime of the resources as long as they * are set as per-device resources. 
Here is an example loop that creates `unique_ptr`s to * pool_memory_resource objects for each device and sets them as the per-device resource for that * device. * * @code{.cpp} * std::vector<unique_ptr<pool_memory_resource>> per_device_pools; * for(int i = 0; i < N; ++i) { * cudaSetDevice(i); * per_device_pools.push_back(std::make_unique<pool_memory_resource>()); * set_per_device_resource(cuda_device_id{i}, &per_device_pools.back()); * } * @endcode */ class device_memory_resource { public: device_memory_resource() = default; virtual ~device_memory_resource() = default; device_memory_resource(device_memory_resource const&) = default; ///< @default_copy_constructor device_memory_resource(device_memory_resource&&) noexcept = default; ///< @default_move_constructor device_memory_resource& operator=(device_memory_resource const&) = default; ///< @default_copy_assignment{device_memory_resource} device_memory_resource& operator=(device_memory_resource&&) noexcept = default; ///< @default_move_assignment{device_memory_resource} /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * If supported, this operation may optionally be executed on a stream. * Otherwise, the stream is ignored and the null stream is used. * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified @p stream. * * @param bytes The size of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ void* allocate(std::size_t bytes, cuda_stream_view stream = cuda_stream_view{}) { return do_allocate(bytes, stream); } /** * @brief Deallocate memory pointed to by \p p. * * `p` must have been returned by a prior call to `allocate(bytes, stream)` on * a `device_memory_resource` that compares equal to `*this`, and the storage * it points to must not yet have been deallocated, otherwise behavior is * undefined. * * If supported, this operation may optionally be executed on a stream. * Otherwise, the stream is ignored and the null stream is used. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation */ void deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream = cuda_stream_view{}) { do_deallocate(ptr, bytes, stream); } /** * @brief Compare this resource to another. * * Two device_memory_resources compare equal if and only if memory allocated * from one device_memory_resource can be deallocated from the other and vice * versa. * * By default, simply checks if \p *this and \p other refer to the same * object, i.e., does not check if they are two objects of the same class. * * @param other The other resource to compare to * @returns If the two resources are equivalent */ [[nodiscard]] bool is_equal(device_memory_resource const& other) const noexcept { return do_is_equal(other); } /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified `stream`. 
* * @param bytes The size of the allocation * @param alignment The expected alignment of the allocation * @return void* Pointer to the newly allocated memory */ void* allocate(std::size_t bytes, std::size_t alignment) { return do_allocate(rmm::detail::align_up(bytes, alignment), cuda_stream_view{}); } /** * @brief Deallocate memory pointed to by \p p. * * `p` must have been returned by a prior call to `allocate(bytes, stream)` on * a `device_memory_resource` that compares equal to `*this`, and the storage * it points to must not yet have been deallocated, otherwise behavior is * undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param alignment The alignment that was passed to the `allocate` call that returned `p` */ void deallocate(void* ptr, std::size_t bytes, std::size_t alignment) { do_deallocate(ptr, rmm::detail::align_up(bytes, alignment), cuda_stream_view{}); } /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified `stream`. * * @param bytes The size of the allocation * @param alignment The expected alignment of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ void* allocate_async(std::size_t bytes, std::size_t alignment, cuda_stream_view stream) { return do_allocate(rmm::detail::align_up(bytes, alignment), stream); } /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * @throws rmm::bad_alloc When the requested `bytes` cannot be allocated on * the specified `stream`. * * @param bytes The size of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ void* allocate_async(std::size_t bytes, cuda_stream_view stream) { return do_allocate(bytes, stream); } /** * @brief Deallocate memory pointed to by \p p. * * `p` must have been returned by a prior call to `allocate(bytes, stream)` on * a `device_memory_resource` that compares equal to `*this`, and the storage * it points to must not yet have been deallocated, otherwise behavior is * undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param alignment The alignment that was passed to the `allocate` call that returned `p` * @param stream Stream on which to perform allocation */ void deallocate_async(void* ptr, std::size_t bytes, std::size_t alignment, cuda_stream_view stream) { do_deallocate(ptr, rmm::detail::align_up(bytes, alignment), stream); } /** * @brief Deallocate memory pointed to by \p p. * * `p` must have been returned by a prior call to `allocate(bytes, stream)` on * a `device_memory_resource` that compares equal to `*this`, and the storage * it points to must not yet have been deallocated, otherwise behavior is * undefined. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. 
* @param stream Stream on which to perform allocation */ void deallocate_async(void* ptr, std::size_t bytes, cuda_stream_view stream) { do_deallocate(ptr, bytes, stream); } /** * @brief Comparison operator with another device_memory_resource * * @param other The other resource to compare to * @return true If the two resources are equivalent * @return false If the two resources are not equivalent */ [[nodiscard]] bool operator==(device_memory_resource const& other) const noexcept { return do_is_equal(other); } /** * @brief Comparison operator with another device_memory_resource * * @param other The other resource to compare to * @return false If the two resources are equivalent * @return true If the two resources are not equivalent */ [[nodiscard]] bool operator!=(device_memory_resource const& other) const noexcept { return !do_is_equal(other); } /** * @brief Query whether the resource supports use of non-null CUDA streams for * allocation/deallocation. * * @returns bool true if the resource supports non-null CUDA streams. */ [[nodiscard]] virtual bool supports_streams() const noexcept = 0; /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the resource supports get_mem_info, false otherwise. */ [[nodiscard]] virtual bool supports_get_mem_info() const noexcept = 0; /** * @brief Queries the amount of free and total memory for the resource. * * @param stream the stream whose memory manager we want to retrieve * * @returns a pair containing the free memory in bytes in .first and total amount of memory in * .second */ [[nodiscard]] std::pair<std::size_t, std::size_t> get_mem_info(cuda_stream_view stream) const { return do_get_mem_info(stream); } /** * @brief Enables the `cuda::mr::device_accessible` property * * This property declares that a `device_memory_resource` provides device accessible memory */ friend void get_property(device_memory_resource const&, cuda::mr::device_accessible) noexcept {} private: /** * @brief Allocates memory of size at least \p bytes. * * The returned pointer will have at minimum 256 byte alignment. * * If supported, this operation may optionally be executed on a stream. * Otherwise, the stream is ignored and the null stream is used. * * @param bytes The size of the allocation * @param stream Stream on which to perform allocation * @return void* Pointer to the newly allocated memory */ virtual void* do_allocate(std::size_t bytes, cuda_stream_view stream) = 0; /** * @brief Deallocate memory pointed to by \p p. * * If supported, this operation may optionally be executed on a stream. * Otherwise, the stream is ignored and the null stream is used. * * @param ptr Pointer to be deallocated * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation */ virtual void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) = 0; /** * @brief Compare this resource to another. * * Two device_memory_resources compare equal if and only if memory allocated * from one device_memory_resource can be deallocated from the other and vice * versa. * * By default, simply checks if \p *this and \p other refer to the same * object, i.e., does not check if they are two objects of the same class. 
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  [[nodiscard]] virtual bool do_is_equal(device_memory_resource const& other) const noexcept
  {
    return this == &other;
  }

  /**
   * @brief Get free and available memory for memory resource
   *
   * @throws std::runtime_error if we could not get free / total memory
   *
   * @param stream the stream being executed on
   * @return std::pair with available and free memory for resource
   */
  [[nodiscard]] virtual std::pair<std::size_t, std::size_t> do_get_mem_info(
    cuda_stream_view stream) const = 0;
};

static_assert(cuda::mr::async_resource_with<device_memory_resource, cuda::mr::device_accessible>);

/** @} */  // end of group
}  // namespace rmm::mr
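The pure virtual functions above define the customization surface for derived resources. Below is a hypothetical minimal implementation (not part of RMM) that forwards to `cudaMalloc`/`cudaFree` and ignores the stream; the class name and its behavior are illustrative only.

```cpp
#include <rmm/cuda_stream_view.hpp>
#include <rmm/detail/error.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>

#include <cuda_runtime_api.h>

class my_malloc_resource final : public rmm::mr::device_memory_resource {
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view) override
  {
    void* ptr{nullptr};
    // Simplified: a production resource would map failures to rmm::bad_alloc.
    RMM_CUDA_TRY(cudaMalloc(&ptr, bytes));
    return ptr;
  }

  void do_deallocate(void* ptr, std::size_t, rmm::cuda_stream_view) override
  {
    RMM_ASSERT_CUDA_SUCCESS(cudaFree(ptr));
  }

  // cudaMalloc/cudaFree are synchronous, so streams are not honored here.
  [[nodiscard]] bool supports_streams() const noexcept override { return false; }
  [[nodiscard]] bool supports_get_mem_info() const noexcept override { return true; }

  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    rmm::cuda_stream_view) const override
  {
    std::size_t free{};
    std::size_t total{};
    RMM_CUDA_TRY(cudaMemGetInfo(&free, &total));
    return {free, total};
  }
};
```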
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/failure_callback_resource_adaptor.hpp
/* * Copyright (c) 2020-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cstddef> #include <functional> #include <utility> namespace rmm::mr { /** * @addtogroup device_resource_adaptors * @{ * @file */ /** * @brief Callback function type used by failure_callback_resource_adaptor * * The resource adaptor calls this function when a memory allocation throws a specified exception * type. The function decides whether the resource adaptor should try to allocate the memory again * or re-throw the exception. * * The callback function signature is: * `bool failure_callback_t(std::size_t bytes, void* callback_arg)` * * The callback function is passed two parameters: `bytes` is the size of the failed memory * allocation and `arg` is the extra argument passed to the constructor of the * `failure_callback_resource_adaptor`. The callback function returns a Boolean where true means to * retry the memory allocation and false means to re-throw the exception. */ using failure_callback_t = std::function<bool(std::size_t, void*)>; /** * @brief A device memory resource that calls a callback function when allocations * throw a specified exception type. * * An instance of this resource must be constructed with an existing, upstream * resource in order to satisfy allocation requests. * * The callback function takes an allocation size and a callback argument and returns * a bool representing whether to retry the allocation (true) or re-throw the caught exception * (false). * * When implementing a callback function for allocation retry, care must be taken to avoid an * infinite loop. The following example makes sure to only retry the allocation once: * * @code{.cpp} * using failure_callback_adaptor = * rmm::mr::failure_callback_resource_adaptor<rmm::mr::device_memory_resource>; * * bool failure_handler(std::size_t bytes, void* arg) * { * bool& retried = *reinterpret_cast<bool*>(arg); * if (!retried) { * retried = true; * return true; // First time we request an allocation retry * } * return false; // Second time we let the adaptor throw std::bad_alloc * } * * int main() * { * bool retried{false}; * failure_callback_adaptor mr{ * rmm::mr::get_current_device_resource(), failure_handler, &retried * }; * rmm::mr::set_current_device_resource(&mr); * } * @endcode * * @tparam Upstream The type of the upstream resource used for allocation/deallocation. * @tparam ExceptionType The type of exception that this adaptor should respond to */ template <typename Upstream, typename ExceptionType = rmm::out_of_memory> class failure_callback_resource_adaptor final : public device_memory_resource { public: using exception_type = ExceptionType; ///< The type of exception this object catches/throws /** * @brief Construct a new `failure_callback_resource_adaptor` using `upstream` to satisfy * allocation requests. 
* * @throws rmm::logic_error if `upstream == nullptr` * * @param upstream The resource used for allocating/deallocating device memory * @param callback Callback function @see failure_callback_t * @param callback_arg Extra argument passed to `callback` */ failure_callback_resource_adaptor(Upstream* upstream, failure_callback_t callback, void* callback_arg) : upstream_{upstream}, callback_{std::move(callback)}, callback_arg_{callback_arg} { RMM_EXPECTS(nullptr != upstream, "Unexpected null upstream resource pointer."); } failure_callback_resource_adaptor() = delete; ~failure_callback_resource_adaptor() override = default; failure_callback_resource_adaptor(failure_callback_resource_adaptor const&) = delete; failure_callback_resource_adaptor& operator=(failure_callback_resource_adaptor const&) = delete; failure_callback_resource_adaptor(failure_callback_resource_adaptor&&) noexcept = default; ///< @default_move_constructor failure_callback_resource_adaptor& operator=(failure_callback_resource_adaptor&&) noexcept = default; ///< @default_move_assignment{failure_callback_resource_adaptor} /** * @briefreturn{Pointer to the upstream resource} */ Upstream* get_upstream() const noexcept { return upstream_; } /** * @brief Checks whether the upstream resource supports streams. * * @return true The upstream resource supports streams * @return false The upstream resource does not support streams. */ [[nodiscard]] bool supports_streams() const noexcept override { return upstream_->supports_streams(); } /** * @brief Query whether the resource supports the get_mem_info API. * * @return bool true if the upstream resource supports get_mem_info, false otherwise. */ [[nodiscard]] bool supports_get_mem_info() const noexcept override { return upstream_->supports_get_mem_info(); } private: /** * @brief Allocates memory of size at least `bytes` using the upstream * resource. * * @throws `exception_type` if the requested allocation could not be fulfilled * by the upstream resource. * * @param bytes The size, in bytes, of the allocation * @param stream Stream on which to perform the allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t bytes, cuda_stream_view stream) override { void* ret{}; while (true) { try { ret = upstream_->allocate(bytes, stream); break; } catch (exception_type const& e) { if (!callback_(bytes, callback_arg_)) { throw; } } } return ret; } /** * @brief Free allocation of size `bytes` pointed to by `ptr` * * @param ptr Pointer to be deallocated * @param bytes Size of the allocation * @param stream Stream on which to perform the deallocation */ void do_deallocate(void* ptr, std::size_t bytes, cuda_stream_view stream) override { upstream_->deallocate(ptr, bytes, stream); } /** * @brief Compare the upstream resource to another. * * @param other The other resource to compare to * @return true If the two resources are equivalent * @return false If the two resources are not equal */ [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override { if (this == &other) { return true; } auto cast = dynamic_cast<failure_callback_resource_adaptor<Upstream> const*>(&other); return cast != nullptr ? upstream_->is_equal(*cast->get_upstream()) : upstream_->is_equal(other); } /** * @brief Get free and available memory from upstream resource. * * @throws rmm::cuda_error if unable to retrieve memory info. * * @param stream Stream on which to get the mem info. 
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    cuda_stream_view stream) const override
  {
    return upstream_->get_mem_info(stream);
  }

  Upstream* upstream_;  // the upstream resource used for satisfying allocation requests
  failure_callback_t callback_;
  void* callback_arg_;
};

/** @} */  // end of group
}  // namespace rmm::mr
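As a complement to the retry example in the class documentation above, here is a hedged sketch of a callback that only logs the failed allocation size and returns false, so the adaptor re-throws the caught `rmm::out_of_memory`. The function and variable names are illustrative.

```cpp
#include <rmm/mr/device/failure_callback_resource_adaptor.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

#include <cstddef>
#include <iostream>

bool log_and_rethrow(std::size_t bytes, void* /*callback_arg*/)
{
  std::cerr << "allocation of " << bytes << " bytes failed\n";
  return false;  // do not retry; let the adaptor re-throw the exception
}

int main()
{
  auto* upstream = rmm::mr::get_current_device_resource();
  rmm::mr::failure_callback_resource_adaptor<rmm::mr::device_memory_resource> mr{
    upstream, log_and_rethrow, nullptr};
  rmm::mr::set_current_device_resource(&mr);
  return 0;
}
```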
0
rapidsai_public_repos/rmm/include/rmm/mr
rapidsai_public_repos/rmm/include/rmm/mr/device/cuda_async_memory_resource.hpp
/* * Copyright (c) 2021-2022, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/cuda_util.hpp> #include <rmm/detail/dynamic_load_runtime.hpp> #include <rmm/detail/error.hpp> #include <rmm/mr/device/cuda_async_view_memory_resource.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <rmm/detail/thrust_namespace.h> #include <thrust/optional.h> #include <cuda_runtime_api.h> #include <cstddef> #include <limits> #if CUDART_VERSION >= 11020 // 11.2 introduced cudaMallocAsync #ifndef RMM_DISABLE_CUDA_MALLOC_ASYNC #define RMM_CUDA_MALLOC_ASYNC_SUPPORT #endif #endif namespace rmm::mr { /** * @addtogroup device_memory_resources * @{ * @file */ /** * @brief `device_memory_resource` derived class that uses `cudaMallocAsync`/`cudaFreeAsync` for * allocation/deallocation. */ class cuda_async_memory_resource final : public device_memory_resource { public: /** * @brief Flags for specifying memory allocation handle types. * * @note These values are exact copies from `cudaMemAllocationHandleType`. We need to * define our own enum here because the earliest CUDA runtime version that supports asynchronous * memory pools (CUDA 11.2) did not support these flags, so we need a placeholder that can be * used consistently in the constructor of `cuda_async_memory_resource` with all versions of * CUDA >= 11.2. See the `cudaMemAllocationHandleType` docs at * https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html */ enum class allocation_handle_type { none = 0x0, ///< Does not allow any export mechanism. posix_file_descriptor = 0x1, ///< Allows a file descriptor to be used for exporting. Permitted ///< only on POSIX systems. win32 = 0x2, ///< Allows a Win32 NT handle to be used for exporting. (HANDLE) win32_kmt = 0x4 ///< Allows a Win32 KMT handle to be used for exporting. (D3DKMT_HANDLE) }; /** * @brief Constructs a cuda_async_memory_resource with the optionally specified initial pool size * and release threshold. * * If the pool size grows beyond the release threshold, unused memory held by the pool will be * released at the next synchronization event. * * @throws rmm::logic_error if the CUDA version does not support `cudaMallocAsync` * * @param initial_pool_size Optional initial size in bytes of the pool. If no value is provided, * initial pool size is half of the available GPU memory. * @param release_threshold Optional release threshold size in bytes of the pool. If no value is * provided, the release threshold is set to the total amount of memory on the current device. * @param export_handle_type Optional `cudaMemAllocationHandleType` that allocations from this * resource should support interprocess communication (IPC). Default is * `cudaMemHandleTypeNone` for no IPC support. 
*/ // NOLINTNEXTLINE(bugprone-easily-swappable-parameters) cuda_async_memory_resource(thrust::optional<std::size_t> initial_pool_size = {}, thrust::optional<std::size_t> release_threshold = {}, thrust::optional<allocation_handle_type> export_handle_type = {}) { #ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT // Check if cudaMallocAsync Memory pool supported RMM_EXPECTS(rmm::detail::async_alloc::is_supported(), "cudaMallocAsync not supported with this CUDA driver/runtime version"); // Construct explicit pool cudaMemPoolProps pool_props{}; pool_props.allocType = cudaMemAllocationTypePinned; pool_props.handleTypes = static_cast<cudaMemAllocationHandleType>( export_handle_type.value_or(allocation_handle_type::none)); RMM_EXPECTS(rmm::detail::async_alloc::is_export_handle_type_supported(pool_props.handleTypes), "Requested IPC memory handle type not supported"); pool_props.location.type = cudaMemLocationTypeDevice; pool_props.location.id = rmm::get_current_cuda_device().value(); cudaMemPool_t cuda_pool_handle{}; RMM_CUDA_TRY(rmm::detail::async_alloc::cudaMemPoolCreate(&cuda_pool_handle, &pool_props)); pool_ = cuda_async_view_memory_resource{cuda_pool_handle}; // CUDA drivers before 11.5 have known incompatibilities with the async allocator. // We'll disable `cudaMemPoolReuseAllowOpportunistic` if cuda driver < 11.5. // See https://github.com/NVIDIA/spark-rapids/issues/4710. int driver_version{}; RMM_CUDA_TRY(cudaDriverGetVersion(&driver_version)); constexpr auto min_async_version{11050}; if (driver_version < min_async_version) { int disabled{0}; RMM_CUDA_TRY(rmm::detail::async_alloc::cudaMemPoolSetAttribute( pool_handle(), cudaMemPoolReuseAllowOpportunistic, &disabled)); } auto const [free, total] = rmm::detail::available_device_memory(); // Need an l-value to take address to pass to cudaMemPoolSetAttribute uint64_t threshold = release_threshold.value_or(total); RMM_CUDA_TRY(rmm::detail::async_alloc::cudaMemPoolSetAttribute( pool_handle(), cudaMemPoolAttrReleaseThreshold, &threshold)); // Allocate and immediately deallocate the initial_pool_size to prime the pool with the // specified size auto const pool_size = initial_pool_size.value_or(free / 2); auto* ptr = do_allocate(pool_size, cuda_stream_default); do_deallocate(ptr, pool_size, cuda_stream_default); #else RMM_FAIL( "cudaMallocAsync not supported by the version of the CUDA Toolkit used for this build"); #endif } #ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT /** * @brief Returns the underlying native handle to the CUDA pool * */ [[nodiscard]] cudaMemPool_t pool_handle() const noexcept { return pool_.pool_handle(); } #endif ~cuda_async_memory_resource() override { #if defined(RMM_CUDA_MALLOC_ASYNC_SUPPORT) RMM_ASSERT_CUDA_SUCCESS(rmm::detail::async_alloc::cudaMemPoolDestroy(pool_handle())); #endif } cuda_async_memory_resource(cuda_async_memory_resource const&) = delete; cuda_async_memory_resource(cuda_async_memory_resource&&) = delete; cuda_async_memory_resource& operator=(cuda_async_memory_resource const&) = delete; cuda_async_memory_resource& operator=(cuda_async_memory_resource&&) = delete; /** * @brief Query whether the resource supports use of non-null CUDA streams for * allocation/deallocation. `cuda_memory_resource` does not support streams. * * @returns bool true */ [[nodiscard]] bool supports_streams() const noexcept override { return true; } /** * @brief Query whether the resource supports the get_mem_info API. 
   *
   * @return false
   */
  [[nodiscard]] bool supports_get_mem_info() const noexcept override { return false; }

 private:
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
  cuda_async_view_memory_resource pool_{};
#endif

  /**
   * @brief Allocates memory of size at least \p bytes.
   *
   * The returned pointer will have at minimum 256 byte alignment.
   *
   * @param bytes The size of the allocation
   * @param stream Stream on which to perform allocation
   * @return void* Pointer to the newly allocated memory
   */
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    void* ptr{nullptr};
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
    ptr = pool_.allocate(bytes, stream);
#else
    (void)bytes;
    (void)stream;
#endif
    return ptr;
  }

  /**
   * @brief Deallocate memory pointed to by \p p.
   *
   * @param ptr Pointer to be deallocated
   * @param bytes The size in bytes of the allocation. This must be equal to the
   * value of `bytes` that was passed to the `allocate` call that returned `p`.
   * @param stream Stream on which to perform deallocation
   */
  void do_deallocate(void* ptr, std::size_t bytes, rmm::cuda_stream_view stream) override
  {
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
    pool_.deallocate(ptr, bytes, stream);
#else
    (void)ptr;
    (void)bytes;
    (void)stream;
#endif
  }

  /**
   * @brief Compare this resource to another.
   *
   * @param other The other resource to compare to
   * @return true If the two resources are equivalent
   * @return false If the two resources are not equal
   */
  [[nodiscard]] bool do_is_equal(device_memory_resource const& other) const noexcept override
  {
    auto const* async_mr = dynamic_cast<cuda_async_memory_resource const*>(&other);
#ifdef RMM_CUDA_MALLOC_ASYNC_SUPPORT
    return (async_mr != nullptr) && (this->pool_handle() == async_mr->pool_handle());
#else
    return async_mr != nullptr;
#endif
  }

  /**
   * @brief Get free and available memory for memory resource
   *
   * @throws rmm::cuda_error if unable to retrieve memory info.
   *
   * @return std::pair containing free_size and total_size of memory
   */
  [[nodiscard]] std::pair<std::size_t, std::size_t> do_get_mem_info(
    rmm::cuda_stream_view) const override
  {
    return std::make_pair(0, 0);
  }
};

/** @} */  // end of group
}  // namespace rmm::mr
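A brief construction sketch for the resource above: it primes the `cudaMallocAsync` pool with an explicit initial size and release threshold and installs the resource for the current device. The sizes are illustrative placeholders; this assumes a CUDA 11.2+ toolkit and driver with memory pool support.

```cpp
#include <rmm/mr/device/cuda_async_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

#include <cstddef>

int main()
{
  std::size_t const initial_pool_size = 1u << 28;  // 256 MiB primed at construction (illustrative)
  std::size_t const release_threshold = 1u << 30;  // keep up to 1 GiB cached in the pool

  rmm::mr::cuda_async_memory_resource mr{initial_pool_size, release_threshold};
  rmm::mr::set_current_device_resource(&mr);
  return 0;
}
```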
0
rapidsai_public_repos/rmm/include/rmm/mr/device
rapidsai_public_repos/rmm/include/rmm/mr/device/detail/coalescing_free_list.hpp
/* * Copyright (c) 2019-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/detail/error.hpp> #include <rmm/mr/device/detail/free_list.hpp> #include <fmt/core.h> #include <algorithm> #include <cassert> #include <cstddef> #include <iostream> #include <iterator> #include <list> namespace rmm::mr::detail { /** * @brief A simple block structure specifying the size and location of a block * of memory, with a flag indicating whether it is the head of a block * of memory allocated from the heap (or upstream allocator). */ struct block : public block_base { block() = default; block(char* ptr, std::size_t size, bool is_head) : block_base{ptr}, size_bytes{size}, head{is_head} { } /** * @brief Returns the pointer to the memory represented by this block. * * @return the pointer to the memory represented by this block. */ [[nodiscard]] inline char* pointer() const { return static_cast<char*>(ptr); } /** * @brief Returns the size of the memory represented by this block. * * @return the size in bytes of the memory represented by this block. */ [[nodiscard]] inline std::size_t size() const { return size_bytes; } /** * @brief Returns whether this block is the start of an allocation from an upstream allocator. * * A block `b` may not be coalesced with a preceding contiguous block `a` if `b.is_head == true`. * * @return true if this block is the start of an allocation from an upstream allocator. */ [[nodiscard]] inline bool is_head() const { return head; } /** * @brief Comparison operator to enable comparing blocks and storing in ordered containers. * * Orders by ptr address. * @param rhs * @return true if this block's ptr is < than `rhs` block pointer. * @return false if this block's ptr is >= than `rhs` block pointer. */ inline bool operator<(block const& rhs) const noexcept { return pointer() < rhs.pointer(); }; /** * @brief Coalesce two contiguous blocks into one. * * `this` must immediately precede `b` and both `this` and `b` must be from the same upstream * allocation. That is, `this->is_contiguous_before(b)`. Otherwise behavior is undefined. * * @param blk block to merge * @return The merged block */ [[nodiscard]] inline block merge(block const& blk) const noexcept { assert(is_contiguous_before(blk)); return {pointer(), size() + blk.size(), is_head()}; } /** * @brief Verifies whether this block can be merged to the beginning of block b. * * @param blk The block to check for contiguity. * @return Returns true if this blocks's `ptr` + `size` == `b.ptr`, and `not b.is_head`, false otherwise. */ [[nodiscard]] inline bool is_contiguous_before(block const& blk) const noexcept { // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return (pointer() + size() == blk.ptr) and not(blk.is_head()); } /** * @brief Is this block large enough to fit `sz` bytes? * * @param bytes The size in bytes to check for fit. 
* @return true if this block is at least `bytes` bytes */ [[nodiscard]] inline bool fits(std::size_t bytes) const noexcept { return size() >= bytes; } /** * @brief Is this block a better fit for `sz` bytes than block `b`? * * @param bytes The size in bytes to check for best fit. * @param blk The other block to check for fit. * @return true If this block is a tighter fit for `bytes` bytes than block `blk`. * @return false If this block does not fit `bytes` bytes or `blk` is a tighter fit. */ [[nodiscard]] inline bool is_better_fit(std::size_t bytes, block const& blk) const noexcept { return fits(bytes) && (size() < blk.size() || blk.size() < bytes); } #ifdef RMM_DEBUG_PRINT /** * @brief Print this block. For debugging. */ inline void print() const { std::cout << fmt::format("{} {} B", fmt::ptr(pointer()), size()) << std::endl; } #endif private: std::size_t size_bytes{}; ///< Size in bytes bool head{}; ///< Indicates whether ptr was allocated from the heap }; #ifdef RMM_DEBUG_PRINT /// Print block on an ostream inline std::ostream& operator<<(std::ostream& out, const block& blk) { out << fmt::format("{} {} B\n", fmt::ptr(blk.pointer()), blk.size()); return out; } #endif /** * @brief Comparator for block types based on pointer address. * * This comparator allows searching associative containers of blocks by pointer rather than * having to search by the contained type. Saves potentially error-prone temporary construction of * a block when you just want to search by pointer. */ template <typename block_type> struct compare_blocks { // is_transparent (C++14 feature) allows search key type for set<block_type>::find() using is_transparent = void; bool operator()(block_type const& lhs, block_type const& rhs) const { return lhs < rhs; } bool operator()(char const* ptr, block_type const& rhs) const { return ptr < rhs.pointer(); } bool operator()(block_type const& lhs, char const* ptr) const { return lhs.pointer() < ptr; }; }; /** * @brief An ordered list of free memory blocks that coalesces contiguous blocks on insertion. * * @tparam list_type the type of the internal list data structure. */ struct coalescing_free_list : free_list<block> { coalescing_free_list() = default; ~coalescing_free_list() override = default; coalescing_free_list(coalescing_free_list const&) = delete; coalescing_free_list& operator=(coalescing_free_list const&) = delete; coalescing_free_list(coalescing_free_list&&) = delete; coalescing_free_list& operator=(coalescing_free_list&&) = delete; /** * @brief Inserts a block into the `free_list` in the correct order, coalescing it with the * preceding and following blocks if either is contiguous. * * @param b The block to insert. */ void insert(block_type const& block) { if (is_empty()) { free_list::insert(cend(), block); return; } // Find the right place (in ascending ptr order) to insert the block // Can't use binary_search because it's a linked list and will be quadratic auto const next = std::find_if(begin(), end(), [block](block_type const& blk) { return block < blk; }); auto const previous = (next == cbegin()) ? 
next : std::prev(next); // Coalesce with neighboring blocks or insert the new block if it can't be coalesced bool const merge_prev = previous->is_contiguous_before(block); bool const merge_next = (next != cend()) && block.is_contiguous_before(*next); if (merge_prev && merge_next) { *previous = previous->merge(block).merge(*next); erase(next); } else if (merge_prev) { *previous = previous->merge(block); } else if (merge_next) { *next = block.merge(*next); } else { free_list::insert(next, block); // cannot be coalesced, just insert } } /** * @brief Moves blocks from free_list `other` into this free_list in their correct order, * coalescing them with their preceding and following blocks if they are contiguous. * * @tparam InputIt iterator type * @param other free_list of blocks to insert */ void insert(free_list&& other) { using std::make_move_iterator; auto inserter = [this](block_type&& block) { this->insert(block); }; std::for_each(make_move_iterator(other.begin()), make_move_iterator(other.end()), inserter); } /** * @brief Finds the smallest block in the `free_list` large enough to fit `size` bytes. * * This is a "best fit" search. * * @param size The size in bytes of the desired block. * @return A block large enough to store `size` bytes. */ block_type get_block(std::size_t size) { // find best fit block auto finder = [size](block_type const& lhs, block_type const& rhs) { return lhs.is_better_fit(size, rhs); }; auto const iter = std::min_element(cbegin(), cend(), finder); if (iter != cend() && iter->fits(size)) { // Remove the block from the free_list and return it. block_type const found = *iter; erase(iter); return found; } return block_type{}; // not found } #ifdef RMM_DEBUG_PRINT /** * @brief Print all blocks in the free_list. */ void print() const { std::cout << size() << '\n'; std::for_each(cbegin(), cend(), [](auto const iter) { iter.print(); }); } #endif }; // coalescing_free_list } // namespace rmm::mr::detail
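To make the insertion policy above concrete, here is a small self-contained illustration of address-ordered insertion with coalescing of adjacent spans. It deliberately mirrors the logic of `coalescing_free_list::insert` on a simplified `span` type rather than using the RMM class itself, so it compiles with only the standard library.

```cpp
// Simplified, self-contained illustration of address-ordered insertion with
// coalescing, mirroring coalescing_free_list::insert (not the RMM class itself).
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>
#include <list>

struct span {
  char* ptr{};
  std::size_t size{};
  bool is_contiguous_before(span const& other) const { return ptr + size == other.ptr; }
};

void insert_coalescing(std::list<span>& free_spans, span blk)
{
  // Find the first span whose address is greater than blk's (ascending ptr order).
  auto const next = std::find_if(
    free_spans.begin(), free_spans.end(), [&](span const& s) { return blk.ptr < s.ptr; });
  auto const prev = (next == free_spans.begin()) ? free_spans.end() : std::prev(next);

  bool const merge_prev = (prev != free_spans.end()) && prev->is_contiguous_before(blk);
  bool const merge_next = (next != free_spans.end()) && blk.is_contiguous_before(*next);

  if (merge_prev && merge_next) {  // blk bridges the gap between prev and next
    prev->size += blk.size + next->size;
    free_spans.erase(next);
  } else if (merge_prev) {
    prev->size += blk.size;
  } else if (merge_next) {
    next->ptr = blk.ptr;
    next->size += blk.size;
  } else {
    free_spans.insert(next, blk);  // cannot be coalesced, just insert
  }
}

int main()
{
  char buffer[1024];
  std::list<span> free_spans;
  insert_coalescing(free_spans, {buffer, 256});
  insert_coalescing(free_spans, {buffer + 512, 256});
  insert_coalescing(free_spans, {buffer + 256, 256});  // bridges the two spans above

  for (auto const& s : free_spans) {
    std::cout << static_cast<void*>(s.ptr) << " " << s.size << " B\n";
  }
  // Expected: a single 768 B span starting at `buffer`.
  return 0;
}
```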
0
rapidsai_public_repos/rmm/include/rmm/mr/device
rapidsai_public_repos/rmm/include/rmm/mr/device/detail/stream_ordered_memory_resource.hpp
/* * Copyright (c) 2020-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_device.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/error.hpp> #include <rmm/logger.hpp> #include <rmm/mr/device/device_memory_resource.hpp> #include <cuda_runtime_api.h> #include <fmt/core.h> #include <cstddef> #include <map> #include <mutex> #include <unordered_map> namespace rmm::mr::detail { /** * @brief A CRTP helper function * * https://www.fluentcpp.com/2017/05/19/crtp-helper/ * * Does two things: * 1. Makes "crtp" explicit in the inheritance structure of a CRTP base class. * 2. Avoids having to `static_cast` in a lot of places * * @tparam T The derived class in a CRTP hierarchy */ template <typename T> struct crtp { [[nodiscard]] T& underlying() { return static_cast<T&>(*this); } [[nodiscard]] T const& underlying() const { return static_cast<T const&>(*this); } }; /** * @brief Base class for a stream-ordered memory resource * * This base class uses CRTP (https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern) * to provide static polymorphism to enable defining suballocator resources that maintain separate * pools per stream. All of the stream-ordering logic is contained in this class, but the logic * to determine how memory pools are managed and the type of allocation is implemented in a derived * class and in a free list class. * * For example, a coalescing pool memory resource uses a coalescing_free_list and maintains data * structures for allocated blocks and has functions to allocate and free blocks and to expand the * pool. * * Classes derived from stream_ordered_memory_resource must implement the following four methods, * documented separately: * * 1. `std::size_t get_maximum_allocation_size() const` * 2. `block_type expand_pool(std::size_t size, free_list& blocks, cuda_stream_view stream)` * 3. `split_block allocate_from_block(block_type const& b, std::size_t size)` * 4. `block_type free_block(void* p, std::size_t size) noexcept` */ template <typename PoolResource, typename FreeListType> class stream_ordered_memory_resource : public crtp<PoolResource>, public device_memory_resource { public: ~stream_ordered_memory_resource() override { release(); } stream_ordered_memory_resource() = default; stream_ordered_memory_resource(stream_ordered_memory_resource const&) = delete; stream_ordered_memory_resource(stream_ordered_memory_resource&&) = delete; stream_ordered_memory_resource& operator=(stream_ordered_memory_resource const&) = delete; stream_ordered_memory_resource& operator=(stream_ordered_memory_resource&&) = delete; protected: using free_list = FreeListType; using block_type = typename free_list::block_type; using lock_guard = std::lock_guard<std::mutex>; // Derived classes must implement these four methods /* * @brief Get the maximum size of a single allocation supported by this suballocator memory * resource * * Default implementation is the maximum `std::size_t` value, but fixed-size allocators will have * a lower limit. 
Override this function in derived classes as necessary. * * @return std::size_t The maximum size of a single allocation supported by this memory resource */ // std::size_t get_maximum_allocation_size() const /* * @brief Allocate space (typically from upstream) to supply the suballocation pool and return * a sufficiently sized block. * * This function returns a block because in some suballocators, a single block is allocated * from upstream and returned. In other suballocators, many blocks are created from upstream. In * the latter case, the function returns one block and inserts all the rest into the free list * `blocks`. * * @param size The minimum size block to return * @param blocks The free list into which to optionally insert new blocks * @param stream The stream on which the memory is to be used. * @return block_type a block of at least `size` bytes */ // block_type expand_pool(std::size_t size, free_list& blocks, cuda_stream_view stream) /// Pair representing a block that has been split for allocation using split_block = std::pair<block_type, block_type>; /* * @brief Split block `b` if necessary to return a pointer to memory of `size` bytes. * * If the block is split, the remainder is returned as the remainder element in the output * `split_block`. * * @param b The block to allocate from. * @param size The size in bytes of the requested allocation. * @param stream_event The stream and associated event on which the allocation will be used. * @return A `split_block` comprising the allocated pointer and any unallocated remainder of the * input block. */ // split_block allocate_from_block(block_type const& b, std::size_t size) /* * @brief Finds, frees and returns the block associated with pointer `p`. * * @param p The pointer to the memory to free. * @param size The size of the memory to free. Must be equal to the original allocation size. * @return The (now freed) block associated with `p`. The caller is expected to return the block * to the pool. */ // block_type free_block(void* p, std::size_t size) noexcept /** * @brief Returns the block `b` (last used on stream `stream_event`) to the pool. * * @param block The block to insert into the pool. * @param stream The stream on which the memory was last used. */ void insert_block(block_type const& block, cuda_stream_view stream) { stream_free_blocks_[get_event(stream)].insert(block); } void insert_blocks(free_list&& blocks, cuda_stream_view stream) { stream_free_blocks_[get_event(stream)].insert(std::move(blocks)); } #ifdef RMM_DEBUG_PRINT void print_free_blocks() const { std::cout << "stream free blocks: "; for (auto& free_blocks : stream_free_blocks_) { std::cout << "stream: " << free_blocks.first.stream << " event: " << free_blocks.first.event << " "; free_blocks.second.print(); std::cout << std::endl; } std::cout << std::endl; } #endif /** * @brief Get the mutex object * * @return std::mutex */ std::mutex& get_mutex() { return mtx_; } struct stream_event_pair { cudaStream_t stream; cudaEvent_t event; bool operator<(stream_event_pair const& rhs) const { return event < rhs.event; } }; /** * @brief Allocates memory of size at least `bytes`. * * The returned pointer has at least 256B alignment. 
* * @throws `std::bad_alloc` if the requested allocation could not be fulfilled * * @param size The size in bytes of the allocation * @param stream The stream in which to order this allocation * @return void* Pointer to the newly allocated memory */ void* do_allocate(std::size_t size, cuda_stream_view stream) override { RMM_LOG_TRACE("[A][stream {:p}][{}B]", fmt::ptr(stream.value()), size); if (size <= 0) { return nullptr; } lock_guard lock(mtx_); auto stream_event = get_event(stream); size = rmm::detail::align_up(size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); RMM_EXPECTS(size <= this->underlying().get_maximum_allocation_size(), "Maximum allocation size exceeded", rmm::out_of_memory); auto const block = this->underlying().get_block(size, stream_event); RMM_LOG_TRACE("[A][stream {:p}][{}B][{:p}]", fmt::ptr(stream_event.stream), size, fmt::ptr(block.pointer())); log_summary_trace(); return block.pointer(); } /** * @brief Deallocate memory pointed to by `p`. * * @throws nothing * * @param p Pointer to be deallocated * @param size The size in bytes of the allocation to deallocate * @param stream The stream in which to order this deallocation */ void do_deallocate(void* ptr, std::size_t size, cuda_stream_view stream) override { RMM_LOG_TRACE("[D][stream {:p}][{}B][{:p}]", fmt::ptr(stream.value()), size, ptr); if (size <= 0 || ptr == nullptr) { return; } lock_guard lock(mtx_); auto stream_event = get_event(stream); size = rmm::detail::align_up(size, rmm::detail::CUDA_ALLOCATION_ALIGNMENT); auto const block = this->underlying().free_block(ptr, size); // TODO: cudaEventRecord has significant overhead on deallocations. For the non-PTDS case // we may be able to delay recording the event in some situations. But using events rather than // streams allows stealing from deleted streams. RMM_ASSERT_CUDA_SUCCESS(cudaEventRecord(stream_event.event, stream.value())); stream_free_blocks_[stream_event].insert(block); log_summary_trace(); } private: /** * @brief get a unique CUDA event (possibly new) associated with `stream` * * The event is created on the first call, and it is not recorded. If compiled for per-thread * default stream and `stream` is the default stream, the event is created in thread local * memory and is unique per CPU thread. * * @param stream The stream for which to get an event. * @return The stream_event for `stream`. */ stream_event_pair get_event(cuda_stream_view stream) { if (stream.is_per_thread_default()) { // Create a thread-local event for each device. These events are // deliberately leaked since the destructor needs to call into // the CUDA runtime and thread_local destructors (can) run below // main: it is undefined behaviour to call into the CUDA // runtime below main. thread_local std::vector<cudaEvent_t> events_tls(rmm::get_num_cuda_devices()); auto event = [device_id = this->device_id_]() { auto& e = events_tls[device_id.value()]; if (!e) { // These events are deliberately not destructed and therefore live until // program exit. RMM_ASSERT_CUDA_SUCCESS(cudaEventCreateWithFlags(&e, cudaEventDisableTiming)); } return e; }(); return stream_event_pair{stream.value(), event}; } // We use cudaStreamLegacy as the event map key for the default stream for consistency between // PTDS and non-PTDS mode. In PTDS mode, the cudaStreamLegacy map key will only exist if the // user explicitly passes it, so it is used as the default location for the free list // at construction. For consistency, the same key is used for null stream free lists in // non-PTDS mode. 
// NOLINTNEXTLINE(cppcoreguidelines-pro-type-cstyle-cast) auto* const stream_to_store = stream.is_default() ? cudaStreamLegacy : stream.value(); auto const iter = stream_events_.find(stream_to_store); return (iter != stream_events_.end()) ? iter->second : [&]() { stream_event_pair stream_event{stream_to_store}; RMM_ASSERT_CUDA_SUCCESS( cudaEventCreateWithFlags(&stream_event.event, cudaEventDisableTiming)); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) stream_events_[stream_to_store] = stream_event; return stream_event; }(); } /** * @brief Splits a block into an allocated block of `size` bytes and a remainder block, and * inserts the remainder into a free list. * * @param block The block to split into allocated and remainder portions. * @param size The size of the block to allocate from `b`. * @param blocks The `free_list` in which to insert the remainder block. * @return The allocated block. */ block_type allocate_and_insert_remainder(block_type block, std::size_t size, free_list& blocks) { auto const [allocated, remainder] = this->underlying().allocate_from_block(block, size); if (remainder.is_valid()) { blocks.insert(remainder); } return allocated; } /** * @brief Get an available memory block of at least `size` bytes * * @param size The number of bytes to allocate * @param stream_event The stream and associated event on which the allocation will be used. * @return block_type A block of memory of at least `size` bytes */ block_type get_block(std::size_t size, stream_event_pair stream_event) { // Try to find a satisfactory block in free list for the same stream (no sync required) auto iter = stream_free_blocks_.find(stream_event); if (iter != stream_free_blocks_.end()) { block_type const block = iter->second.get_block(size); if (block.is_valid()) { return allocate_and_insert_remainder(block, size, iter->second); } } free_list& blocks = (iter != stream_free_blocks_.end()) ? iter->second : stream_free_blocks_[stream_event]; // Try to find an existing block in another stream { block_type const block = get_block_from_other_stream(size, stream_event, blocks, false); if (block.is_valid()) { return block; } } // no large enough blocks available on other streams, so sync and merge until we find one { block_type const block = get_block_from_other_stream(size, stream_event, blocks, true); if (block.is_valid()) { return block; } } log_summary_trace(); // no large enough blocks available after merging, so grow the pool block_type const block = this->underlying().expand_pool(size, blocks, cuda_stream_view{stream_event.stream}); return allocate_and_insert_remainder(block, size, blocks); } /** * @brief Find a free block of at least `size` bytes in a `free_list` with a different * stream/event than `stream_event`. * * If an appropriate block is found in a free list F associated with event E, * `stream_event.stream` will be made to wait on event E. * * @param size The requested size of the allocation. * @param stream_event The stream and associated event on which the allocation is being * requested. * @return A block with non-null pointer and size >= `size`, or a nullptr block if none is * available in `blocks`. 
*/ block_type get_block_from_other_stream(std::size_t size, stream_event_pair stream_event, free_list& blocks, bool merge_first) { auto find_block = [&](auto iter) { auto other_event = iter->first.event; auto& other_blocks = iter->second; if (merge_first) { merge_lists(stream_event, blocks, other_event, std::move(other_blocks)); RMM_LOG_DEBUG("[A][Stream {:p}][{}B][Merged stream {:p}]", fmt::ptr(stream_event.stream), size, fmt::ptr(iter->first.stream)); stream_free_blocks_.erase(iter); block_type const block = blocks.get_block(size); // get the best fit block in merged lists if (block.is_valid()) { return allocate_and_insert_remainder(block, size, blocks); } } else { block_type const block = other_blocks.get_block(size); if (block.is_valid()) { // Since we found a block associated with a different stream, we have to insert a wait // on the stream's associated event into the allocating stream. RMM_CUDA_TRY(cudaStreamWaitEvent(stream_event.stream, other_event, 0)); return allocate_and_insert_remainder(block, size, other_blocks); } } return block_type{}; }; for (auto iter = stream_free_blocks_.begin(), next_iter = iter; iter != stream_free_blocks_.end(); iter = next_iter) { ++next_iter; // Points to element after `iter` to allow erasing `iter` in the loop body if (iter->first.event != stream_event.event) { block_type const block = find_block(iter); if (block.is_valid()) { RMM_LOG_DEBUG((merge_first) ? "[A][Stream {:p}][{}B][Found after merging stream {:p}]" : "[A][Stream {:p}][{}B][Taken from stream {:p}]", fmt::ptr(stream_event.stream), size, fmt::ptr(iter->first.stream)); return block; } } } return block_type{}; } void merge_lists(stream_event_pair stream_event, free_list& blocks, cudaEvent_t other_event, free_list&& other_blocks) { // Since we found a block associated with a different stream, we have to insert a wait // on the stream's associated event into the allocating stream. RMM_CUDA_TRY(cudaStreamWaitEvent(stream_event.stream, other_event, 0)); // Merge the two free lists blocks.insert(std::move(other_blocks)); } /** * @brief Clear free lists and events * * Note: only called by destructor. */ void release() { lock_guard lock(mtx_); for (auto s_e : stream_events_) { RMM_ASSERT_CUDA_SUCCESS(cudaEventSynchronize(s_e.second.event)); RMM_ASSERT_CUDA_SUCCESS(cudaEventDestroy(s_e.second.event)); } stream_events_.clear(); stream_free_blocks_.clear(); } void log_summary_trace() { #if (SPDLOG_ACTIVE_LEVEL <= SPDLOG_LEVEL_TRACE) std::size_t num_blocks{0}; std::size_t max_block{0}; std::size_t free_mem{0}; std::for_each(stream_free_blocks_.cbegin(), stream_free_blocks_.cend(), [this, &num_blocks, &max_block, &free_mem](auto const& freelist) { num_blocks += freelist.second.size(); auto summary = this->underlying().free_list_summary(freelist.second); max_block = std::max(summary.first, max_block); free_mem += summary.second; }); RMM_LOG_TRACE("[Summary][Free lists: {}][Blocks: {}][Max Block: {}][Total Free: {}]", stream_free_blocks_.size(), num_blocks, max_block, free_mem); #endif } // map of stream_event_pair --> free_list // Event (or associated stream) must be synced before allocating from associated free_list to a // different stream std::map<stream_event_pair, free_list> stream_free_blocks_; // bidirectional mapping between non-default streams and events std::unordered_map<cudaStream_t, stream_event_pair> stream_events_; std::mutex mtx_; // mutex for thread-safe access rmm::cuda_device_id device_id_{rmm::get_current_cuda_device()}; }; // namespace detail } // namespace rmm::mr::detail
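A brief aside on the `crtp` helper used throughout the header above: the base class reaches the derived class's hooks through `underlying()` instead of repeating `static_cast<Derived*>(this)`. The following self-contained sketch (simplified names, not RMM code) shows the shape of that pattern; `get_maximum_allocation_size` stands in for one of the four hooks listed in the class documentation.

```cpp
// Minimal sketch of the CRTP pattern used by stream_ordered_memory_resource:
// the base class calls into the derived class through crtp<T>::underlying().
#include <cstddef>
#include <iostream>

template <typename T>
struct crtp {
  T& underlying() { return static_cast<T&>(*this); }
  T const& underlying() const { return static_cast<T const&>(*this); }
};

template <typename Derived>
class suballocator_base : public crtp<Derived> {
 public:
  // Common policy lives in the base; the size limit is a hook supplied by Derived.
  std::size_t clamp(std::size_t bytes) const
  {
    auto const max_size = this->underlying().get_maximum_allocation_size();
    return bytes < max_size ? bytes : max_size;
  }
};

class fixed_limit_allocator : public suballocator_base<fixed_limit_allocator> {
 public:
  // Hook required by the base class (analogous to get_maximum_allocation_size above).
  std::size_t get_maximum_allocation_size() const { return 1U << 20; }  // 1 MiB
};

int main()
{
  fixed_limit_allocator alloc;
  std::cout << alloc.clamp(4U << 20) << "\n";  // prints 1048576
  return 0;
}
```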
0
rapidsai_public_repos/rmm/include/rmm/mr/device
rapidsai_public_repos/rmm/include/rmm/mr/device/detail/arena.hpp
/* * Copyright (c) 2019-2023, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <rmm/cuda_stream_view.hpp> #include <rmm/detail/aligned.hpp> #include <rmm/detail/cuda_util.hpp> #include <rmm/detail/error.hpp> #include <rmm/detail/logging_assert.hpp> #include <rmm/logger.hpp> #include <cuda_runtime_api.h> #include <fmt/core.h> #include <spdlog/common.h> #include <algorithm> #include <array> #include <cstddef> #include <limits> #include <memory> #include <mutex> #include <numeric> #include <optional> #include <set> namespace rmm::mr::detail::arena { /** * @brief Align up to nearest size class. * * @param[in] value value to align. * @return Return the aligned value. */ inline std::size_t align_to_size_class(std::size_t value) noexcept { // See http://jemalloc.net/jemalloc.3.html. // NOLINTBEGIN(readability-magic-numbers,cppcoreguidelines-avoid-magic-numbers) static std::array<std::size_t, 117> size_classes{ // clang-format off // Spacing 256: 256UL, 512UL, 768UL, 1024UL, 1280UL, 1536UL, 1792UL, 2048UL, // Spacing 512: 2560UL, 3072UL, 3584UL, 4096UL, // Spacing 1 KiB: 5UL << 10, 6UL << 10, 7UL << 10, 8UL << 10, // Spacing 2 KiB: 10UL << 10, 12UL << 10, 14UL << 10, 16UL << 10, // Spacing 4 KiB: 20UL << 10, 24UL << 10, 28UL << 10, 32UL << 10, // Spacing 8 KiB: 40UL << 10, 48UL << 10, 56UL << 10, 64UL << 10, // Spacing 16 KiB: 80UL << 10, 96UL << 10, 112UL << 10, 128UL << 10, // Spacing 32 KiB: 160UL << 10, 192UL << 10, 224UL << 10, 256UL << 10, // Spacing 64 KiB: 320UL << 10, 384UL << 10, 448UL << 10, 512UL << 10, // Spacing 128 KiB: 640UL << 10, 768UL << 10, 896UL << 10, 1UL << 20, // Spacing 256 KiB: 1280UL << 10, 1536UL << 10, 1792UL << 10, 2UL << 20, // Spacing 512 KiB: 2560UL << 10, 3UL << 20, 3584UL << 10, 4UL << 20, // Spacing 1 MiB: 5UL << 20, 6UL << 20, 7UL << 20, 8UL << 20, // Spacing 2 MiB: 10UL << 20, 12UL << 20, 14UL << 20, 16UL << 20, // Spacing 4 MiB: 20UL << 20, 24UL << 20, 28UL << 20, 32UL << 20, // Spacing 8 MiB: 40UL << 20, 48UL << 20, 56UL << 20, 64UL << 20, // Spacing 16 MiB: 80UL << 20, 96UL << 20, 112UL << 20, 128UL << 20, // Spacing 32 MiB: 160UL << 20, 192UL << 20, 224UL << 20, 256UL << 20, // Spacing 64 MiB: 320UL << 20, 384UL << 20, 448UL << 20, 512UL << 20, // Spacing 128 MiB: 640UL << 20, 768UL << 20, 896UL << 20, 1UL << 30, // Spacing 256 MiB: 1280UL << 20, 1536UL << 20, 1792UL << 20, 2UL << 30, // Spacing 512 MiB: 2560UL << 20, 3UL << 30, 3584UL << 20, 4UL << 30, // Spacing 1 GiB: 5UL << 30, 6UL << 30, 7UL << 30, 8UL << 30, // Spacing 2 GiB: 10UL << 30, 12UL << 30, 14UL << 30, 16UL << 30, // Spacing 4 GiB: 20UL << 30, 24UL << 30, 28UL << 30, 32UL << 30, // Spacing 8 GiB: 40UL << 30, 48UL << 30, 56UL << 30, 64UL << 30, // Spacing 16 GiB: 80UL << 30, 96UL << 30, 112UL << 30, 128UL << 30, // Spacing 32 GiB: 160UL << 30, 192UL << 30, 224UL << 30, 256UL << 30, // Catch all: std::numeric_limits<std::size_t>::max() // clang-format on }; // NOLINTEND(readability-magic-numbers,cppcoreguidelines-avoid-magic-numbers) auto* bound =
std::lower_bound(size_classes.begin(), size_classes.end(), value); RMM_LOGGING_ASSERT(bound != size_classes.end()); return *bound; } /** * @brief Represents a contiguous region of memory. */ class byte_span { public: /** * @brief Construct a default span. */ byte_span() = default; /** * @brief Construct a span given a pointer and size. * * @param pointer The address for the beginning of the span. * @param size The size of the span. */ byte_span(void* pointer, std::size_t size) : pointer_{static_cast<char*>(pointer)}, size_{size} { RMM_LOGGING_ASSERT(pointer != nullptr); RMM_LOGGING_ASSERT(size > 0); } /// Returns the underlying pointer. [[nodiscard]] char* pointer() const { return pointer_; } /// Returns the size of the span. [[nodiscard]] std::size_t size() const { return size_; } /// Returns the end of the span. [[nodiscard]] char* end() const { return pointer_ + size_; // NOLINT(cppcoreguidelines-pro-bounds-pointer-arithmetic) } /// Returns true if this span is valid (non-null), false otherwise. [[nodiscard]] bool is_valid() const { return pointer_ != nullptr && size_ > 0; } /// Used by std::set to compare spans. bool operator<(byte_span const& span) const { RMM_LOGGING_ASSERT(span.is_valid()); return pointer_ < span.pointer_; } private: char* pointer_{}; ///< Raw memory pointer. std::size_t size_{}; ///< Size in bytes. }; /// Calculate the total size of a set of spans. template <typename T> inline auto total_memory_size(std::set<T> const& spans) { return std::accumulate( spans.cbegin(), spans.cend(), std::size_t{}, [](auto const& lhs, auto const& rhs) { return lhs + rhs.size(); }); } /** * @brief Represents a chunk of memory that can be allocated and deallocated. */ class block final : public byte_span { public: using byte_span::byte_span; /** * @brief Is this block large enough to fit `bytes` bytes? * * @param bytes The size in bytes to check for fit. * @return true if this block is at least `bytes` bytes. */ [[nodiscard]] bool fits(std::size_t bytes) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(bytes > 0); return size() >= bytes; } /** * @brief Verifies whether this block can be merged to the beginning of block blk. * * @param blk The block to check for contiguity. * @return true Returns true if this block's `pointer` + `size` == `blk.pointer`. */ [[nodiscard]] bool is_contiguous_before(block const& blk) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(blk.is_valid()); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return pointer() + size() == blk.pointer(); } /** * @brief Split this block into two by the given size. * * @param bytes The size in bytes of the first block. * @return std::pair<block, block> A pair of blocks split by bytes. */ [[nodiscard]] std::pair<block, block> split(std::size_t bytes) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(size() > bytes); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return {{pointer(), bytes}, {pointer() + bytes, size() - bytes}}; } /** * @brief Coalesce two contiguous blocks into one. * * `this->is_contiguous_before(blk)` must be true. * * @param blk block to merge. * @return block The merged block. */ [[nodiscard]] block merge(block const& blk) const { RMM_LOGGING_ASSERT(is_contiguous_before(blk)); return {pointer(), size() + blk.size()}; } }; /// Comparison function for block sizes. 
inline bool block_size_compare(block const& lhs, block const& rhs) { RMM_LOGGING_ASSERT(lhs.is_valid()); RMM_LOGGING_ASSERT(rhs.is_valid()); return lhs.size() < rhs.size(); } /** * @brief Represents a large chunk of memory that is exchanged between the global arena and * per-thread arenas. */ class superblock final : public byte_span { public: /// Minimum size of a superblock (1 MiB). static constexpr std::size_t minimum_size{1UL << 20}; /// Maximum size of a superblock (1 TiB), as a sanity check. static constexpr std::size_t maximum_size{1UL << 40}; /** * @brief Construct a default superblock. */ superblock() = default; /** * @brief Construct a superblock given a pointer and size. * * @param pointer The address for the beginning of the superblock. * @param size The size of the superblock. */ superblock(void* pointer, std::size_t size) : byte_span{pointer, size} { RMM_LOGGING_ASSERT(size >= minimum_size); RMM_LOGGING_ASSERT(size <= maximum_size); free_blocks_.emplace(pointer, size); } // Disable copy semantics. superblock(superblock const&) = delete; superblock& operator=(superblock const&) = delete; // Allow move semantics. superblock(superblock&&) noexcept = default; superblock& operator=(superblock&&) noexcept = default; ~superblock() = default; /** * @brief Is this superblock empty? * * @return true if this superblock is empty. */ [[nodiscard]] bool empty() const { RMM_LOGGING_ASSERT(is_valid()); return free_blocks_.size() == 1 && free_blocks_.cbegin()->size() == size(); } /** * @brief Return the number of free blocks. * * @return the number of free blocks. */ [[nodiscard]] std::size_t free_blocks() const { RMM_LOGGING_ASSERT(is_valid()); return free_blocks_.size(); } /** * @brief Whether this superblock contains the given block. * * @param blk The block to search for. * @return true if the given block belongs to this superblock. */ [[nodiscard]] bool contains(block const& blk) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(blk.is_valid()); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return pointer() <= blk.pointer() && pointer() + size() >= blk.pointer() + blk.size(); } /** * @brief Can this superblock fit `bytes` bytes? * * @param bytes The size in bytes to check for fit. * @return true if this superblock can fit `bytes` bytes. */ [[nodiscard]] bool fits(std::size_t bytes) const { RMM_LOGGING_ASSERT(is_valid()); return std::any_of(free_blocks_.cbegin(), free_blocks_.cend(), [bytes](auto const& blk) { return blk.fits(bytes); }); } /** * @brief Verifies whether this superblock can be merged to the beginning of superblock s. * * @param s The superblock to check for contiguity. * @return true Returns true if both superblocks are empty and this superblock's * `pointer` + `size` == `s.ptr`. */ [[nodiscard]] bool is_contiguous_before(superblock const& sblk) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(sblk.is_valid()); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return empty() && sblk.empty() && pointer() + size() == sblk.pointer(); } /** * @brief Split this superblock into two by the given size. * * @param bytes The size in bytes of the first block. * @return superblock_pair A pair of superblocks split by bytes. 
*/ [[nodiscard]] std::pair<superblock, superblock> split(std::size_t bytes) const { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(empty() && bytes >= minimum_size && size() >= bytes + minimum_size); // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic) return {superblock{pointer(), bytes}, superblock{pointer() + bytes, size() - bytes}}; } /** * @brief Coalesce two contiguous superblocks into one. * * `this->is_contiguous_before(s)` must be true. * * @param sblk superblock to merge. * @return block The merged block. */ [[nodiscard]] superblock merge(superblock const& sblk) const { RMM_LOGGING_ASSERT(is_contiguous_before(sblk)); return {pointer(), size() + sblk.size()}; } /** * @brief Get the first free block of at least `size` bytes. * * @param size The number of bytes to allocate. * @return block A block of memory of at least `size` bytes, or an empty block if not found. */ block first_fit(std::size_t size) { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(size > 0); auto fits = [size](auto const& blk) { return blk.fits(size); }; auto const iter = std::find_if(free_blocks_.cbegin(), free_blocks_.cend(), fits); if (iter == free_blocks_.cend()) { return {}; } // Remove the block from the free list. auto const blk = *iter; auto const next = free_blocks_.erase(iter); if (blk.size() > size) { // Split the block and put the remainder back. auto const split = blk.split(size); free_blocks_.insert(next, split.second); return split.first; } return blk; } /** * @brief Coalesce the given block with other free blocks. * * @param blk The block to coalesce. */ void coalesce(block const& blk) // NOLINT(readability-function-cognitive-complexity) { RMM_LOGGING_ASSERT(is_valid()); RMM_LOGGING_ASSERT(blk.is_valid()); RMM_LOGGING_ASSERT(contains(blk)); // Find the right place (in ascending address order) to insert the block. auto const next = free_blocks_.lower_bound(blk); auto const previous = next == free_blocks_.cbegin() ? next : std::prev(next); // Coalesce with neighboring blocks. bool const merge_prev = previous != free_blocks_.cend() && previous->is_contiguous_before(blk); bool const merge_next = next != free_blocks_.cend() && blk.is_contiguous_before(*next); if (merge_prev && merge_next) { auto const merged = previous->merge(blk).merge(*next); free_blocks_.erase(previous); auto const iter = free_blocks_.erase(next); free_blocks_.insert(iter, merged); } else if (merge_prev) { auto const merged = previous->merge(blk); auto const iter = free_blocks_.erase(previous); free_blocks_.insert(iter, merged); } else if (merge_next) { auto const merged = blk.merge(*next); auto const iter = free_blocks_.erase(next); free_blocks_.insert(iter, merged); } else { free_blocks_.insert(next, blk); } } /** * @brief Find the total free block size. * @return the total free block size. */ [[nodiscard]] std::size_t total_free_size() const { return total_memory_size(free_blocks_); } /** * @brief Find the max free block size. * @return the max free block size. */ [[nodiscard]] std::size_t max_free_size() const { if (free_blocks_.empty()) { return 0; } return std::max_element(free_blocks_.cbegin(), free_blocks_.cend(), block_size_compare)->size(); } private: /// Address-ordered set of free blocks. std::set<block> free_blocks_{}; }; /// Calculate the total free size of a set of superblocks. 
inline auto total_free_size(std::set<superblock> const& superblocks) { return std::accumulate( superblocks.cbegin(), superblocks.cend(), std::size_t{}, [](auto const& lhs, auto const& rhs) { return lhs + rhs.total_free_size(); }); } /// Find the max free size from a set of superblocks. inline auto max_free_size(std::set<superblock> const& superblocks) { std::size_t size{}; for (auto const& sblk : superblocks) { size = std::max(size, sblk.max_free_size()); } return size; }; /** * @brief The global arena for allocating memory from the upstream memory resource. * * The global arena is a shared memory pool from which other arenas allocate superblocks. * * @tparam Upstream Memory resource to use for allocating the arena. Implements * rmm::mr::device_memory_resource interface. */ template <typename Upstream> class global_arena final { public: /** * @brief Construct a global arena. * * @throws rmm::logic_error if `upstream_mr == nullptr`. * * @param upstream_mr The memory resource from which to allocate blocks for the pool * @param arena_size Size in bytes of the global arena. Defaults to half of the available memory * on the current device. */ global_arena(Upstream* upstream_mr, std::optional<std::size_t> arena_size) : upstream_mr_{upstream_mr} { RMM_EXPECTS(nullptr != upstream_mr_, "Unexpected null upstream pointer."); auto const size = rmm::detail::align_down(arena_size.value_or(default_size()), rmm::detail::CUDA_ALLOCATION_ALIGNMENT); RMM_EXPECTS(size >= superblock::minimum_size, "Arena size smaller than minimum superblock size."); initialize(size); } // Disable copy (and move) semantics. global_arena(global_arena const&) = delete; global_arena& operator=(global_arena const&) = delete; global_arena(global_arena&&) noexcept = delete; global_arena& operator=(global_arena&&) noexcept = delete; /** * @brief Destroy the global arena and deallocate all memory it allocated using the upstream * resource. */ ~global_arena() { std::lock_guard lock(mtx_); upstream_mr_->deallocate(upstream_block_.pointer(), upstream_block_.size()); } /** * @brief Should allocation of `size` bytes be handled by the global arena directly? * * @param size The size in bytes of the allocation. * @return bool True if the allocation should be handled by the global arena. */ bool handles(std::size_t size) const { return size > superblock::minimum_size; } /** * @brief Acquire a superblock that can fit a block of the given size. * * @param size The size in bytes of the allocation. * @return superblock The acquired superblock. */ superblock acquire(std::size_t size) { // Superblocks should only be acquired if the size is not directly handled by the global arena. RMM_LOGGING_ASSERT(!handles(size)); std::lock_guard lock(mtx_); return first_fit(size); } /** * @brief Release a superblock. * * @param s Superblock to be released. */ void release(superblock&& sblk) { RMM_LOGGING_ASSERT(sblk.is_valid()); std::lock_guard lock(mtx_); coalesce(std::move(sblk)); } /** * @brief Release a set of superblocks from a dying arena. * * @param superblocks The set of superblocks. */ void release(std::set<superblock>& superblocks) { std::lock_guard lock(mtx_); while (!superblocks.empty()) { auto sblk = std::move(superblocks.extract(superblocks.cbegin()).value()); RMM_LOGGING_ASSERT(sblk.is_valid()); coalesce(std::move(sblk)); } } /** * @brief Allocate a large block directly. * * @param size The size in bytes of the allocation. * @return void* Pointer to the newly allocated memory. 
*/ void* allocate(std::size_t size) { RMM_LOGGING_ASSERT(handles(size)); std::lock_guard lock(mtx_); auto sblk = first_fit(size); if (sblk.is_valid()) { auto blk = sblk.first_fit(size); superblocks_.insert(std::move(sblk)); return blk.pointer(); } return nullptr; } /** * @brief Deallocate memory pointed to by `ptr`. * * @param ptr Pointer to be deallocated. * @param size The size in bytes of the allocation. This must be equal to the value of `size` * that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation. * @return bool true if the allocation is found, false otherwise. */ bool deallocate(void* ptr, std::size_t size, cuda_stream_view stream) { RMM_LOGGING_ASSERT(handles(size)); stream.synchronize_no_throw(); return deallocate(ptr, size); } /** * @brief Deallocate memory pointed to by `ptr`. * * @param ptr Pointer to be deallocated. * @param bytes The size in bytes of the allocation. This must be equal to the * value of `bytes` that was passed to the `allocate` call that returned `ptr`. * @return bool true if the allocation is found, false otherwise. */ bool deallocate(void* ptr, std::size_t bytes) { std::lock_guard lock(mtx_); block const blk{ptr, bytes}; auto const iter = std::find_if(superblocks_.cbegin(), superblocks_.cend(), [&](auto const& sblk) { return sblk.contains(blk); }); if (iter == superblocks_.cend()) { return false; } auto sblk = std::move(superblocks_.extract(iter).value()); sblk.coalesce(blk); if (sblk.empty()) { coalesce(std::move(sblk)); } else { superblocks_.insert(std::move(sblk)); } return true; } /** * @brief Dump memory to log. * * @param logger the spdlog logger to use */ void dump_memory_log(std::shared_ptr<spdlog::logger> const& logger) const { std::lock_guard lock(mtx_); logger->info(" Arena size: {}", rmm::detail::bytes{upstream_block_.size()}); logger->info(" # superblocks: {}", superblocks_.size()); if (!superblocks_.empty()) { logger->debug(" Total size of superblocks: {}", rmm::detail::bytes{total_memory_size(superblocks_)}); auto const total_free = total_free_size(superblocks_); auto const max_free = max_free_size(superblocks_); auto const fragmentation = (1 - max_free / static_cast<double>(total_free)) * 100; logger->info(" Total free memory: {}", rmm::detail::bytes{total_free}); logger->info(" Largest block of free memory: {}", rmm::detail::bytes{max_free}); logger->info(" Fragmentation: {:.2f}%", fragmentation); auto index = 0; char* prev_end{}; for (auto const& sblk : superblocks_) { if (prev_end == nullptr) { prev_end = sblk.pointer(); } logger->debug( " Superblock {}: start={}, end={}, size={}, empty={}, # free blocks={}, max free={}, " "gap={}", index, fmt::ptr(sblk.pointer()), fmt::ptr(sblk.end()), rmm::detail::bytes{sblk.size()}, sblk.empty(), sblk.free_blocks(), rmm::detail::bytes{sblk.max_free_size()}, rmm::detail::bytes{static_cast<size_t>(sblk.pointer() - prev_end)}); prev_end = sblk.end(); index++; } } } private: /** * @brief Default size of the global arena if unspecified. * @return the default global arena size. */ constexpr std::size_t default_size() const { auto const [free, total] = rmm::detail::available_device_memory(); return free / 2; } /** * @brief Allocate space from upstream to initialize the arena. * * @param size The size to allocate. */ void initialize(std::size_t size) { upstream_block_ = {upstream_mr_->allocate(size), size}; superblocks_.emplace(upstream_block_.pointer(), size); } /** * @brief Get the first superblock that can fit a block of at least `size` bytes. 
* * Address-ordered first-fit has shown to perform slightly better than best-fit when it comes to * memory fragmentation, and slightly cheaper to implement. It is also used by some popular * allocators such as jemalloc. * * \see Johnstone, M. S., & Wilson, P. R. (1998). The memory fragmentation problem: Solved?. ACM * Sigplan Notices, 34(3), 26-36. * * @param size The number of bytes to allocate. * @param minimum_size The minimum size of the superblock required. * @return superblock A superblock that can fit at least `size` bytes, or empty if not found. */ superblock first_fit(std::size_t size) { auto const iter = std::find_if(superblocks_.cbegin(), superblocks_.cend(), [=](auto const& sblk) { return sblk.fits(size); }); if (iter == superblocks_.cend()) { return {}; } auto sblk = std::move(superblocks_.extract(iter).value()); auto const min_size = std::max(superblock::minimum_size, size); if (sblk.empty() && sblk.size() >= min_size + superblock::minimum_size) { // Split the superblock and put the remainder back. auto [head, tail] = sblk.split(min_size); superblocks_.insert(std::move(tail)); return std::move(head); } return sblk; } /** * @brief Coalesce the given superblock with other empty superblocks. * * @param sblk The superblock to coalesce. */ void coalesce(superblock&& sblk) { RMM_LOGGING_ASSERT(sblk.is_valid()); // Find the right place (in ascending address order) to insert the block. auto const next = superblocks_.lower_bound(sblk); auto const previous = next == superblocks_.cbegin() ? next : std::prev(next); // Coalesce with neighboring blocks. bool const merge_prev = previous != superblocks_.cend() && previous->is_contiguous_before(sblk); bool const merge_next = next != superblocks_.cend() && sblk.is_contiguous_before(*next); if (merge_prev && merge_next) { auto prev_sb = std::move(superblocks_.extract(previous).value()); auto next_sb = std::move(superblocks_.extract(next).value()); auto merged = prev_sb.merge(sblk).merge(next_sb); superblocks_.insert(std::move(merged)); } else if (merge_prev) { auto prev_sb = std::move(superblocks_.extract(previous).value()); auto merged = prev_sb.merge(sblk); superblocks_.insert(std::move(merged)); } else if (merge_next) { auto next_sb = std::move(superblocks_.extract(next).value()); auto merged = sblk.merge(next_sb); superblocks_.insert(std::move(merged)); } else { superblocks_.insert(std::move(sblk)); } } /// The upstream resource to allocate memory from. Upstream* upstream_mr_; /// Block allocated from upstream so that it can be quickly freed. block upstream_block_; /// Address-ordered set of superblocks. std::set<superblock> superblocks_; /// Mutex for exclusive lock. mutable std::mutex mtx_; }; /** * @brief An arena for allocating memory for a thread. * * An arena is a per-thread or per-non-default-stream memory pool. It allocates * superblocks from the global arena, and returns them when the superblocks become empty. * * @tparam Upstream Memory resource to use for allocating the global arena. Implements * rmm::mr::device_memory_resource interface. */ template <typename Upstream> class arena { public: /** * @brief Construct an `arena`. * * @param global_arena The global arena from which to allocate superblocks. */ explicit arena(global_arena<Upstream>& global_arena) : global_arena_{global_arena} {} // Disable copy (and move) semantics. 
arena(arena const&) = delete; arena& operator=(arena const&) = delete; arena(arena&&) noexcept = delete; arena& operator=(arena&&) noexcept = delete; ~arena() = default; /** * @brief Allocates memory of size at least `size` bytes. * * @param size The size in bytes of the allocation. * @return void* Pointer to the newly allocated memory. */ void* allocate(std::size_t size) { if (global_arena_.handles(size)) { return global_arena_.allocate(size); } std::lock_guard lock(mtx_); return get_block(size).pointer(); } /** * @brief Deallocate memory pointed to by `ptr`, and possibly return superblocks to upstream. * * @param ptr Pointer to be deallocated. * @param size The size in bytes of the allocation. This must be equal to the value of `size` * that was passed to the `allocate` call that returned `p`. * @param stream Stream on which to perform deallocation. * @return bool true if the allocation is found, false otherwise. */ bool deallocate(void* ptr, std::size_t size, cuda_stream_view stream) { if (global_arena_.handles(size) && global_arena_.deallocate(ptr, size, stream)) { return true; } return deallocate(ptr, size); } /** * @brief Deallocate memory pointed to by `ptr`, and possibly return superblocks to upstream. * * @param ptr Pointer to be deallocated. * @param size The size in bytes of the allocation. This must be equal to the value of `size` * that was passed to the `allocate` call that returned `p`. * @return bool true if the allocation is found, false otherwise. */ bool deallocate(void* ptr, std::size_t size) { std::lock_guard lock(mtx_); return deallocate_from_superblock({ptr, size}); } /** * @brief Clean the arena and release all superblocks to the global arena. */ void clean() { std::lock_guard lock(mtx_); global_arena_.release(superblocks_); superblocks_.clear(); } /** * @brief Defragment the arena and release empty superblock to the global arena. */ void defragment() { std::lock_guard lock(mtx_); while (true) { auto const iter = std::find_if( superblocks_.cbegin(), superblocks_.cend(), [](auto const& sblk) { return sblk.empty(); }); if (iter == superblocks_.cend()) { return; } global_arena_.release(std::move(superblocks_.extract(iter).value())); } } private: /** * @brief Get an available memory block of at least `size` bytes. * * @param size The number of bytes to allocate. * @return A block of memory of at least `size` bytes. */ block get_block(std::size_t size) { // Find the first-fit free block. auto const blk = first_fit(size); if (blk.is_valid()) { return blk; } // No existing larger blocks available, so grow the arena and obtain a superblock. return expand_arena(size); } /** * @brief Get the first free block of at least `size` bytes. * * Address-ordered first-fit has shown to perform slightly better than best-fit when it comes to * memory fragmentation, and slightly cheaper to implement. It is also used by some popular * allocators such as jemalloc. * * \see Johnstone, M. S., & Wilson, P. R. (1998). The memory fragmentation problem: Solved?. ACM * Sigplan Notices, 34(3), 26-36. * * @param size The number of bytes to allocate. * @return block A block of memory of at least `size` bytes, or an empty block if not found. 
*/ block first_fit(std::size_t size) { auto const iter = std::find_if(superblocks_.cbegin(), superblocks_.cend(), [size](auto const& sblk) { return sblk.fits(size); }); if (iter == superblocks_.cend()) { return {}; } auto sblk = std::move(superblocks_.extract(iter).value()); auto const blk = sblk.first_fit(size); superblocks_.insert(std::move(sblk)); return blk; } /** * @brief Deallocate a block from the superblock it belongs to. * * @param blk The block to deallocate. * @param stream The stream to use for deallocation. * @return true if the block is found. */ bool deallocate_from_superblock(block const& blk) { auto const iter = std::find_if(superblocks_.cbegin(), superblocks_.cend(), [&](auto const& sblk) { return sblk.contains(blk); }); if (iter == superblocks_.cend()) { return false; } auto sblk = std::move(superblocks_.extract(iter).value()); sblk.coalesce(blk); superblocks_.insert(std::move(sblk)); return true; } /** * @brief Allocate space from upstream to supply the arena and return a block. * * @param size The number of bytes to allocate. * @return block A block of memory of at least `size` bytes. */ block expand_arena(std::size_t size) { auto sblk = global_arena_.acquire(size); if (sblk.is_valid()) { RMM_LOGGING_ASSERT(sblk.size() >= superblock::minimum_size); auto const blk = sblk.first_fit(size); superblocks_.insert(std::move(sblk)); return blk; } return {}; } /// The global arena to allocate superblocks from. global_arena<Upstream>& global_arena_; /// Acquired superblocks. std::set<superblock> superblocks_; /// Mutex for exclusive lock. mutable std::mutex mtx_; }; /** * @brief RAII-style cleaner for an arena. * * This is useful when a thread is about to terminate, and it contains a per-thread arena. * * @tparam Upstream Memory resource to use for allocating the global arena. Implements * rmm::mr::device_memory_resource interface. */ template <typename Upstream> class arena_cleaner { public: explicit arena_cleaner(std::shared_ptr<arena<Upstream>> const& arena) : arena_(arena) {} // Disable copy (and move) semantics. arena_cleaner(arena_cleaner const&) = delete; arena_cleaner& operator=(arena_cleaner const&) = delete; arena_cleaner(arena_cleaner&&) noexcept = delete; arena_cleaner& operator=(arena_cleaner&&) = delete; ~arena_cleaner() { if (!arena_.expired()) { auto arena_ptr = arena_.lock(); arena_ptr->clean(); } } private: /// A non-owning pointer to the arena that may need cleaning. std::weak_ptr<arena<Upstream>> arena_; }; } // namespace rmm::mr::detail::arena
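To make the size-class rounding at the top of this header concrete: `align_to_size_class` relies on `std::lower_bound` over a sorted table, so a request is rounded up to the first class that is at least as large. The snippet below is a self-contained illustration with an abbreviated table (the real one has 117 entries), not the RMM function itself.

```cpp
// Self-contained illustration of the rounding done by align_to_size_class:
// std::lower_bound on a sorted table yields the first size class >= the request.
#include <algorithm>
#include <array>
#include <cstddef>
#include <iostream>
#include <limits>

int main()
{
  // Abbreviated table; the real table in arena.hpp has 117 entries.
  static constexpr std::array<std::size_t, 9> size_classes{
    256, 512, 768, 1024, 1280, 1536, 1792, 2048,
    std::numeric_limits<std::size_t>::max()};  // catch-all, as in the real table

  constexpr std::array<std::size_t, 4> requests{1, 256, 257, 2000};
  for (auto const request : requests) {
    auto const bound = std::lower_bound(size_classes.begin(), size_classes.end(), request);
    std::cout << request << " -> " << *bound << "\n";
  }
  // Prints: 1 -> 256, 256 -> 256, 257 -> 512, 2000 -> 2048
  return 0;
}
```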
0
rapidsai_public_repos/rmm/include/rmm/mr/device
rapidsai_public_repos/rmm/include/rmm/mr/device/detail/free_list.hpp
/* * Copyright (c) 2019-2021, NVIDIA CORPORATION. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include <algorithm> #include <iostream> #include <list> namespace rmm::mr::detail { struct block_base { void* ptr{}; ///< Raw memory pointer block_base() = default; block_base(void* ptr) : ptr{ptr} {}; /// Returns the raw pointer for this block [[nodiscard]] inline void* pointer() const { return ptr; } /// Returns true if this block is valid (non-null), false otherwise [[nodiscard]] inline bool is_valid() const { return pointer() != nullptr; } #ifdef RMM_DEBUG_PRINT /// Prints the block to stdout inline void print() const { std::cout << pointer(); } #endif }; #ifdef RMM_DEBUG_PRINT /// Print block_base on an ostream inline std::ostream& operator<<(std::ostream& out, const block_base& block) { out << block.pointer(); return out; } #endif /** * @brief Base class defining an interface for a list of free memory blocks. * * Derived classes typically provide additional methods such as the following (see * fixed_size_free_list.hpp and coalescing_free_list.hpp). However this is not a required interface. * * - `void insert(block_type const& b) // insert a block into the free list` * - `void insert(free_list&& other) // insert / merge another free list` * - `block_type get_block(std::size_t size) // get a block of at least size bytes * - `void print() // print the block` * * @tparam list_type the type of the internal list data structure. */ template <typename BlockType, typename ListType = std::list<BlockType>> class free_list { public: free_list() = default; virtual ~free_list() = default; free_list(free_list const&) = delete; free_list& operator=(free_list const&) = delete; free_list(free_list&&) = delete; free_list& operator=(free_list&&) = delete; using block_type = BlockType; using list_type = ListType; using size_type = typename list_type::size_type; using iterator = typename list_type::iterator; using const_iterator = typename list_type::const_iterator; /// beginning of the free list [[nodiscard]] iterator begin() noexcept { return blocks.begin(); } /// beginning of the free list [[nodiscard]] const_iterator begin() const noexcept { return blocks.begin(); } /// beginning of the free list [[nodiscard]] const_iterator cbegin() const noexcept { return blocks.cbegin(); } /// end of the free list [[nodiscard]] iterator end() noexcept { return blocks.end(); } /// beginning of the free list [[nodiscard]] const_iterator end() const noexcept { return blocks.end(); } /// beginning of the free list [[nodiscard]] const_iterator cend() const noexcept { return blocks.cend(); } /** * @brief The size of the free list in blocks. * * @return size_type The number of blocks in the free list. */ [[nodiscard]] size_type size() const noexcept { return blocks.size(); } /** * @brief checks whether the free_list is empty. * * @return true If there are blocks in the free_list. * @return false If there are no blocks in the free_list. 
*/ [[nodiscard]] bool is_empty() const noexcept { return blocks.empty(); } /** * @brief Removes the block indicated by `iter` from the free list. * * @param iter An iterator referring to the block to erase. */ void erase(const_iterator iter) { blocks.erase(iter); } /** * @brief Erase all blocks from the free_list. * */ void clear() noexcept { blocks.clear(); } #ifdef RMM_DEBUG_PRINT /** * @brief Print all blocks in the free_list. */ void print() const { std::cout << size() << std::endl; for (auto const& block : blocks) { std::cout << block << std::endl; } } #endif protected: /** * @brief Insert a block in the free list before the specified position * * @param pos iterator before which the block will be inserted. pos may be the end() iterator. * @param block The block to insert. */ void insert(const_iterator pos, block_type const& block) { blocks.insert(pos, block); } /** * @brief Inserts a list of blocks in the free list before the specified position * * @param pos iterator before which the block will be inserted. pos may be the end() iterator. * @param other The free list to insert. */ void splice(const_iterator pos, free_list&& other) { return blocks.splice(pos, std::move(other.blocks)); } /** * @brief Appends the given block to the end of the free list. * * @param block The block to append. */ void push_back(const block_type& block) { blocks.push_back(block); } /** * @brief Appends the given block to the end of the free list. `b` is moved to the new element. * * @param block The block to append. */ void push_back(block_type&& block) { blocks.push_back(std::move(block)); } /** * @brief Removes the first element of the free list. If there are no elements in the free list, * the behavior is undefined. * * References and iterators to the erased element are invalidated. */ void pop_front() { blocks.pop_front(); } private: list_type blocks; // The internal container of blocks }; } // namespace rmm::mr::detail
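As an illustration of the protected interface described above, here is a sketch of a trivial LIFO free list derived from this base class. It assumes RMM's headers are on the include path; note that the `detail` headers are internal implementation details rather than a supported public API, so this is purely for understanding the design (the in-tree `fixed_size_free_list` in the next file is the real, FIFO counterpart).

```cpp
// Sketch of a minimal LIFO free list built on rmm::mr::detail::free_list.
// Assumes RMM headers are on the include path; detail headers are internal API.
#include <rmm/mr/device/detail/free_list.hpp>

#include <cstddef>
#include <iostream>

struct lifo_free_list : rmm::mr::detail::free_list<rmm::mr::detail::block_base> {
  // Insert at the front so the most recently freed block is reused first (LIFO).
  void insert(block_type const& blk) { free_list::insert(cbegin(), blk); }

  // The base class leaves block selection to the derived class.
  block_type get_block(std::size_t /*size*/)
  {
    if (is_empty()) { return block_type{}; }
    block_type blk = *begin();
    pop_front();
    return blk;
  }
};

int main()
{
  int a{};
  int b{};
  lifo_free_list list;
  list.insert({&a});
  list.insert({&b});
  // The block freed last (&b) is handed out first.
  std::cout << (list.get_block(0).pointer() == &b) << "\n";  // prints 1
  return 0;
}
```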
0
rapidsai_public_repos/rmm/include/rmm/mr/device
rapidsai_public_repos/rmm/include/rmm/mr/device/detail/fixed_size_free_list.hpp
/*
 * Copyright (c) 2020-2021, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
#pragma once

#include <rmm/mr/device/detail/free_list.hpp>

#include <algorithm>
#include <cstddef>
#include <iostream>

namespace rmm::mr::detail {

struct fixed_size_free_list : free_list<block_base> {
  fixed_size_free_list()           = default;
  ~fixed_size_free_list() override = default;

  fixed_size_free_list(fixed_size_free_list const&)            = delete;
  fixed_size_free_list& operator=(fixed_size_free_list const&) = delete;
  fixed_size_free_list(fixed_size_free_list&&)                 = delete;
  fixed_size_free_list& operator=(fixed_size_free_list&&)      = delete;

  /**
   * @brief Construct a new free_list from the range defined by input iterators
   *
   * @tparam InputIt Input iterator
   * @param first The start of the range to insert into the free_list
   * @param last The end of the range to insert into the free_list
   */
  template <class InputIt>
  fixed_size_free_list(InputIt first, InputIt last)
  {
    std::for_each(first, last, [this](block_type const& block) { insert(block); });
  }

  /**
   * @brief Inserts a block into the free_list. All blocks are the same size, so the block is
   * simply appended; no ordering or coalescing is performed.
   *
   * @param block The block to insert.
   */
  void insert(block_type const& block) { push_back(block); }

  /**
   * @brief Inserts blocks from another free list into this free_list.
   *
   * @param other The free_list to insert into this free_list.
   */
  void insert(free_list&& other) { splice(cend(), std::move(other)); }

  /**
   * @brief Returns the first block in the free list.
   *
   * @param size The size in bytes of the desired block (unused).
   * @return A block large enough to store `size` bytes.
   */
  block_type get_block(std::size_t size)
  {
    if (is_empty()) { return block_type{}; }

    block_type block = *begin();
    pop_front();
    return block;
  }
};

}  // namespace rmm::mr::detail
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/pyproject.toml
# Copyright (c) 2021-2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. [build-system] build-backend = "setuptools.build_meta" requires = [ "cmake>=3.26.4", "cuda-python>=11.7.1,<12.0a0", "cython>=3.0.0", "ninja", "scikit-build>=0.13.1", "setuptools>=61.0.0", "tomli", "wheel", ] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`. [project] name = "rmm" dynamic = ["version"] description = "rmm - RAPIDS Memory Manager" readme = { file = "README.md", content-type = "text/markdown" } authors = [ { name = "NVIDIA Corporation" }, ] license = { text = "Apache 2.0" } requires-python = ">=3.9" dependencies = [ "cuda-python>=11.7.1,<12.0a0", "numba>=0.57", "numpy>=1.21", ] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`. classifiers = [ "Intended Audience :: Developers", "Topic :: Database", "Topic :: Scientific/Engineering", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", ] [project.optional-dependencies] test = [ "pytest", "pytest-cov", ] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../dependencies.yaml and run `rapids-dependency-file-generator`. [project.urls] Homepage = "https://github.com/rapidsai/rmm" [tool.black] line-length = 79 target-version = ["py39"] include = '\.py?$' exclude = ''' /( thirdparty | \.eggs | \.git | \.hg | \.mypy_cache | \.tox | \.venv | _build | buck-out | build | dist )/ ''' [tool.isort] line_length = 79 multi_line_output = 3 include_trailing_comma = true force_grid_wrap = 0 combine_as_imports = true order_by_type = true known_first_party = [ "rmm", ] default_section = "THIRDPARTY" sections = [ "FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER", ] skip = [ "thirdparty", ".eggs", ".git", ".hg", ".mypy_cache", ".tox", ".venv", "_build", "buck-out", "build", "dist", "__init__.py", ] [tool.setuptools] license-files = ["LICENSE"] [tool.setuptools.dynamic] version = {file = "rmm/VERSION"}
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/CMakeLists.txt
# ============================================================================= # Copyright (c) 2022, NVIDIA CORPORATION. # # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software distributed under the License # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express # or implied. See the License for the specific language governing permissions and limitations under # the License. # ============================================================================= cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR) set(rmm_version 24.02.00) include(../fetch_rapids.cmake) project( rmm-python VERSION ${rmm_version} LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C # language to be enabled here. The test project that is built in scikit-build to verify # various linking options for the python library is hardcoded to build with C, so until # that is fixed we need to keep C. C CXX) option(FIND_RMM_CPP "Search for existing RMM C++ installations before defaulting to local files" OFF) option(RMM_BUILD_WHEELS "Whether this build is generating a Python wheel." OFF) # If the user requested it we attempt to find RMM. if(FIND_RMM_CPP) find_package(rmm ${rmm_version}) else() set(rmm_FOUND OFF) endif() if(NOT rmm_FOUND) set(BUILD_TESTS OFF) set(BUILD_BENCHMARKS OFF) set(_exclude_from_all "") if(RMM_BUILD_WHEELS) # Statically link dependencies if building wheels set(CUDA_STATIC_RUNTIME ON) # Don't install the rmm C++ targets into wheels set(_exclude_from_all EXCLUDE_FROM_ALL) endif() add_subdirectory(../ rmm-cpp ${_exclude_from_all}) endif() include(rapids-cython) rapids_cython_init() add_compile_definitions("SPDLOG_ACTIVE_LEVEL=SPDLOG_LEVEL_${RMM_LOGGING_LEVEL}") add_subdirectory(rmm/_cuda) add_subdirectory(rmm/_lib)
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/README.md
# <div align="left"><img src="img/rapids_logo.png" width="90px"/>&nbsp;RMM: RAPIDS Memory Manager</div> **NOTE:** For the latest stable [README.md](https://github.com/rapidsai/rmm/blob/main/README.md) ensure you are on the `main` branch. ## Resources - [RMM Reference Documentation](https://docs.rapids.ai/api/rmm/stable/): Python API reference, tutorials, and topic guides. - [librmm Reference Documentation](https://docs.rapids.ai/api/librmm/stable/): C/C++ CUDA library API reference. - [Getting Started](https://rapids.ai/start.html): Instructions for installing RMM. - [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate. - [GitHub repository](https://github.com/rapidsai/rmm): Download the RMM source code. - [Issue tracker](https://github.com/rapidsai/rmm/issues): Report issues or request features. ## Overview Achieving optimal performance in GPU-centric workflows frequently requires customizing how host and device memory are allocated. For example, using "pinned" host memory for asynchronous host <-> device memory transfers, or using a device memory pool sub-allocator to reduce the cost of dynamic device memory allocation. The goal of the RAPIDS Memory Manager (RMM) is to provide: - A common interface that allows customizing [device](#device_memory_resource) and [host](#host_memory_resource) memory allocation - A collection of [implementations](#available-resources) of the interface - A collection of [data structures](#device-data-structures) that use the interface for memory allocation For information on the interface RMM provides and how to use RMM in your C++ code, see [below](#using-rmm-in-c). For a walkthrough about the design of the RAPIDS Memory Manager, read [Fast, Flexible Allocation for NVIDIA CUDA with RAPIDS Memory Manager](https://developer.nvidia.com/blog/fast-flexible-allocation-for-cuda-with-rapids-memory-manager/) on the NVIDIA Developer Blog. ## Installation ### Conda RMM can be installed with Conda ([miniconda](https://conda.io/miniconda.html), or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel: ```bash conda install -c rapidsai -c conda-forge -c nvidia rmm cuda-version=12.0 ``` We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD of our latest development branch. Note: RMM is supported only on Linux, and only tested with Python versions 3.9 and 3.10. Note: The RMM package from Conda requires building with GCC 9 or later. Otherwise, your application may fail to build. See the [Get RAPIDS version picker](https://rapids.ai/start.html) for more OS and version info. 
## Building from Source ### Get RMM Dependencies Compiler requirements: * `gcc` version 9.3+ * `nvcc` version 11.4+ * `cmake` version 3.26.4+ CUDA/GPU requirements: * CUDA 11.4+ * Pascal architecture or better You can obtain CUDA from [https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads) Python requirements: * `scikit-build` * `cuda-python` * `cython` For more details, see [pyproject.toml](python/pyproject.toml) ### Script to build RMM from source To install RMM from source, ensure the dependencies are met and follow the steps below: - Clone the repository and submodules ```bash $ git clone --recurse-submodules https://github.com/rapidsai/rmm.git $ cd rmm ``` - Create the conda development environment `rmm_dev` ```bash # create the conda environment (assuming in base `rmm` directory) $ conda env create --name rmm_dev --file conda/environments/all_cuda-118_arch-x86_64.yaml # activate the environment $ conda activate rmm_dev ``` - Build and install `librmm` using cmake & make. CMake depends on the `nvcc` executable being on your path or defined in `CUDACXX` environment variable. ```bash $ mkdir build # make a build directory $ cd build # enter the build directory $ cmake .. -DCMAKE_INSTALL_PREFIX=/install/path # configure cmake ... use $CONDA_PREFIX if you're using Anaconda $ make -j # compile the library librmm.so ... '-j' will start a parallel job using the number of physical cores available on your system $ make install # install the library librmm.so to '/install/path' ``` - Building and installing `librmm` and `rmm` using build.sh. Build.sh creates build dir at root of git repository. build.sh depends on the `nvcc` executable being on your path or defined in `CUDACXX` environment variable. ```bash $ ./build.sh -h # Display help and exit $ ./build.sh -n librmm # Build librmm without installing $ ./build.sh -n rmm # Build rmm without installing $ ./build.sh -n librmm rmm # Build librmm and rmm without installing $ ./build.sh librmm rmm # Build and install librmm and rmm ``` - To run tests (Optional): ```bash $ cd build (if you are not already in build directory) $ make test ``` - Build, install, and test the `rmm` python package, in the `python` folder: ```bash $ python setup.py build_ext --inplace $ python setup.py install $ pytest -v ``` Done! You are ready to develop for the RMM OSS project. ### Caching third-party dependencies RMM uses [CPM.cmake](https://github.com/TheLartians/CPM.cmake) to handle third-party dependencies like spdlog, Thrust, GoogleTest, GoogleBenchmark. In general you won't have to worry about it. If CMake finds an appropriate version on your system, it uses it (you can help it along by setting `CMAKE_PREFIX_PATH` to point to the installed location). Otherwise those dependencies will be downloaded as part of the build. If you frequently start new builds from scratch, consider setting the environment variable `CPM_SOURCE_CACHE` to an external download directory to avoid repeated downloads of the third-party dependencies. ## Using RMM in a downstream CMake project The installed RMM library provides a set of config files that makes it easy to integrate RMM into your own CMake project. In your `CMakeLists.txt`, just add ```cmake find_package(rmm [VERSION]) # ... target_link_libraries(<your-target> (PRIVATE|PUBLIC) rmm::rmm) ``` Since RMM is a header-only library, this does not actually link RMM, but it makes the headers available and pulls in transitive dependencies. 
If RMM is not installed in a default location, use `CMAKE_PREFIX_PATH` or `rmm_ROOT` to point to its location. One of RMM's dependencies is the Thrust library, so the above automatically pulls in `Thrust` by means of a dependency on the `rmm::Thrust` target. By default it uses the standard configuration of Thrust. If you want to customize it, you can set the variables `THRUST_HOST_SYSTEM` and `THRUST_DEVICE_SYSTEM`; see [Thrust's CMake documentation](https://github.com/NVIDIA/thrust/blob/main/thrust/cmake/README.md). # Using RMM in C++ The first goal of RMM is to provide a common interface for device and host memory allocation. This allows both _users_ and _implementers_ of custom allocation logic to program to a single interface. To this end, RMM defines two abstract interface classes: - [`rmm::mr::device_memory_resource`](#device_memory_resource) for device memory allocation - [`rmm::mr::host_memory_resource`](#host_memory_resource) for host memory allocation These classes are based on the [`std::pmr::memory_resource`](https://en.cppreference.com/w/cpp/memory/memory_resource) interface class introduced in C++17 for polymorphic memory allocation. ## `device_memory_resource` `rmm::mr::device_memory_resource` is the base class that defines the interface for allocating and freeing device memory. It has two key functions: 1. `void* device_memory_resource::allocate(std::size_t bytes, cuda_stream_view s)` - Returns a pointer to an allocation of at least `bytes` bytes. 2. `void device_memory_resource::deallocate(void* p, std::size_t bytes, cuda_stream_view s)` - Reclaims a previous allocation of size `bytes` pointed to by `p`. - `p` *must* have been returned by a previous call to `allocate(bytes)`, otherwise behavior is undefined It is up to a derived class to provide implementations of these functions. See [available resources](#available-resources) for example `device_memory_resource` derived classes. Unlike `std::pmr::memory_resource`, `rmm::mr::device_memory_resource` does not allow specifying an alignment argument. All allocations are required to be aligned to at least 256B. Furthermore, `device_memory_resource` adds an additional `cuda_stream_view` argument to allow specifying the stream on which to perform the (de)allocation. ## `cuda_stream_view` and `cuda_stream` `rmm::cuda_stream_view` is a simple non-owning wrapper around a CUDA `cudaStream_t`. This wrapper's purpose is to provide strong type safety for stream types. (`cudaStream_t` is an alias for a pointer, which can lead to ambiguity in APIs when it is assigned `0`.) All RMM stream-ordered APIs take a `rmm::cuda_stream_view` argument. `rmm::cuda_stream` is a simple owning wrapper around a CUDA `cudaStream_t`. This class provides RAII semantics (constructor creates the CUDA stream, destructor destroys it). An `rmm::cuda_stream` can never represent the CUDA default stream or per-thread default stream; it only ever represents a single non-default stream. `rmm::cuda_stream` cannot be copied, but can be moved. ## `cuda_stream_pool` `rmm::cuda_stream_pool` provides fast access to a pool of CUDA streams. This class can be used to create a set of `cuda_stream` objects whose lifetime is equal to the `cuda_stream_pool`. Using the stream pool can be faster than creating the streams on the fly. The size of the pool is configurable. Depending on this size, multiple calls to `cuda_stream_pool::get_stream()` may return instances of `rmm::cuda_stream_view` that represent identical CUDA streams. 
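The following is a minimal sketch of how these stream types are typically used together with a `device_memory_resource` (the pool size and byte counts are arbitrary illustrative values, not defaults):

```c++
#include <rmm/cuda_stream.hpp>
#include <rmm/cuda_stream_pool.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

void stream_example()
{
  rmm::cuda_stream stream;                     // owning RAII wrapper around a non-default stream
  rmm::cuda_stream_view view = stream.view();  // non-owning view of the same stream

  // Stream-ordered allocation and deallocation through the current device resource
  rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource();
  void* ptr = mr->allocate(1024, view);
  // ... launch kernels that use `ptr` on `view.value()` ...
  mr->deallocate(ptr, 1024, view);

  // A pool of streams; repeated calls to get_stream() may return views of the same stream
  rmm::cuda_stream_pool stream_pool{8};
  rmm::cuda_stream_view pooled_stream = stream_pool.get_stream();
}
```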
### Thread Safety All current device memory resources are thread safe unless documented otherwise. More specifically, calls to memory resource `allocate()` and `deallocate()` methods are safe with respect to calls to either of these functions from other threads. They are _not_ thread safe with respect to construction and destruction of the memory resource object. Note that a class `thread_safe_resource_adapter` is provided which can be used to adapt a memory resource that is not thread safe to be thread safe (as described above). This adapter is not needed with any current RMM device memory resources. ### Stream-ordered Memory Allocation `rmm::mr::device_memory_resource` is a base class that provides stream-ordered memory allocation. This allows optimizations such as re-using memory deallocated on the same stream without the overhead of synchronization. A call to `device_memory_resource::allocate(bytes, stream_a)` returns a pointer that is valid to use on `stream_a`. Using the memory on a different stream (say `stream_b`) is Undefined Behavior unless the two streams are first synchronized, for example by using `cudaStreamSynchronize(stream_a)` or by recording a CUDA event on `stream_a` and then calling `cudaStreamWaitEvent(stream_b, event)`. The stream specified to `device_memory_resource::deallocate` should be a stream on which it is valid to use the deallocated memory immediately for another allocation. Typically this is the stream on which the allocation was *last* used before the call to `deallocate`. The passed stream may be used internally by a `device_memory_resource` for managing available memory with minimal synchronization, and it may also be synchronized at a later time, for example using a call to `cudaStreamSynchronize()`. For this reason, it is Undefined Behavior to destroy a CUDA stream that is passed to `device_memory_resource::deallocate`. If the stream on which the allocation was last used has been destroyed before calling `deallocate` or it is known that it will be destroyed, it is likely better to synchronize the stream (before destroying it) and then pass a different stream to `deallocate` (e.g. the default stream). Note that device memory data structures such as `rmm::device_buffer` and `rmm::device_uvector` follow these stream-ordered memory allocation semantics and rules. For further information about stream-ordered memory allocation semantics, read [Using the NVIDIA CUDA Stream-Ordered Memory Allocator](https://developer.nvidia.com/blog/using-cuda-stream-ordered-memory-allocator-part-1/) on the NVIDIA Developer Blog. ### Available Resources RMM provides several `device_memory_resource` derived classes to satisfy various user requirements. For more detailed information about these resources, see their respective documentation. #### `cuda_memory_resource` Allocates and frees device memory using `cudaMalloc` and `cudaFree`. #### `managed_memory_resource` Allocates and frees device memory using `cudaMallocManaged` and `cudaFree`. Note that `managed_memory_resource` cannot be used with NVIDIA Virtual GPU Software (vGPU, for use with virtual machines or hypervisors) because [NVIDIA CUDA Unified Memory is not supported by NVIDIA vGPU](https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#cuda-open-cl-support-vgpu). #### `pool_memory_resource` A coalescing, best-fit pool sub-allocator. #### `fixed_size_memory_resource` A memory resource that can only allocate a single fixed size. Average allocation and deallocation cost is constant. 
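As a minimal sketch of how a fixed-size resource might be constructed on top of a `cuda_memory_resource` (the explicit 1 MiB block size below is an illustrative choice rather than a required value):

```c++
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/fixed_size_memory_resource.hpp>

void fixed_size_example()
{
  rmm::mr::cuda_memory_resource cuda_mr;

  // Hands out blocks of a single fixed size, drawing larger chunks from `cuda_mr` upstream.
  rmm::mr::fixed_size_memory_resource<rmm::mr::cuda_memory_resource> fixed_mr{&cuda_mr, 1 << 20};

  void* p = fixed_mr.allocate(4096);  // any request up to the block size consumes one block
  fixed_mr.deallocate(p, 4096);
}
```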
#### `binning_memory_resource` Configurable to use multiple upstream memory resources for allocations that fall within different bin sizes. Often configured with multiple bins backed by `fixed_size_memory_resource`s and a single `pool_memory_resource` for allocations larger than the largest bin size. ### Default Resources and Per-device Resources RMM users commonly need to configure a `device_memory_resource` object to use for all allocations where another resource has not explicitly been provided. A common example is configuring a `pool_memory_resource` to use for all allocations to get fast dynamic allocation. To enable this use case, RMM provides the concept of a "default" `device_memory_resource`. This resource is used when another is not explicitly provided. Accessing and modifying the default resource is done through two functions: - `device_memory_resource* get_current_device_resource()` - Returns a pointer to the default resource for the current CUDA device. - The initial default memory resource is an instance of `cuda_memory_resource`. - This function is thread safe with respect to concurrent calls to it and `set_current_device_resource()`. - For more explicit control, you can use `get_per_device_resource()`, which takes a device ID. - `device_memory_resource* set_current_device_resource(device_memory_resource* new_mr)` - Updates the default memory resource pointer for the current CUDA device to `new_mr` - Returns the previous default resource pointer - If `new_mr` is `nullptr`, then resets the default resource to `cuda_memory_resource` - This function is thread safe with respect to concurrent calls to it and `get_current_device_resource()` - For more explicit control, you can use `set_per_device_resource()`, which takes a device ID. #### Example ```c++ rmm::mr::cuda_memory_resource cuda_mr; // Construct a resource that uses a coalescing best-fit pool allocator rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr}; rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr` rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource(); // Points to `pool_mr` ``` #### Multiple Devices A `device_memory_resource` should only be used when the active CUDA device is the same device that was active when the `device_memory_resource` was created. Otherwise behavior is undefined. If a `device_memory_resource` is used with a stream associated with a different CUDA device than the device for which the memory resource was created, behavior is undefined. Creating a `device_memory_resource` for each device requires care to set the current device before creating each resource, and to maintain the lifetime of the resources as long as they are set as per-device resources. Here is an example loop that creates `unique_ptr`s to `pool_memory_resource` objects for each device and sets them as the per-device resource for that device. 
```c++ std::vector<unique_ptr<pool_memory_resource>> per_device_pools; for(int i = 0; i < N; ++i) { cudaSetDevice(i); // set device i before creating MR // Use a vector of unique_ptr to maintain the lifetime of the MRs per_device_pools.push_back(std::make_unique<pool_memory_resource>()); // Set the per-device resource for device i set_per_device_resource(cuda_device_id{i}, &per_device_pools.back()); } ``` Note that the CUDA device that is current when creating a `device_memory_resource` must also be current any time that `device_memory_resource` is used to deallocate memory, including in a destructor. This affects RAII classes like `rmm::device_buffer` and `rmm::device_uvector`. Here's an (incorrect) example that assumes the above example loop has been run to create a `pool_memory_resource` for each device. A correct example adds a call to `cudaSetDevice(0)` on the line of the error comment. ```c++ { RMM_CUDA_TRY(cudaSetDevice(0)); rmm::device_buffer buf_a(16); { RMM_CUDA_TRY(cudaSetDevice(1)); rmm::device_buffer buf_b(16); } // Error: when buf_a is destroyed, the current device must be 0, but it is 1 } ``` ### Allocators C++ interfaces commonly allow customizable memory allocation through an [`Allocator`](https://en.cppreference.com/w/cpp/named_req/Allocator) object. RMM provides several `Allocator` and `Allocator`-like classes. #### `polymorphic_allocator` A [stream-ordered](#stream-ordered-memory-allocation) allocator similar to [`std::pmr::polymorphic_allocator`](https://en.cppreference.com/w/cpp/memory/polymorphic_allocator). Unlike the standard C++ `Allocator` interface, the `allocate` and `deallocate` functions take a `cuda_stream_view` indicating the stream on which the (de)allocation occurs. #### `stream_allocator_adaptor` `stream_allocator_adaptor` can be used to adapt a stream-ordered allocator to present a standard `Allocator` interface to consumers that may not be designed to work with a stream-ordered interface. Example: ```c++ rmm::cuda_stream stream; rmm::mr::polymorphic_allocator<int> stream_alloc; // Constructs an adaptor that forwards all (de)allocations to `stream_alloc` on `stream`. auto adapted = rmm::mr::make_stream_allocator_adaptor(stream_alloc, stream); // Allocates 100 bytes using `stream_alloc` on `stream` auto p = adapted.allocate(100); ... // Deallocates using `stream_alloc` on `stream` adapted.deallocate(p,100); ``` #### `thrust_allocator` `thrust_allocator` is a device memory allocator that uses the strongly typed `thrust::device_ptr`, making it usable with containers like `thrust::device_vector`. See [below](#using-rmm-with-thrust) for more information on using RMM with Thrust. ## Device Data Structures ### `device_buffer` An untyped, uninitialized RAII class for stream ordered device memory allocation. #### Example ```c++ cuda_stream_view s{...}; // Allocates at least 100 bytes on stream `s` using the *default* resource rmm::device_buffer b{100,s}; void* p = b.data(); // Raw, untyped pointer to underlying device memory kernel<<<..., s.value()>>>(b.data()); // `b` is only safe to use on `s` rmm::mr::device_memory_resource * mr = new my_custom_resource{...}; // Allocates at least 100 bytes on stream `s` using the resource `mr` rmm::device_buffer b2{100, s, mr}; ``` ### `device_uvector<T>` A typed, uninitialized RAII class for allocation of a contiguous set of elements in device memory. Similar to a `thrust::device_vector`, but as an optimization, does not default initialize the contained elements. 
This optimization restricts the types `T` to trivially copyable types. #### Example ```c++ cuda_stream_view s{...}; // Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the // default resource rmm::device_uvector<int32_t> v(100, s); // Initializes the elements to 0 thrust::uninitialized_fill(thrust::cuda::par.on(s.value()), v.begin(), v.end(), int32_t{0}); rmm::mr::device_memory_resource * mr = new my_custom_resource{...}; // Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the resource `mr` rmm::device_uvector<int32_t> v2{100, s, mr}; ``` ### `device_scalar` A typed, RAII class for allocation of a single element in device memory. This is similar to a `device_uvector` with a single element, but provides convenience functions like modifying the value in device memory from the host, or retrieving the value from device to host. #### Example ```c++ cuda_stream_view s{...}; // Allocates uninitialized storage for a single `int32_t` in device memory rmm::device_scalar<int32_t> a{s}; a.set_value(42, s); // Updates the value in device memory to `42` on stream `s` kernel<<<...,s.value()>>>(a.data()); // Pass raw pointer to underlying element in device memory int32_t v = a.value(s); // Retrieves the value from device to host on stream `s` ``` ## `host_memory_resource` `rmm::mr::host_memory_resource` is the base class that defines the interface for allocating and freeing host memory. Similar to `device_memory_resource`, it has two key functions for (de)allocation: 1. `void* host_memory_resource::allocate(std::size_t bytes, std::size_t alignment)` - Returns a pointer to an allocation of at least `bytes` bytes aligned to the specified `alignment` 2. `void host_memory_resource::deallocate(void* p, std::size_t bytes, std::size_t alignment)` - Reclaims a previous allocation of size `bytes` pointed to by `p`. Unlike `device_memory_resource`, the `host_memory_resource` interface and behavior is identical to `std::pmr::memory_resource`. ### Available Resources #### `new_delete_resource` Uses the global `operator new` and `operator delete` to allocate host memory. #### `pinned_memory_resource` Allocates "pinned" host memory using `cuda(Malloc/Free)Host`. ## Host Data Structures RMM does not currently provide any data structures that interface with `host_memory_resource`. In the future, RMM will provide a similar host-side structure like `device_buffer` and an allocator that can be used with STL containers. ## Using RMM with Thrust RAPIDS and other CUDA libraries make heavy use of Thrust. Thrust uses CUDA device memory in two situations: 1. As the backing store for `thrust::device_vector`, and 2. As temporary storage inside some algorithms, such as `thrust::sort`. RMM provides `rmm::mr::thrust_allocator` as a conforming Thrust allocator that uses `device_memory_resource`s. ### Thrust Algorithms To instruct a Thrust algorithm to use `rmm::mr::thrust_allocator` to allocate temporary storage, you can use the custom Thrust CUDA device execution policy: `rmm::exec_policy(stream)`. ```c++ thrust::sort(rmm::exec_policy(stream, ...); ``` The first `stream` argument is the `stream` to use for `rmm::mr::thrust_allocator`. The second `stream` argument is what should be used to execute the Thrust algorithm. These two arguments must be identical. ## Logging RMM includes two forms of logging. Memory event logging and debug logging. 
### Memory Event Logging and `logging_resource_adaptor` Memory event logging writes details of every allocation or deallocation to a CSV (comma-separated value) file. In C++, Memory Event Logging is enabled by using the `logging_resource_adaptor` as a wrapper around any other `device_memory_resource` object. Each row in the log represents either an allocation or a deallocation. The columns of the file are "Thread, Time, Action, Pointer, Size, Stream". The CSV output files of the `logging_resource_adaptor` can be used as input to `REPLAY_BENCHMARK`, which is available when building RMM from source, in the `gbenchmarks` folder in the build directory. This log replayer can be useful for profiling and debugging allocator issues. The following C++ example creates a logging version of a `cuda_memory_resource` that outputs the log to the file "logs/test1.csv". ```c++ std::string filename{"logs/test1.csv"}; rmm::mr::cuda_memory_resource upstream; rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&upstream, filename}; ``` If a file name is not specified, the environment variable `RMM_LOG_FILE` is queried for the file name. If `RMM_LOG_FILE` is not set, then an exception is thrown by the `logging_resource_adaptor` constructor. In Python, memory event logging is enabled when the `logging` parameter of `rmm.reinitialize()` is set to `True`. The log file name can be set using the `log_file_name` parameter. See `help(rmm.reinitialize)` for full details. ### Debug Logging RMM includes a debug logger which can be enabled to log trace and debug information to a file. This information can show when errors occur, when additional memory is allocated from upstream resources, etc. The default log file is `rmm_log.txt` in the current working directory, but the environment variable `RMM_DEBUG_LOG_FILE` can be set to specify the path and file name. There is a CMake configuration variable `RMM_LOGGING_LEVEL`, which can be set to enable compilation of more detailed logging. The default is `INFO`. Available levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `CRITICAL` and `OFF`. The log relies on the [spdlog](https://github.com/gabime/spdlog.git) library. Note that to see logging below the `INFO` level, the application must also set the logging level at run time. C++ applications must must call `rmm::logger().set_level()`, for example to enable all levels of logging down to `TRACE`, call `rmm::logger().set_level(spdlog::level::trace)` (and compile librmm with `-DRMM_LOGGING_LEVEL=TRACE`). Python applications must call `rmm.set_logging_level()`, for example to enable all levels of logging down to `TRACE`, call `rmm.set_logging_level("trace")` (and compile the RMM Python module with `-DRMM_LOGGING_LEVEL=TRACE`). Note that debug logging is different from the CSV memory allocation logging provided by `rmm::mr::logging_resource_adapter`. The latter is for logging a history of allocation / deallocation actions which can be useful for replay with RMM's replay benchmark. ## RMM and CUDA Memory Bounds Checking Memory allocations taken from a memory resource that allocates a pool of memory (such as `pool_memory_resource` and `arena_memory_resource`) are part of the same low-level CUDA memory allocation. Therefore, out-of-bounds or misaligned accesses to these allocations are not likely to be detected by CUDA tools such as [CUDA Compute Sanitizer](https://docs.nvidia.com/cuda/compute-sanitizer/index.html) memcheck. 
Exceptions to this are `cuda_memory_resource`, which wraps `cudaMalloc`, and `cuda_async_memory_resource`, which uses `cudaMallocAsync` with CUDA's built-in memory pool functionality (CUDA 11.2 or later required). Illegal memory accesses to memory allocated by these resources are detectable with Compute Sanitizer Memcheck. It may be possible in the future to add support for memory bounds checking with other memory resources using NVTX APIs. ## Using RMM in Python Code There are two ways to use RMM in Python code: 1. Using the `rmm.DeviceBuffer` API to explicitly create and manage device memory allocations 2. Transparently via external libraries such as CuPy and Numba RMM provides a `MemoryResource` abstraction to control _how_ device memory is allocated in both the above uses. ### DeviceBuffers A DeviceBuffer represents an **untyped, uninitialized device memory allocation**. DeviceBuffers can be created by providing the size of the allocation in bytes: ```python >>> import rmm >>> buf = rmm.DeviceBuffer(size=100) ``` The size of the allocation and the memory address associated with it can be accessed via the `.size` and `.ptr` attributes respectively: ```python >>> buf.size 100 >>> buf.ptr 140202544726016 ``` DeviceBuffers can also be created by copying data from host memory: ```python >>> import rmm >>> import numpy as np >>> a = np.array([1, 2, 3], dtype='float64') >>> buf = rmm.DeviceBuffer.to_device(a.tobytes()) >>> buf.size 24 ``` Conversely, the data underlying a DeviceBuffer can be copied to the host: ```python >>> np.frombuffer(buf.tobytes()) array([1., 2., 3.]) ``` ### MemoryResource objects `MemoryResource` objects are used to configure how device memory allocations are made by RMM. By default if a `MemoryResource` is not set explicitly, RMM uses the `CudaMemoryResource`, which uses `cudaMalloc` for allocating device memory. `rmm.reinitialize()` provides an easy way to initialize RMM with specific memory resource options across multiple devices. See `help(rmm.reinitialize)` for full details. For lower-level control, the `rmm.mr.set_current_device_resource()` function can be used to set a different MemoryResource for the current CUDA device. For example, enabling the `ManagedMemoryResource` tells RMM to use `cudaMallocManaged` instead of `cudaMalloc` for allocating memory: ```python >>> import rmm >>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource()) ``` > :warning: The default resource must be set for any device **before** > allocating any device memory on that device. Setting or changing the > resource after device allocations have been made can lead to unexpected > behaviour or crashes. See [Multiple Devices](#multiple-devices) As another example, `PoolMemoryResource` allows you to allocate a large "pool" of device memory up-front. Subsequent allocations will draw from this pool of already allocated memory. The example below shows how to construct a PoolMemoryResource with an initial size of 1 GiB and a maximum size of 4 GiB. The pool uses `CudaMemoryResource` as its underlying ("upstream") memory resource: ```python >>> import rmm >>> pool = rmm.mr.PoolMemoryResource( ... rmm.mr.CudaMemoryResource(), ... initial_pool_size=2**30, ... maximum_pool_size=2**32 ... 
) >>> rmm.mr.set_current_device_resource(pool) ``` Other MemoryResources include: * `FixedSizeMemoryResource` for allocating fixed blocks of memory * `BinningMemoryResource` for allocating blocks within specified "bin" sizes from different memory resources MemoryResources are highly configurable and can be composed together in different ways. See `help(rmm.mr)` for more information. ## Using RMM with third-party libraries ### Using RMM with CuPy You can configure [CuPy](https://cupy.dev/) to use RMM for memory allocations by setting the CuPy CUDA allocator to `rmm_cupy_allocator`: ```python >>> from rmm.allocators.cupy import rmm_cupy_allocator >>> import cupy >>> cupy.cuda.set_allocator(rmm_cupy_allocator) ``` **Note:** This only configures CuPy to use the current RMM resource for allocations. It does not initialize nor change the current resource, e.g., enabling a memory pool. See [here](#memoryresource-objects) for more information on changing the current memory resource. ### Using RMM with Numba You can configure Numba to use RMM for memory allocations using the Numba [EMM Plugin](https://numba.readthedocs.io/en/stable/cuda/external-memory.html#setting-emm-plugin). This can be done in two ways: 1. Setting the environment variable `NUMBA_CUDA_MEMORY_MANAGER`: ```python $ NUMBA_CUDA_MEMORY_MANAGER=rmm.allocators.numba python (args) ``` 2. Using the `set_memory_manager()` function provided by Numba: ```python >>> from numba import cuda >>> from rmm.allocators.numba import RMMNumbaManager >>> cuda.set_memory_manager(RMMNumbaManager) ``` **Note:** This only configures Numba to use the current RMM resource for allocations. It does not initialize nor change the current resource, e.g., enabling a memory pool. See [here](#memoryresource-objects) for more information on changing the current memory resource. ### Using RMM with PyTorch [PyTorch](https://pytorch.org/docs/stable/notes/cuda.html) can use RMM for memory allocation. For example, to configure PyTorch to use an RMM-managed pool: ```python import rmm from rmm.allocators.torch import rmm_torch_allocator import torch rmm.reinitialize(pool_allocator=True) torch.cuda.memory.change_current_allocator(rmm_torch_allocator) ``` PyTorch and RMM will now share the same memory pool. You can, of course, use a custom memory resource with PyTorch as well: ```python import rmm from rmm.allocators.torch import rmm_torch_allocator import torch # note that you can configure PyTorch to use RMM either before or # after changing RMM's memory resource. PyTorch will use whatever # memory resource is configured to be the "current" memory resource at # the time of allocation. torch.cuda.change_current_allocator(rmm_torch_allocator) # configure RMM to use a managed memory resource, wrapped with a # statistics resource adaptor that can report information about the # amount of memory allocated: mr = rmm.mr.StatisticsResourceAdaptor(rmm.mr.ManagedMemoryResource()) rmm.mr.set_current_device_resource(mr) x = torch.tensor([1, 2]).cuda() # the memory resource reports information about PyTorch allocations: mr.allocation_counts Out[6]: {'current_bytes': 16, 'current_count': 1, 'peak_bytes': 16, 'peak_count': 1, 'total_bytes': 16, 'total_count': 1} ```
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/setup.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.

from setuptools import find_packages
from skbuild import setup

packages = find_packages(include=["rmm*"])

setup(
    packages=packages,
    package_data={key: ["VERSION", "*.pxd"] for key in packages},
    zip_safe=False,
)
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/LICENSE
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
rapidsai_public_repos/rmm
rapidsai_public_repos/rmm/python/.coveragerc
# Configuration file for Python coverage tests
[run]
include = rmm/*
omit = rmm/tests/*
0
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/cpp_api.rst
API Reference
=============

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   librmm_docs/index
0
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/guide.md
# User Guide Achieving optimal performance in GPU-centric workflows frequently requires customizing how GPU ("device") memory is allocated. RMM is a package that enables you to allocate device memory in a highly configurable way. For example, it enables you to allocate and use pools of GPU memory, or to use [managed memory](https://developer.nvidia.com/blog/unified-memory-cuda-beginners/) for allocations. You can also easily configure other libraries like Numba and CuPy to use RMM for allocating device memory. ## Installation See the project [README](https://github.com/rapidsai/rmm) for how to install RMM. ## Using RMM There are two ways to use RMM in Python code: 1. Using the `rmm.DeviceBuffer` API to explicitly create and manage device memory allocations 2. Transparently via external libraries such as CuPy and Numba RMM provides a `MemoryResource` abstraction to control _how_ device memory is allocated in both the above uses. ### DeviceBuffers A DeviceBuffer represents an **untyped, uninitialized device memory allocation**. DeviceBuffers can be created by providing the size of the allocation in bytes: ```python >>> import rmm >>> buf = rmm.DeviceBuffer(size=100) ``` The size of the allocation and the memory address associated with it can be accessed via the `.size` and `.ptr` attributes respectively: ```python >>> buf.size 100 >>> buf.ptr 140202544726016 ``` DeviceBuffers can also be created by copying data from host memory: ```python >>> import rmm >>> import numpy as np >>> a = np.array([1, 2, 3], dtype='float64') >>> buf = rmm.DeviceBuffer.to_device(a.view("int8")) # to_device expects an 8-bit type or `bytes` >>> buf.size 24 ``` Conversely, the data underlying a DeviceBuffer can be copied to the host: ```python >>> np.frombuffer(buf.tobytes()) array([1., 2., 3.]) ``` ### MemoryResource objects `MemoryResource` objects are used to configure how device memory allocations are made by RMM. By default if a `MemoryResource` is not set explicitly, RMM uses the `CudaMemoryResource`, which uses `cudaMalloc` for allocating device memory. `rmm.reinitialize()` provides an easy way to initialize RMM with specific memory resource options across multiple devices. See `help(rmm.reinitialize)` for full details. For lower-level control, the `rmm.mr.set_current_device_resource()` function can be used to set a different MemoryResource for the current CUDA device. For example, enabling the `ManagedMemoryResource` tells RMM to use `cudaMallocManaged` instead of `cudaMalloc` for allocating memory: ```python >>> import rmm >>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource()) ``` > :warning: The default resource must be set for any device **before** > allocating any device memory on that device. Setting or changing the > resource after device allocations have been made can lead to unexpected > behaviour or crashes. As another example, `PoolMemoryResource` allows you to allocate a large "pool" of device memory up-front. Subsequent allocations will draw from this pool of already allocated memory. The example below shows how to construct a PoolMemoryResource with an initial size of 1 GiB and a maximum size of 4 GiB. The pool uses `CudaMemoryResource` as its underlying ("upstream") memory resource: ```python >>> import rmm >>> pool = rmm.mr.PoolMemoryResource( ... rmm.mr.CudaMemoryResource(), ... initial_pool_size=2**30, ... maximum_pool_size=2**32 ... 
) >>> rmm.mr.set_current_device_resource(pool) ``` Similarly, to use a pool of managed memory: ```python >>> import rmm >>> pool = rmm.mr.PoolMemoryResource( ... rmm.mr.ManagedMemoryResource(), ... initial_pool_size=2**30, ... maximum_pool_size=2**32 ... ) >>> rmm.mr.set_current_device_resource(pool) ``` Other MemoryResources include: * `FixedSizeMemoryResource` for allocating fixed blocks of memory * `BinningMemoryResource` for allocating blocks within specified "bin" sizes from different memory resources MemoryResources are highly configurable and can be composed together in different ways. See `help(rmm.mr)` for more information. ## Using RMM with third-party libraries A number of libraries provide hooks to control their device allocations. RMM provides implementations of these for [CuPy](https://cupy.dev), [numba](https://numba.readthedocs.io/en/stable/), and [PyTorch](https://pytorch.org) in the `rmm.allocators` submodule. All these approaches configure the library to use the _current_ RMM memory resource for device allocations. ### Using RMM with CuPy You can configure [CuPy](https://cupy.dev/) to use RMM for memory allocations by setting the CuPy CUDA allocator to `rmm.allocators.cupy.rmm_cupy_allocator`: ```python >>> from rmm.allocators.cupy import rmm_cupy_allocator >>> import cupy >>> cupy.cuda.set_allocator(rmm_cupy_allocator) ``` ### Using RMM with Numba You can configure [Numba](https://numba.readthedocs.io/en/stable/) to use RMM for memory allocations using the Numba [EMM Plugin](https://numba.readthedocs.io/en/stable/cuda/external-memory.html#setting-emm-plugin). This can be done in two ways: 1. Setting the environment variable `NUMBA_CUDA_MEMORY_MANAGER`: ```bash $ NUMBA_CUDA_MEMORY_MANAGER=rmm.allocators.numba python (args) ``` 2. Using the `set_memory_manager()` function provided by Numba: ```python >>> from numba import cuda >>> from rmm.allocators.numba import RMMNumbaManager >>> cuda.set_memory_manager(RMMNumbaManager) ``` ### Using RMM with PyTorch You can configure [PyTorch](https://pytorch.org/docs/stable/notes/cuda.html) to use RMM for memory allocations using their by configuring the current allocator. ```python from rmm.allocators.torch import rmm_torch_allocator import torch torch.cuda.memory.change_current_allocator(rmm_torch_allocator) ```
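After the allocator has been changed, device tensors draw their memory from whatever RMM memory resource is current, for example (mirroring the example in the project README):

```python
import rmm
import torch
from rmm.allocators.torch import rmm_torch_allocator

rmm.reinitialize(pool_allocator=True)  # optional: back PyTorch with an RMM pool
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)

x = torch.tensor([1, 2]).cuda()  # this tensor's memory now comes from RMM
```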
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/python.rst
Welcome to the rmm Python documentation!
========================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   guide.md
   python_api.rst
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/cpp.rst
Welcome to the rmm C++ documentation!
=====================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   cpp_api.rst
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/Makefile
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    = -n -v -W --keep-going
SPHINXBUILD  ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/conf.py
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import re

# -- Project information -----------------------------------------------------

project = "rmm"
copyright = "2020-2023, NVIDIA"
author = "NVIDIA"

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = "24.02"
# The full version, including alpha/beta/rc tags.
release = "24.02.00"

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinxcontrib.jquery",
    "sphinx.ext.intersphinx",
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",
    "sphinx_copybutton",
    "numpydoc",
    "sphinx_markdown_tables",
    "IPython.sphinxext.ipython_console_highlighting",
    "IPython.sphinxext.ipython_directive",
    "nbsphinx",
    "recommonmark",
    "breathe",
]

# Breathe Configuration
breathe_projects = {"librmm": "../../doxygen/xml"}
breathe_default_project = "librmm"

copybutton_prompt_text = ">>> "

ipython_mplbackend = "str"

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = {".rst": "restructuredtext", ".md": "markdown"}

# The master toctree document.
master_doc = "index"

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"

# on_rtd is whether we are on readthedocs.org
on_rtd = os.environ.get("READTHEDOCS", None) == "True"

if not on_rtd:
    # only import and set the theme if we're building docs locally
    # otherwise, readthedocs.org uses their theme by default,
    # so no need to specify it
    import sphinx_rtd_theme

    html_theme = "sphinx_rtd_theme"
    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = "rmmdoc"

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        master_doc,
        "rmm.tex",
        "RMM Documentation",
        "NVIDIA Corporation",
        "manual",
    )
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "rmm", "RMM Documentation", [author], 1)]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (
        master_doc,
        "rmm",
        "RMM Documentation",
        author,
        "rmm",
        "One line description of project.",
        "Miscellaneous",
    )
]

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
    "numba": ("https://numba.readthedocs.io/en/stable", None),
}

# Config numpydoc
numpydoc_show_inherited_class_members = True
numpydoc_class_members_toctree = False

autoclass_content = "init"

nitpick_ignore = [
    ("py:class", "size_t"),
    ("py:class", "void"),
]


def on_missing_reference(app, env, node, contnode):
    if (refid := node.get("refid")) is not None and "hpp" in refid:
        # We don't want to link to C++ header files directly from the
        # Sphinx docs, those are pages that doxygen automatically
        # generates. Adding those would clutter the Sphinx output.
        return contnode

    names_to_skip = [
        # External names
        "cudaStream_t",
        "cudaStreamLegacy",
        "cudaStreamPerThread",
        "thrust",
        "spdlog",
        "stream_ref",
        # libcu++ names
        "cuda",
        "cuda::mr",
        "resource",
        "resource_ref",
        "async_resource",
        "async_resource_ref",
        "device_accessible",
        "host_accessible",
        "forward_property",
        "enable_if_t",
        # Unknown types
        "int64_t",
        "int8_t",
        # Internal objects
        "detail",
        "RMM_EXEC_CHECK_DISABLE",
        # Template types
        "Base",
    ]
    if (
        node["refdomain"] == "cpp"
        and (reftarget := node.get("reftarget")) is not None
    ):
        if any(toskip in reftarget for toskip in names_to_skip):
            return contnode

        # Strip template parameters and just use the base type.
        if match := re.search("(.*)<.*>", reftarget):
            reftarget = match.group(1)

        # Try to find the target prefixed with e.g. namespaces in case that's
        # all that's missing. Include the empty prefix in case we're searching
        # for a stripped template.
        extra_prefixes = ["rmm::", "rmm::mr::", "mr::", ""]
        for (name, dispname, type, docname, anchor, priority) in env.domains[
            "cpp"
        ].get_objects():
            for prefix in extra_prefixes:
                if (
                    name == f"{prefix}{reftarget}"
                    or f"{prefix}{name}" == reftarget
                ):
                    return env.domains["cpp"].resolve_xref(
                        env,
                        docname,
                        app.builder,
                        node["reftype"],
                        name,
                        node,
                        contnode,
                    )

    return None


def setup(app):
    app.add_js_file("copybutton_pydocs.js")
    app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
    app.add_js_file(
        "https://docs.rapids.ai/assets/js/custom.js", loading_method="defer"
    )
    app.connect("missing-reference", on_missing_reference)
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/index.rst
.. rmm documentation master file, created by
   sphinx-quickstart on Thu Nov 19 13:16:00 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to rmm's documentation!
===============================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   Python <python.rst>
   C++ <cpp.rst>

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/docs/python_api.rst
API Reference
=============

Module Contents
---------------

.. automodule:: rmm
   :members:
   :undoc-members:
   :show-inheritance:

Memory Resources
----------------

.. automodule:: rmm.mr
   :members:
   :inherited-members:
   :undoc-members:
   :show-inheritance:

Memory Allocators
-----------------

.. automodule:: rmm.allocators.cupy
   :members:
   :undoc-members:
   :show-inheritance:

.. automodule:: rmm.allocators.numba
   :members:
   :inherited-members:
   :undoc-members:
   :show-inheritance:

.. automodule:: rmm.allocators.torch
   :members:
   :undoc-members:
   :show-inheritance:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/errors.rst
Errors
======

.. doxygengroup:: errors
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/cuda_streams.rst
CUDA Streams
============

.. doxygengroup:: cuda_streams
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/cuda_device_management.rst
CUDA Device Management
======================

.. doxygengroup:: cuda_device_management
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/memory_resources.rst
Memory Resources
================

.. doxygennamespace:: rmm::mr
   :desc-only:

.. doxygengroup:: memory_resources
   :members:

.. doxygengroup:: device_memory_resources
   :members:

.. doxygengroup:: host_memory_resources
   :members:

.. doxygengroup:: device_resource_adaptors
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/data_containers.rst
Data Containers
===============

.. doxygengroup:: data_containers
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/thrust_integrations.rst
Thrust Integration
==================

.. doxygengroup:: thrust_integrations
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/logging.rst
Logging
=======

.. doxygengroup:: logging
   :members:
rapidsai_public_repos/rmm/python/docs
rapidsai_public_repos/rmm/python/docs/librmm_docs/index.rst
.. rmm documentation master file, created by
   sphinx-quickstart on Thu Nov 19 13:16:00 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

librmm Documentation
====================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   memory_resources
   data_containers
   thrust_integrations
   cuda_device_management
   cuda_streams
   errors
   logging

.. doxygennamespace:: rmm
   :desc-only:

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`
rapidsai_public_repos/rmm/python
rapidsai_public_repos/rmm/python/rmm/_version.py
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import importlib.resources

__version__ = (
    importlib.resources.files("rmm").joinpath("VERSION").read_text().strip()
)
__git_commit__ = ""
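For illustration, a minimal hedged check of the resulting attribute, assuming an installed `rmm` package that re-exports `__version__` at the top level; the exact string depends on the packaged `VERSION` file (the `24.02.00` shown here matches the release declared elsewhere in these docs):

```python
>>> import rmm
>>> rmm.__version__  # read from the packaged VERSION file at import time
'24.02.00'
```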