<!-- rapidsai_public_repos/cudf/README.md -->
# <div align="left"><img src="img/rapids_logo.png" width="90px"/> cuDF - GPU DataFrames</div>
## 📢 cuDF can now be used as a no-code-change accelerator for pandas! To learn more, see [here](https://rapids.ai/cudf-pandas/)!
cuDF is a GPU DataFrame library for loading, joining, aggregating,
filtering, and otherwise manipulating data. cuDF leverages
[libcudf](https://docs.rapids.ai/api/libcudf/stable/), a
blazing-fast C++/CUDA dataframe library and the [Apache
Arrow](https://arrow.apache.org/) columnar format to provide a
GPU-accelerated pandas API.
You can import `cudf` directly and use it like `pandas`:
```python
import cudf
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = cudf.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
Or, you can use cuDF as a no-code-change accelerator for pandas, using
[`cudf.pandas`](https://docs.rapids.ai/api/cudf/stable/cudf_pandas).
`cudf.pandas` supports 100% of the pandas API, utilizing cuDF for
supported operations and falling back to pandas when needed:
```python
%load_ext cudf.pandas # pandas operations now use the GPU!
import pandas as pd
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = pd.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
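Outside of Jupyter or IPython, the accelerator can also be enabled from a regular Python script. A minimal sketch, based on the `cudf.pandas` documentation linked above:
```python
import cudf.pandas
cudf.pandas.install()  # enable the pandas accelerator before pandas is imported

import pandas as pd  # subsequent pandas operations run on the GPU where supported
```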
## Resources
- [Try cudf.pandas now](https://nvda.ws/rapids-cudf): Explore `cudf.pandas` on a free GPU-enabled instance on Google Colab!
- [Install](https://docs.rapids.ai/install): Instructions for installing cuDF and other [RAPIDS](https://rapids.ai) libraries.
- [cudf (Python) documentation](https://docs.rapids.ai/api/cudf/stable/)
- [libcudf (C++/CUDA) documentation](https://docs.rapids.ai/api/libcudf/stable/)
- [RAPIDS Community](https://rapids.ai/learn-more/#get-involved): Get help, contribute, and collaborate.
## Installation
### CUDA/GPU requirements
* CUDA 11.2+
* NVIDIA driver 450.80.02+
* Pascal architecture or better (Compute Capability >=6.0)
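If you are unsure whether your GPU meets these requirements, one quick check (not from the cuDF docs, just a sketch) uses Numba, which cuDF already depends on:
```python
# Hedged sketch: query the current GPU's compute capability with Numba
# (a cuDF dependency); requires a working CUDA driver.
from numba import cuda

dev = cuda.get_current_device()
major, minor = dev.compute_capability
print(f"{dev.name.decode()}: compute capability {major}.{minor}")
print("Pascal or better (>= 6.0):", (major, minor) >= (6, 0))
```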
### Conda
cuDF can be installed with conda (via [miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cudf=24.02 python=3.10 cuda-version=11.8
```
We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.
Note: cuDF is supported only on Linux, and with Python versions 3.9 and later.
See the [RAPIDS installation guide](https://docs.rapids.ai/install) for more OS and version info.
## Build/Install from Source
See build [instructions](CONTRIBUTING.md#setting-up-your-build-environment).
## Contributing
Please see our [guide for contributing to cuDF](CONTRIBUTING.md).
<!-- rapidsai_public_repos/cudf/CHANGELOG.md -->
# cuDF 23.10.00 (11 Oct 2023)
## 🚨 Breaking Changes
- Expose stream parameter in public nvtext ngram APIs ([#14061](https://github.com/rapidsai/cudf/pull/14061)) [@davidwendt](https://github.com/davidwendt)
- Raise `MixedTypeError` when a column of mixed-dtype is being constructed ([#14050](https://github.com/rapidsai/cudf/pull/14050)) [@galipremsagar](https://github.com/galipremsagar)
- Raise `NotImplementedError` for `MultiIndex.to_series` ([#14049](https://github.com/rapidsai/cudf/pull/14049)) [@galipremsagar](https://github.com/galipremsagar)
- Create table_input_metadata from a table_metadata ([#13920](https://github.com/rapidsai/cudf/pull/13920)) [@etseidl](https://github.com/etseidl)
- Enable RLE boolean encoding for v2 Parquet files ([#13886](https://github.com/rapidsai/cudf/pull/13886)) [@etseidl](https://github.com/etseidl)
- Change `NA` to `NaT` for `datetime` and `timedelta` types ([#13868](https://github.com/rapidsai/cudf/pull/13868)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `any`, `all` reduction behavior for `axis=None` and warn for other reductions ([#13831](https://github.com/rapidsai/cudf/pull/13831)) [@galipremsagar](https://github.com/galipremsagar)
- Add minhash support for MurmurHash3_x64_128 ([#13796](https://github.com/rapidsai/cudf/pull/13796)) [@davidwendt](https://github.com/davidwendt)
- Remove the libcudf cudf::offset_type type ([#13788](https://github.com/rapidsai/cudf/pull/13788)) [@davidwendt](https://github.com/davidwendt)
- Raise error when trying to join `datetime` and `timedelta` types with other types ([#13786](https://github.com/rapidsai/cudf/pull/13786)) [@galipremsagar](https://github.com/galipremsagar)
- Update to Cython 3.0.0 ([#13777](https://github.com/rapidsai/cudf/pull/13777)) [@vyasr](https://github.com/vyasr)
- Raise error on constructing an array from mixed type inputs ([#13768](https://github.com/rapidsai/cudf/pull/13768)) [@galipremsagar](https://github.com/galipremsagar)
- Enforce deprecations in `23.10` ([#13732](https://github.com/rapidsai/cudf/pull/13732)) [@galipremsagar](https://github.com/galipremsagar)
- Upgrade to arrow 12 ([#13728](https://github.com/rapidsai/cudf/pull/13728)) [@galipremsagar](https://github.com/galipremsagar)
- Remove Arrow dependency from the `datasource.hpp` public header ([#13698](https://github.com/rapidsai/cudf/pull/13698)) [@vuule](https://github.com/vuule)
## 🐛 Bug Fixes
- Fix inaccurate ceil/floor and inaccurate rescaling casts of fixed-point values. ([#14242](https://github.com/rapidsai/cudf/pull/14242)) [@bdice](https://github.com/bdice)
- Fix inaccuracy in decimal128 rounding. ([#14233](https://github.com/rapidsai/cudf/pull/14233)) [@bdice](https://github.com/bdice)
- Workaround for illegal instruction error in sm90 for warp intrinsics with mask ([#14201](https://github.com/rapidsai/cudf/pull/14201)) [@karthikeyann](https://github.com/karthikeyann)
- Fix pytorch related pytest ([#14198](https://github.com/rapidsai/cudf/pull/14198)) [@galipremsagar](https://github.com/galipremsagar)
- Pin to `aws-sdk-cpp<1.11` ([#14173](https://github.com/rapidsai/cudf/pull/14173)) [@pentschev](https://github.com/pentschev)
- Fix assert failure for range window functions ([#14168](https://github.com/rapidsai/cudf/pull/14168)) [@mythrocks](https://github.com/mythrocks)
- Fix Memcheck error found in JSON_TEST JsonReaderTest.ErrorStrings ([#14164](https://github.com/rapidsai/cudf/pull/14164)) [@karthikeyann](https://github.com/karthikeyann)
- Fix calls to copy_bitmask to pass stream parameter ([#14158](https://github.com/rapidsai/cudf/pull/14158)) [@davidwendt](https://github.com/davidwendt)
- Fix DataFrame from Series with different CategoricalIndexes ([#14157](https://github.com/rapidsai/cudf/pull/14157)) [@mroeschke](https://github.com/mroeschke)
- Pin to numpy<1.25 and numba<0.58 to avoid errors and deprecation warnings-as-errors. ([#14156](https://github.com/rapidsai/cudf/pull/14156)) [@bdice](https://github.com/bdice)
- Fix kernel launch error for cudf::io::orc::gpu::rowgroup_char_counts_kernel ([#14139](https://github.com/rapidsai/cudf/pull/14139)) [@davidwendt](https://github.com/davidwendt)
- Don't sort columns for DataFrame init from list of Series ([#14136](https://github.com/rapidsai/cudf/pull/14136)) [@mroeschke](https://github.com/mroeschke)
- Fix DataFrame.values with no columns but index ([#14134](https://github.com/rapidsai/cudf/pull/14134)) [@mroeschke](https://github.com/mroeschke)
- Avoid circular cimports in _lib/cpp/reduce.pxd ([#14125](https://github.com/rapidsai/cudf/pull/14125)) [@vyasr](https://github.com/vyasr)
- Add support for nested dict in `DataFrame` constructor ([#14119](https://github.com/rapidsai/cudf/pull/14119)) [@galipremsagar](https://github.com/galipremsagar)
- Restrict iterables of `DataFrame`'s as input to `DataFrame` constructor ([#14118](https://github.com/rapidsai/cudf/pull/14118)) [@galipremsagar](https://github.com/galipremsagar)
- Allow `numeric_only=True` for reduction operations on numeric types ([#14111](https://github.com/rapidsai/cudf/pull/14111)) [@galipremsagar](https://github.com/galipremsagar)
- Preserve name of the column while initializing a `DataFrame` ([#14110](https://github.com/rapidsai/cudf/pull/14110)) [@galipremsagar](https://github.com/galipremsagar)
- Correct numerous 20054-D: dynamic initialization errors found on arm+12.2 ([#14108](https://github.com/rapidsai/cudf/pull/14108)) [@robertmaynard](https://github.com/robertmaynard)
- Drop `kwargs` from `Series.count` ([#14106](https://github.com/rapidsai/cudf/pull/14106)) [@galipremsagar](https://github.com/galipremsagar)
- Fix naming issues with `Index.to_frame` and `MultiIndex.to_frame` APIs ([#14105](https://github.com/rapidsai/cudf/pull/14105)) [@galipremsagar](https://github.com/galipremsagar)
- Only use memory resources that haven't been freed ([#14103](https://github.com/rapidsai/cudf/pull/14103)) [@robertmaynard](https://github.com/robertmaynard)
- Add support for `__round__` in `Series` and `DataFrame` ([#14099](https://github.com/rapidsai/cudf/pull/14099)) [@galipremsagar](https://github.com/galipremsagar)
- Validate ignore_index type in drop_duplicates ([#14098](https://github.com/rapidsai/cudf/pull/14098)) [@mroeschke](https://github.com/mroeschke)
- Fix renaming `Series` and `Index` ([#14080](https://github.com/rapidsai/cudf/pull/14080)) [@galipremsagar](https://github.com/galipremsagar)
- Raise NotImplementedError in to_datetime if Z (or tz component) in string ([#14074](https://github.com/rapidsai/cudf/pull/14074)) [@mroeschke](https://github.com/mroeschke)
- Raise NotImplementedError for datetime strings with UTC offset ([#14070](https://github.com/rapidsai/cudf/pull/14070)) [@mroeschke](https://github.com/mroeschke)
- Update pyarrow-related dispatch logic in dask_cudf ([#14069](https://github.com/rapidsai/cudf/pull/14069)) [@rjzamora](https://github.com/rjzamora)
- Use `conda mambabuild` rather than `mamba mambabuild` ([#14067](https://github.com/rapidsai/cudf/pull/14067)) [@wence-](https://github.com/wence-)
- Raise NotImplementedError in to_datetime with dayfirst without infer_format ([#14058](https://github.com/rapidsai/cudf/pull/14058)) [@mroeschke](https://github.com/mroeschke)
- Fix various issues in `Index.intersection` ([#14054](https://github.com/rapidsai/cudf/pull/14054)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `Index.difference` to match with pandas ([#14053](https://github.com/rapidsai/cudf/pull/14053)) [@galipremsagar](https://github.com/galipremsagar)
- Fix empty string column construction ([#14052](https://github.com/rapidsai/cudf/pull/14052)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `IntervalIndex.union` to preserve type-metadata ([#14051](https://github.com/rapidsai/cudf/pull/14051)) [@galipremsagar](https://github.com/galipremsagar)
- Raise `MixedTypeError` when a column of mixed-dtype is being constructed ([#14050](https://github.com/rapidsai/cudf/pull/14050)) [@galipremsagar](https://github.com/galipremsagar)
- Raise `NotImplementedError` for `MultiIndex.to_series` ([#14049](https://github.com/rapidsai/cudf/pull/14049)) [@galipremsagar](https://github.com/galipremsagar)
- Ignore compile_commands.json ([#14048](https://github.com/rapidsai/cudf/pull/14048)) [@harrism](https://github.com/harrism)
- Raise TypeError for any non-parseable argument in to_datetime ([#14044](https://github.com/rapidsai/cudf/pull/14044)) [@mroeschke](https://github.com/mroeschke)
- Raise NotImplementedError for to_datetime with z format ([#14037](https://github.com/rapidsai/cudf/pull/14037)) [@mroeschke](https://github.com/mroeschke)
- Implement `sort_remaining` for `sort_index` ([#14033](https://github.com/rapidsai/cudf/pull/14033)) [@wence-](https://github.com/wence-)
- Raise NotImplementedError for Categoricals with timezones ([#14032](https://github.com/rapidsai/cudf/pull/14032)) [@mroeschke](https://github.com/mroeschke)
- Temporary fix Parquet metadata with empty value string being ignored from writing ([#14026](https://github.com/rapidsai/cudf/pull/14026)) [@ttnghia](https://github.com/ttnghia)
- Preserve types of scalar being returned when possible in `quantile` ([#14014](https://github.com/rapidsai/cudf/pull/14014)) [@galipremsagar](https://github.com/galipremsagar)
- Fix return type of `MultiIndex.difference` ([#14009](https://github.com/rapidsai/cudf/pull/14009)) [@galipremsagar](https://github.com/galipremsagar)
- Raise an error when timezone subtypes are encountered in `pd.IntervalDtype` ([#14006](https://github.com/rapidsai/cudf/pull/14006)) [@galipremsagar](https://github.com/galipremsagar)
- Fix map column can not be non-nullable for java ([#14003](https://github.com/rapidsai/cudf/pull/14003)) [@res-life](https://github.com/res-life)
- Fix `name` selection in `Index.difference` and `Index.intersection` ([#13986](https://github.com/rapidsai/cudf/pull/13986)) [@galipremsagar](https://github.com/galipremsagar)
- Restore column type metadata with `dropna` to fix `factorize` API ([#13980](https://github.com/rapidsai/cudf/pull/13980)) [@galipremsagar](https://github.com/galipremsagar)
- Use thread_index_type to avoid out of bounds accesses in conditional joins ([#13971](https://github.com/rapidsai/cudf/pull/13971)) [@vyasr](https://github.com/vyasr)
- Fix `MultiIndex.to_numpy` to return numpy array with tuples ([#13966](https://github.com/rapidsai/cudf/pull/13966)) [@galipremsagar](https://github.com/galipremsagar)
- Use cudf::thread_index_type in get_json_object and tdigest kernels ([#13962](https://github.com/rapidsai/cudf/pull/13962)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix an issue with `IntervalIndex.repr` when null values are present ([#13958](https://github.com/rapidsai/cudf/pull/13958)) [@galipremsagar](https://github.com/galipremsagar)
- Fix type metadata issue preservation with `Column.unique` ([#13957](https://github.com/rapidsai/cudf/pull/13957)) [@galipremsagar](https://github.com/galipremsagar)
- Handle `Interval` scalars when passed in list-like inputs to `cudf.Index` ([#13956](https://github.com/rapidsai/cudf/pull/13956)) [@galipremsagar](https://github.com/galipremsagar)
- Fix setting of categories order when `dtype` is passed to a `CategoricalColumn` ([#13955](https://github.com/rapidsai/cudf/pull/13955)) [@galipremsagar](https://github.com/galipremsagar)
- Handle `as_index` in `GroupBy.apply` ([#13951](https://github.com/rapidsai/cudf/pull/13951)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Raise error for string types in `nsmallest` and `nlargest` ([#13946](https://github.com/rapidsai/cudf/pull/13946)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `index` of `Groupby.apply` results when it is performed on empty objects ([#13944](https://github.com/rapidsai/cudf/pull/13944)) [@galipremsagar](https://github.com/galipremsagar)
- Fix integer overflow in shim `device_sum` functions ([#13943](https://github.com/rapidsai/cudf/pull/13943)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix type mismatch in groupby reduction for empty objects ([#13942](https://github.com/rapidsai/cudf/pull/13942)) [@galipremsagar](https://github.com/galipremsagar)
- Fixed processed bytes calculation in APPLY_BOOLEAN_MASK benchmark. ([#13937](https://github.com/rapidsai/cudf/pull/13937)) [@Blonck](https://github.com/Blonck)
- Fix construction of `Grouping` objects ([#13932](https://github.com/rapidsai/cudf/pull/13932)) [@galipremsagar](https://github.com/galipremsagar)
- Fix an issue with `loc` when column names is `MultiIndex` ([#13929](https://github.com/rapidsai/cudf/pull/13929)) [@galipremsagar](https://github.com/galipremsagar)
- Fix handling of typecasting in `searchsorted` ([#13925](https://github.com/rapidsai/cudf/pull/13925)) [@galipremsagar](https://github.com/galipremsagar)
- Preserve index `name` in `reindex` ([#13917](https://github.com/rapidsai/cudf/pull/13917)) [@galipremsagar](https://github.com/galipremsagar)
- Use `cudf::thread_index_type` in cuIO to prevent overflow in row indexing ([#13910](https://github.com/rapidsai/cudf/pull/13910)) [@vuule](https://github.com/vuule)
- Fix for encodings listed in the Parquet column chunk metadata ([#13907](https://github.com/rapidsai/cudf/pull/13907)) [@etseidl](https://github.com/etseidl)
- Use cudf::thread_index_type in concatenate.cu. ([#13906](https://github.com/rapidsai/cudf/pull/13906)) [@bdice](https://github.com/bdice)
- Use cudf::thread_index_type in replace.cu. ([#13905](https://github.com/rapidsai/cudf/pull/13905)) [@bdice](https://github.com/bdice)
- Add noSanitizer tag to Java reduction tests failing with sanitizer in CUDA 12 ([#13904](https://github.com/rapidsai/cudf/pull/13904)) [@jlowe](https://github.com/jlowe)
- Remove the internal use of the cudf's default stream in cuIO ([#13903](https://github.com/rapidsai/cudf/pull/13903)) [@vuule](https://github.com/vuule)
- Use cuda-nvtx-dev CUDA 12 package. ([#13901](https://github.com/rapidsai/cudf/pull/13901)) [@bdice](https://github.com/bdice)
- Use `thread_index_type` to avoid index overflow in grid-stride loops ([#13895](https://github.com/rapidsai/cudf/pull/13895)) [@PointKernel](https://github.com/PointKernel)
- Fix memory access error in cudf::shift for sliced strings ([#13894](https://github.com/rapidsai/cudf/pull/13894)) [@davidwendt](https://github.com/davidwendt)
- Raise error when trying to construct a `DataFrame` with mixed types ([#13889](https://github.com/rapidsai/cudf/pull/13889)) [@galipremsagar](https://github.com/galipremsagar)
- Return `nan` when one variable to be correlated has zero variance in JIT GroupBy Apply ([#13884](https://github.com/rapidsai/cudf/pull/13884)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Correctly detect the BOM mark in `read_csv` with compressed input ([#13881](https://github.com/rapidsai/cudf/pull/13881)) [@vuule](https://github.com/vuule)
- Check for the presence of all values in `MultiIndex.isin` ([#13879](https://github.com/rapidsai/cudf/pull/13879)) [@galipremsagar](https://github.com/galipremsagar)
- Fix nvtext::generate_character_ngrams performance regression for longer strings ([#13874](https://github.com/rapidsai/cudf/pull/13874)) [@davidwendt](https://github.com/davidwendt)
- Fix return type of `MultiIndex.levels` ([#13870](https://github.com/rapidsai/cudf/pull/13870)) [@galipremsagar](https://github.com/galipremsagar)
- Fix List's missing children metadata in JSON writer ([#13869](https://github.com/rapidsai/cudf/pull/13869)) [@karthikeyann](https://github.com/karthikeyann)
- Disable construction of Index when `freq` is set in pandas-compatibility mode ([#13857](https://github.com/rapidsai/cudf/pull/13857)) [@galipremsagar](https://github.com/galipremsagar)
- Fix an issue with fetching `NA` from a `TimedeltaColumn` ([#13853](https://github.com/rapidsai/cudf/pull/13853)) [@galipremsagar](https://github.com/galipremsagar)
- Simplify implementation of interval_range() and fix behaviour for floating `freq` ([#13844](https://github.com/rapidsai/cudf/pull/13844)) [@shwina](https://github.com/shwina)
- Fix binary operations between `Series` and `Index` ([#13842](https://github.com/rapidsai/cudf/pull/13842)) [@galipremsagar](https://github.com/galipremsagar)
- Update make_lists_column_from_scalar to use make_offsets_child_column utility ([#13841](https://github.com/rapidsai/cudf/pull/13841)) [@davidwendt](https://github.com/davidwendt)
- Fix read out of bounds in string concatenate ([#13838](https://github.com/rapidsai/cudf/pull/13838)) [@pentschev](https://github.com/pentschev)
- Raise error for more cases when `timezone-aware` data is passed to `as_column` ([#13835](https://github.com/rapidsai/cudf/pull/13835)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `any`, `all` reduction behavior for `axis=None` and warn for other reductions ([#13831](https://github.com/rapidsai/cudf/pull/13831)) [@galipremsagar](https://github.com/galipremsagar)
- Raise error when trying to construct time-zone aware timestamps ([#13830](https://github.com/rapidsai/cudf/pull/13830)) [@galipremsagar](https://github.com/galipremsagar)
- Fix cuFile I/O factories ([#13829](https://github.com/rapidsai/cudf/pull/13829)) [@vuule](https://github.com/vuule)
- DataFrame with namedtuples uses ._field as column names ([#13824](https://github.com/rapidsai/cudf/pull/13824)) [@mroeschke](https://github.com/mroeschke)
- Branch 23.10 merge 23.08 ([#13822](https://github.com/rapidsai/cudf/pull/13822)) [@vyasr](https://github.com/vyasr)
- Return a Series from JIT GroupBy apply, rather than a DataFrame ([#13820](https://github.com/rapidsai/cudf/pull/13820)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- No need to dlsym EnsureS3Finalized we can call it directly ([#13819](https://github.com/rapidsai/cudf/pull/13819)) [@robertmaynard](https://github.com/robertmaynard)
- Raise error when mixed types are being constructed ([#13816](https://github.com/rapidsai/cudf/pull/13816)) [@galipremsagar](https://github.com/galipremsagar)
- Fix unbounded sequence issue in `DataFrame` constructor ([#13811](https://github.com/rapidsai/cudf/pull/13811)) [@galipremsagar](https://github.com/galipremsagar)
- Fix Byte-Pair-Encoding usage of cuco static-map for storing merge-pairs ([#13807](https://github.com/rapidsai/cudf/pull/13807)) [@davidwendt](https://github.com/davidwendt)
- Fix for Parquet writer when requested pages per row is smaller than fragment size ([#13806](https://github.com/rapidsai/cudf/pull/13806)) [@etseidl](https://github.com/etseidl)
- Remove hangs from trying to construct un-bounded sequences ([#13799](https://github.com/rapidsai/cudf/pull/13799)) [@galipremsagar](https://github.com/galipremsagar)
- Bug/update libcudf to handle arrow12 changes ([#13794](https://github.com/rapidsai/cudf/pull/13794)) [@robertmaynard](https://github.com/robertmaynard)
- Update get_arrow to arrows 12 CMake target name of arrow::xsimd ([#13790](https://github.com/rapidsai/cudf/pull/13790)) [@robertmaynard](https://github.com/robertmaynard)
- Raise error when trying to join `datetime` and `timedelta` types with other types ([#13786](https://github.com/rapidsai/cudf/pull/13786)) [@galipremsagar](https://github.com/galipremsagar)
- Fix negative unary operation for boolean type ([#13780](https://github.com/rapidsai/cudf/pull/13780)) [@galipremsagar](https://github.com/galipremsagar)
- Fix contains(`in`) method for `Series` ([#13779](https://github.com/rapidsai/cudf/pull/13779)) [@galipremsagar](https://github.com/galipremsagar)
- Fix binary operation column ordering and missing column issues ([#13778](https://github.com/rapidsai/cudf/pull/13778)) [@galipremsagar](https://github.com/galipremsagar)
- Cast only time of day to nanos to avoid an overflow in Parquet INT96 write ([#13776](https://github.com/rapidsai/cudf/pull/13776)) [@gerashegalov](https://github.com/gerashegalov)
- Preserve names of column object in various APIs ([#13772](https://github.com/rapidsai/cudf/pull/13772)) [@galipremsagar](https://github.com/galipremsagar)
- Raise error on constructing an array from mixed type inputs ([#13768](https://github.com/rapidsai/cudf/pull/13768)) [@galipremsagar](https://github.com/galipremsagar)
- Fix construction of DataFrames from dict when columns are provided ([#13766](https://github.com/rapidsai/cudf/pull/13766)) [@wence-](https://github.com/wence-)
- Provide our own Cython declaration for make_unique ([#13746](https://github.com/rapidsai/cudf/pull/13746)) [@wence-](https://github.com/wence-)
## 📖 Documentation
- Fix typo in docstring: metadata. ([#14025](https://github.com/rapidsai/cudf/pull/14025)) [@bdice](https://github.com/bdice)
- Fix typo in parquet/page_decode.cuh ([#13849](https://github.com/rapidsai/cudf/pull/13849)) [@XinyuZeng](https://github.com/XinyuZeng)
- Simplify Python doc configuration ([#13826](https://github.com/rapidsai/cudf/pull/13826)) [@vyasr](https://github.com/vyasr)
- Update documentation to reflect recent changes in JSON reader and writer ([#13791](https://github.com/rapidsai/cudf/pull/13791)) [@vuule](https://github.com/vuule)
- Fix all warnings in Python docs ([#13789](https://github.com/rapidsai/cudf/pull/13789)) [@vyasr](https://github.com/vyasr)
## 🚀 New Features
- [Java] Add JNI bindings for `integers_to_hex` ([#14205](https://github.com/rapidsai/cudf/pull/14205)) [@razajafri](https://github.com/razajafri)
- Propagate errors from Parquet reader kernels back to host ([#14167](https://github.com/rapidsai/cudf/pull/14167)) [@vuule](https://github.com/vuule)
- JNI for `HISTOGRAM` and `MERGE_HISTOGRAM` aggregations ([#14154](https://github.com/rapidsai/cudf/pull/14154)) [@ttnghia](https://github.com/ttnghia)
- Expose streams in all public sorting APIs ([#14146](https://github.com/rapidsai/cudf/pull/14146)) [@vyasr](https://github.com/vyasr)
- Enable direct ingestion and production of Arrow scalars ([#14121](https://github.com/rapidsai/cudf/pull/14121)) [@vyasr](https://github.com/vyasr)
- Implement `GroupBy.value_counts` to match pandas API ([#14114](https://github.com/rapidsai/cudf/pull/14114)) [@stmio](https://github.com/stmio)
- Refactor parquet thrift reader ([#14097](https://github.com/rapidsai/cudf/pull/14097)) [@etseidl](https://github.com/etseidl)
- Refactor `hash_reduce_by_row` ([#14095](https://github.com/rapidsai/cudf/pull/14095)) [@ttnghia](https://github.com/ttnghia)
- Support negative preceding/following for ROW window functions ([#14093](https://github.com/rapidsai/cudf/pull/14093)) [@mythrocks](https://github.com/mythrocks)
- Support for progressive parquet chunked reading. ([#14079](https://github.com/rapidsai/cudf/pull/14079)) [@nvdbaranec](https://github.com/nvdbaranec)
- Implement `HISTOGRAM` and `MERGE_HISTOGRAM` aggregations ([#14045](https://github.com/rapidsai/cudf/pull/14045)) [@ttnghia](https://github.com/ttnghia)
- Expose streams in public search APIs ([#14034](https://github.com/rapidsai/cudf/pull/14034)) [@vyasr](https://github.com/vyasr)
- Expose streams in public replace APIs ([#14010](https://github.com/rapidsai/cudf/pull/14010)) [@vyasr](https://github.com/vyasr)
- Add stream parameter to public cudf::strings::split APIs ([#13997](https://github.com/rapidsai/cudf/pull/13997)) [@davidwendt](https://github.com/davidwendt)
- Expose streams in public filling APIs ([#13990](https://github.com/rapidsai/cudf/pull/13990)) [@vyasr](https://github.com/vyasr)
- Expose streams in public concatenate APIs ([#13987](https://github.com/rapidsai/cudf/pull/13987)) [@vyasr](https://github.com/vyasr)
- Use HostMemoryAllocator in jni::allocate_host_buffer ([#13975](https://github.com/rapidsai/cudf/pull/13975)) [@gerashegalov](https://github.com/gerashegalov)
- Enable fractional null probability for hashing benchmark ([#13967](https://github.com/rapidsai/cudf/pull/13967)) [@Blonck](https://github.com/Blonck)
- Switch pylibcudf-enabled types to use enum class in Cython ([#13931](https://github.com/rapidsai/cudf/pull/13931)) [@vyasr](https://github.com/vyasr)
- Add nvtext::tokenize_with_vocabulary API ([#13930](https://github.com/rapidsai/cudf/pull/13930)) [@davidwendt](https://github.com/davidwendt)
- Rewrite `DataFrame.stack` to support multi level column names ([#13927](https://github.com/rapidsai/cudf/pull/13927)) [@isVoid](https://github.com/isVoid)
- Add HostMemoryAllocator interface ([#13924](https://github.com/rapidsai/cudf/pull/13924)) [@gerashegalov](https://github.com/gerashegalov)
- Global stream pool ([#13922](https://github.com/rapidsai/cudf/pull/13922)) [@etseidl](https://github.com/etseidl)
- Create table_input_metadata from a table_metadata ([#13920](https://github.com/rapidsai/cudf/pull/13920)) [@etseidl](https://github.com/etseidl)
- Translate column size overflow exception to JNI ([#13911](https://github.com/rapidsai/cudf/pull/13911)) [@mythrocks](https://github.com/mythrocks)
- Enable RLE boolean encoding for v2 Parquet files ([#13886](https://github.com/rapidsai/cudf/pull/13886)) [@etseidl](https://github.com/etseidl)
- Exclude some tests from running with the compute sanitizer ([#13872](https://github.com/rapidsai/cudf/pull/13872)) [@firestarman](https://github.com/firestarman)
- Expand statistics support in ORC writer ([#13848](https://github.com/rapidsai/cudf/pull/13848)) [@vuule](https://github.com/vuule)
- Register the memory mapped buffer in `datasource` to improve H2D throughput ([#13814](https://github.com/rapidsai/cudf/pull/13814)) [@vuule](https://github.com/vuule)
- Add cudf::strings::find function with target per row ([#13808](https://github.com/rapidsai/cudf/pull/13808)) [@davidwendt](https://github.com/davidwendt)
- Add minhash support for MurmurHash3_x64_128 ([#13796](https://github.com/rapidsai/cudf/pull/13796)) [@davidwendt](https://github.com/davidwendt)
- Remove unnecessary pointer copying in JIT GroupBy Apply ([#13792](https://github.com/rapidsai/cudf/pull/13792)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add 'poll' function to custreamz kafka consumer ([#13782](https://github.com/rapidsai/cudf/pull/13782)) [@jdye64](https://github.com/jdye64)
- Support `corr` in `GroupBy.apply` through the jit engine ([#13767](https://github.com/rapidsai/cudf/pull/13767)) [@shwina](https://github.com/shwina)
- Optionally write version 2 page headers in Parquet writer ([#13751](https://github.com/rapidsai/cudf/pull/13751)) [@etseidl](https://github.com/etseidl)
- Support more numeric types in `Groupby.apply` with `engine='jit'` ([#13729](https://github.com/rapidsai/cudf/pull/13729)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- [FEA] Add DELTA_BINARY_PACKED decoding support to Parquet reader ([#13637](https://github.com/rapidsai/cudf/pull/13637)) [@etseidl](https://github.com/etseidl)
- Read FIXED_LEN_BYTE_ARRAY as binary in parquet reader ([#13437](https://github.com/rapidsai/cudf/pull/13437)) [@PointKernel](https://github.com/PointKernel)
## 🛠️ Improvements
- Pin `dask` and `distributed` for `23.10` release ([#14225](https://github.com/rapidsai/cudf/pull/14225)) [@galipremsagar](https://github.com/galipremsagar)
- update rmm tag path ([#14195](https://github.com/rapidsai/cudf/pull/14195)) [@AyodeAwe](https://github.com/AyodeAwe)
- Disable `Recently Updated` Check ([#14193](https://github.com/rapidsai/cudf/pull/14193)) [@ajschmidt8](https://github.com/ajschmidt8)
- Move cpp/src/hash/hash_allocator.cuh to include/cudf/hashing/detail ([#14163](https://github.com/rapidsai/cudf/pull/14163)) [@davidwendt](https://github.com/davidwendt)
- Add Parquet reader benchmarks for row selection ([#14147](https://github.com/rapidsai/cudf/pull/14147)) [@vuule](https://github.com/vuule)
- Update image names ([#14145](https://github.com/rapidsai/cudf/pull/14145)) [@AyodeAwe](https://github.com/AyodeAwe)
- Support callables in DataFrame.assign ([#14142](https://github.com/rapidsai/cudf/pull/14142)) [@wence-](https://github.com/wence-)
- Reduce memory usage of as_categorical_column ([#14138](https://github.com/rapidsai/cudf/pull/14138)) [@wence-](https://github.com/wence-)
- Replace Python scalar conversions with libcudf ([#14124](https://github.com/rapidsai/cudf/pull/14124)) [@vyasr](https://github.com/vyasr)
- Update to clang 16.0.6. ([#14120](https://github.com/rapidsai/cudf/pull/14120)) [@bdice](https://github.com/bdice)
- Fix type of empty `Index` and raise warning in `Series` constructor ([#14116](https://github.com/rapidsai/cudf/pull/14116)) [@galipremsagar](https://github.com/galipremsagar)
- Add stream parameter to external dict APIs ([#14115](https://github.com/rapidsai/cudf/pull/14115)) [@SurajAralihalli](https://github.com/SurajAralihalli)
- Add fallback matrix for nvcomp. ([#14082](https://github.com/rapidsai/cudf/pull/14082)) [@bdice](https://github.com/bdice)
- [Java] Add recoverWithNull to JSONOptions and pass to Table.readJSON ([#14078](https://github.com/rapidsai/cudf/pull/14078)) [@andygrove](https://github.com/andygrove)
- Remove header tests ([#14072](https://github.com/rapidsai/cudf/pull/14072)) [@ajschmidt8](https://github.com/ajschmidt8)
- Refactor `contains_table` with cuco::static_set ([#14064](https://github.com/rapidsai/cudf/pull/14064)) [@PointKernel](https://github.com/PointKernel)
- Remove debug print in a Parquet test ([#14063](https://github.com/rapidsai/cudf/pull/14063)) [@vuule](https://github.com/vuule)
- Expose stream parameter in public nvtext ngram APIs ([#14061](https://github.com/rapidsai/cudf/pull/14061)) [@davidwendt](https://github.com/davidwendt)
- Expose stream parameter in public strings find APIs ([#14060](https://github.com/rapidsai/cudf/pull/14060)) [@davidwendt](https://github.com/davidwendt)
- Update doxygen to 1.9.1 ([#14059](https://github.com/rapidsai/cudf/pull/14059)) [@vyasr](https://github.com/vyasr)
- Remove the mr from the base fixture ([#14057](https://github.com/rapidsai/cudf/pull/14057)) [@vyasr](https://github.com/vyasr)
- Expose streams in public strings case APIs ([#14056](https://github.com/rapidsai/cudf/pull/14056)) [@davidwendt](https://github.com/davidwendt)
- Refactor libcudf indexalator to typed normalator ([#14043](https://github.com/rapidsai/cudf/pull/14043)) [@davidwendt](https://github.com/davidwendt)
- Use cudf::make_empty_column instead of column_view constructor ([#14030](https://github.com/rapidsai/cudf/pull/14030)) [@davidwendt](https://github.com/davidwendt)
- Remove quadratic runtime due to accessing Frame._dtypes in loop ([#14028](https://github.com/rapidsai/cudf/pull/14028)) [@wence-](https://github.com/wence-)
- Explicitly depend on zlib in conda recipes ([#14018](https://github.com/rapidsai/cudf/pull/14018)) [@wence-](https://github.com/wence-)
- Use grid_stride for stride computations. ([#13996](https://github.com/rapidsai/cudf/pull/13996)) [@bdice](https://github.com/bdice)
- Fix an issue where casting null-array to `object` dtype will result in a failure ([#13994](https://github.com/rapidsai/cudf/pull/13994)) [@galipremsagar](https://github.com/galipremsagar)
- Add tab as literal to cudf::test::to_string output ([#13993](https://github.com/rapidsai/cudf/pull/13993)) [@davidwendt](https://github.com/davidwendt)
- Enable `codes` dtype parity in pandas-compatibility mode for `factorize` API ([#13982](https://github.com/rapidsai/cudf/pull/13982)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `CategoricalIndex` ordering in `Groupby.agg` when pandas-compatibility mode is enabled ([#13978](https://github.com/rapidsai/cudf/pull/13978)) [@galipremsagar](https://github.com/galipremsagar)
- Produce a fatal error if cudf is unable to find pyarrow include directory ([#13976](https://github.com/rapidsai/cudf/pull/13976)) [@cwharris](https://github.com/cwharris)
- Use `thread_index_type` in `partitioning.cu` ([#13973](https://github.com/rapidsai/cudf/pull/13973)) [@divyegala](https://github.com/divyegala)
- Use `cudf::thread_index_type` in `merge.cu` ([#13972](https://github.com/rapidsai/cudf/pull/13972)) [@divyegala](https://github.com/divyegala)
- Use `copy-pr-bot` ([#13970](https://github.com/rapidsai/cudf/pull/13970)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use cudf::thread_index_type in strings custom kernels ([#13968](https://github.com/rapidsai/cudf/pull/13968)) [@davidwendt](https://github.com/davidwendt)
- Add `bytes_per_second` to hash_partition benchmark ([#13965](https://github.com/rapidsai/cudf/pull/13965)) [@Blonck](https://github.com/Blonck)
- Added pinned pool reservation API for java ([#13964](https://github.com/rapidsai/cudf/pull/13964)) [@revans2](https://github.com/revans2)
- Simplify wheel build scripts and allow alphas of RAPIDS dependencies ([#13963](https://github.com/rapidsai/cudf/pull/13963)) [@vyasr](https://github.com/vyasr)
- Add `bytes_per_second` to copy_if_else benchmark ([#13960](https://github.com/rapidsai/cudf/pull/13960)) [@Blonck](https://github.com/Blonck)
- Add pandas compatible output to `Series.unique` ([#13959](https://github.com/rapidsai/cudf/pull/13959)) [@galipremsagar](https://github.com/galipremsagar)
- Add `bytes_per_second` to compiled binaryop benchmark ([#13938](https://github.com/rapidsai/cudf/pull/13938)) [@Blonck](https://github.com/Blonck)
- Unpin `dask` and `distributed` for `23.10` development ([#13935](https://github.com/rapidsai/cudf/pull/13935)) [@galipremsagar](https://github.com/galipremsagar)
- Make HostColumnVector.getRefCount public ([#13934](https://github.com/rapidsai/cudf/pull/13934)) [@abellina](https://github.com/abellina)
- Use cuco::static_set in JSON tree algorithm ([#13928](https://github.com/rapidsai/cudf/pull/13928)) [@karthikeyann](https://github.com/karthikeyann)
- Add java API to get size of host memory needed to copy column view ([#13919](https://github.com/rapidsai/cudf/pull/13919)) [@revans2](https://github.com/revans2)
- Use cudf::size_type instead of int32 where appropriate in nvtext functions ([#13915](https://github.com/rapidsai/cudf/pull/13915)) [@davidwendt](https://github.com/davidwendt)
- Enable hugepage for arrow host allocations ([#13914](https://github.com/rapidsai/cudf/pull/13914)) [@madsbk](https://github.com/madsbk)
- Improve performance of nvtext::edit_distance ([#13912](https://github.com/rapidsai/cudf/pull/13912)) [@davidwendt](https://github.com/davidwendt)
- Ensure cudf internals use pylibcudf in pure Python mode ([#13909](https://github.com/rapidsai/cudf/pull/13909)) [@vyasr](https://github.com/vyasr)
- Use `empty()` instead of `size()` where possible ([#13908](https://github.com/rapidsai/cudf/pull/13908)) [@vuule](https://github.com/vuule)
- [JNI] Adds HostColumnVector.EventHandler for spillability checks ([#13898](https://github.com/rapidsai/cudf/pull/13898)) [@abellina](https://github.com/abellina)
- Return `Timestamp` & `Timedelta` for fetching scalars in `DatetimeIndex` & `TimedeltaIndex` ([#13896](https://github.com/rapidsai/cudf/pull/13896)) [@galipremsagar](https://github.com/galipremsagar)
- Allow explicit `shuffle="p2p"` within dask-cudf API ([#13893](https://github.com/rapidsai/cudf/pull/13893)) [@rjzamora](https://github.com/rjzamora)
- Disable creation of `DatetimeIndex` when `freq` is passed to `cudf.date_range` ([#13890](https://github.com/rapidsai/cudf/pull/13890)) [@galipremsagar](https://github.com/galipremsagar)
- Bring parity with pandas for `datetime` & `timedelta` comparison operations ([#13877](https://github.com/rapidsai/cudf/pull/13877)) [@galipremsagar](https://github.com/galipremsagar)
- Change `NA` to `NaT` for `datetime` and `timedelta` types ([#13868](https://github.com/rapidsai/cudf/pull/13868)) [@galipremsagar](https://github.com/galipremsagar)
- Raise error when `astype(object)` is called in pandas compatibility mode ([#13862](https://github.com/rapidsai/cudf/pull/13862)) [@galipremsagar](https://github.com/galipremsagar)
- Fixes a performance regression in FST ([#13850](https://github.com/rapidsai/cudf/pull/13850)) [@elstehle](https://github.com/elstehle)
- Set native handles to null on close in Java wrapper classes ([#13818](https://github.com/rapidsai/cudf/pull/13818)) [@jlowe](https://github.com/jlowe)
- Avoid use of CUDF_EXPECTS in libcudf unit tests outside of helper functions with return values ([#13812](https://github.com/rapidsai/cudf/pull/13812)) [@vuule](https://github.com/vuule)
- Update `lists::contains` to experimental row comparator ([#13810](https://github.com/rapidsai/cudf/pull/13810)) [@divyegala](https://github.com/divyegala)
- Reduce `lists::contains` dispatches for scalars ([#13805](https://github.com/rapidsai/cudf/pull/13805)) [@divyegala](https://github.com/divyegala)
- Long string optimization for string column parsing in JSON reader ([#13803](https://github.com/rapidsai/cudf/pull/13803)) [@karthikeyann](https://github.com/karthikeyann)
- Raise NotImplementedError for pd.SparseDtype ([#13798](https://github.com/rapidsai/cudf/pull/13798)) [@mroeschke](https://github.com/mroeschke)
- Remove the libcudf cudf::offset_type type ([#13788](https://github.com/rapidsai/cudf/pull/13788)) [@davidwendt](https://github.com/davidwendt)
- Move Spark-independent Table debug to cudf Java ([#13783](https://github.com/rapidsai/cudf/pull/13783)) [@gerashegalov](https://github.com/gerashegalov)
- Update to Cython 3.0.0 ([#13777](https://github.com/rapidsai/cudf/pull/13777)) [@vyasr](https://github.com/vyasr)
- Refactor Parquet reader handling of V2 page header info ([#13775](https://github.com/rapidsai/cudf/pull/13775)) [@etseidl](https://github.com/etseidl)
- Branch 23.10 merge 23.08 ([#13773](https://github.com/rapidsai/cudf/pull/13773)) [@vyasr](https://github.com/vyasr)
- Restructure JSON code to correctly reflect legacy/experimental status ([#13757](https://github.com/rapidsai/cudf/pull/13757)) [@vuule](https://github.com/vuule)
- Branch 23.10 merge 23.08 ([#13753](https://github.com/rapidsai/cudf/pull/13753)) [@vyasr](https://github.com/vyasr)
- Enforce deprecations in `23.10` ([#13732](https://github.com/rapidsai/cudf/pull/13732)) [@galipremsagar](https://github.com/galipremsagar)
- Upgrade to arrow 12 ([#13728](https://github.com/rapidsai/cudf/pull/13728)) [@galipremsagar](https://github.com/galipremsagar)
- Refactors JSON reader's pushdown automaton ([#13716](https://github.com/rapidsai/cudf/pull/13716)) [@elstehle](https://github.com/elstehle)
- Remove Arrow dependency from the `datasource.hpp` public header ([#13698](https://github.com/rapidsai/cudf/pull/13698)) [@vuule](https://github.com/vuule)
# cuDF 23.08.00 (9 Aug 2023)
## 🚨 Breaking Changes
- Enforce deprecations and add clarifications around existing deprecations ([#13710](https://github.com/rapidsai/cudf/pull/13710)) [@galipremsagar](https://github.com/galipremsagar)
- Separate MurmurHash32 from hash_functions.cuh ([#13681](https://github.com/rapidsai/cudf/pull/13681)) [@davidwendt](https://github.com/davidwendt)
- Avoid storing metadata in pointers in ORC and Parquet writers ([#13648](https://github.com/rapidsai/cudf/pull/13648)) [@vuule](https://github.com/vuule)
- Expose streams in all public copying APIs ([#13629](https://github.com/rapidsai/cudf/pull/13629)) [@vyasr](https://github.com/vyasr)
- Remove deprecated cudf::strings::slice_strings (by delimiter) functions ([#13628](https://github.com/rapidsai/cudf/pull/13628)) [@davidwendt](https://github.com/davidwendt)
- Remove deprecated cudf.set_allocator. ([#13591](https://github.com/rapidsai/cudf/pull/13591)) [@bdice](https://github.com/bdice)
- Change build.sh to use pip install instead of setup.py ([#13507](https://github.com/rapidsai/cudf/pull/13507)) [@vyasr](https://github.com/vyasr)
- Remove unused max_rows_tensor parameter from subword tokenizer ([#13463](https://github.com/rapidsai/cudf/pull/13463)) [@davidwendt](https://github.com/davidwendt)
- Fix decimal scale reductions in `_get_decimal_type` ([#13224](https://github.com/rapidsai/cudf/pull/13224)) [@charlesbluca](https://github.com/charlesbluca)
## 🐛 Bug Fixes
- Add CUDA version to cudf_kafka and libcudf-example build strings. ([#13769](https://github.com/rapidsai/cudf/pull/13769)) [@bdice](https://github.com/bdice)
- Fix typo in wheels-test.yaml. ([#13763](https://github.com/rapidsai/cudf/pull/13763)) [@bdice](https://github.com/bdice)
- Don't test strings shorter than the requested ngram size ([#13758](https://github.com/rapidsai/cudf/pull/13758)) [@vyasr](https://github.com/vyasr)
- Add CUDA version to custreamz build string. ([#13754](https://github.com/rapidsai/cudf/pull/13754)) [@bdice](https://github.com/bdice)
- Fix writing of ORC files with empty child string columns ([#13745](https://github.com/rapidsai/cudf/pull/13745)) [@vuule](https://github.com/vuule)
- Remove the erroneous "empty level" short-circuit from ORC reader ([#13722](https://github.com/rapidsai/cudf/pull/13722)) [@vuule](https://github.com/vuule)
- Fix character counting when writing sliced tables into ORC ([#13721](https://github.com/rapidsai/cudf/pull/13721)) [@vuule](https://github.com/vuule)
- Parquet uses row group row count if missing from header ([#13712](https://github.com/rapidsai/cudf/pull/13712)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix reading of RLE encoded boolean data from parquet files with V2 page headers ([#13707](https://github.com/rapidsai/cudf/pull/13707)) [@etseidl](https://github.com/etseidl)
- Fix a corner case of list lexicographic comparator ([#13701](https://github.com/rapidsai/cudf/pull/13701)) [@ttnghia](https://github.com/ttnghia)
- Fix combined filtering and column projection in `dask_cudf.read_parquet` ([#13697](https://github.com/rapidsai/cudf/pull/13697)) [@rjzamora](https://github.com/rjzamora)
- Revert fetch-rapids changes ([#13696](https://github.com/rapidsai/cudf/pull/13696)) [@vyasr](https://github.com/vyasr)
- Data generator - include offsets in the size estimate of list elements ([#13688](https://github.com/rapidsai/cudf/pull/13688)) [@vuule](https://github.com/vuule)
- Add `cuda-nvcc-impl` to `cudf` for `numba` CUDA 12 ([#13673](https://github.com/rapidsai/cudf/pull/13673)) [@jakirkham](https://github.com/jakirkham)
- Fix combined filtering and column projection in `read_parquet` ([#13666](https://github.com/rapidsai/cudf/pull/13666)) [@rjzamora](https://github.com/rjzamora)
- Use `thrust::identity` as hash functions for byte pair encoding ([#13665](https://github.com/rapidsai/cudf/pull/13665)) [@PointKernel](https://github.com/PointKernel)
- Fix loc-getitem ordering when index contains duplicate labels ([#13659](https://github.com/rapidsai/cudf/pull/13659)) [@wence-](https://github.com/wence-)
- [REVIEW] Introduce parity with pandas for `MultiIndex.loc` ordering & fix a bug in `Groupby` with `as_index` ([#13657](https://github.com/rapidsai/cudf/pull/13657)) [@galipremsagar](https://github.com/galipremsagar)
- Fix memcheck error found in nvtext tokenize functions ([#13649](https://github.com/rapidsai/cudf/pull/13649)) [@davidwendt](https://github.com/davidwendt)
- Fix `has_nonempty_nulls` ignoring column offset ([#13647](https://github.com/rapidsai/cudf/pull/13647)) [@ttnghia](https://github.com/ttnghia)
- [Java] Avoid double-free corruption in case of an Exception while creating a ColumnView ([#13645](https://github.com/rapidsai/cudf/pull/13645)) [@razajafri](https://github.com/razajafri)
- Fix memcheck error in ORC reader call to cudf::io::copy_uncompressed_kernel ([#13643](https://github.com/rapidsai/cudf/pull/13643)) [@davidwendt](https://github.com/davidwendt)
- Fix CUDA 12 conda environment to remove cubinlinker and ptxcompiler. ([#13636](https://github.com/rapidsai/cudf/pull/13636)) [@bdice](https://github.com/bdice)
- Fix inf/NaN comparisons for FLOAT orderby in window functions ([#13635](https://github.com/rapidsai/cudf/pull/13635)) [@mythrocks](https://github.com/mythrocks)
- Refactor `Index` search to simplify code and increase correctness ([#13625](https://github.com/rapidsai/cudf/pull/13625)) [@wence-](https://github.com/wence-)
- Fix compile warning for unused variable in split_re.cu ([#13621](https://github.com/rapidsai/cudf/pull/13621)) [@davidwendt](https://github.com/davidwendt)
- Fix tz_localize for dask_cudf Series ([#13610](https://github.com/rapidsai/cudf/pull/13610)) [@shwina](https://github.com/shwina)
- Fix issue with no decompressed data in ORC reader ([#13609](https://github.com/rapidsai/cudf/pull/13609)) [@vuule](https://github.com/vuule)
- Fix floating point window range extents. ([#13606](https://github.com/rapidsai/cudf/pull/13606)) [@mythrocks](https://github.com/mythrocks)
- Fix `localize(None)` for timezone-naive columns ([#13603](https://github.com/rapidsai/cudf/pull/13603)) [@shwina](https://github.com/shwina)
- Fixed a memory leak caused by Exception thrown while constructing a ColumnView ([#13597](https://github.com/rapidsai/cudf/pull/13597)) [@razajafri](https://github.com/razajafri)
- Handle nullptr return value from bitmask_or in distinct_count ([#13590](https://github.com/rapidsai/cudf/pull/13590)) [@wence-](https://github.com/wence-)
- Bring parity with pandas in Index.join ([#13589](https://github.com/rapidsai/cudf/pull/13589)) [@galipremsagar](https://github.com/galipremsagar)
- Fix cudf.melt when there are more than 255 columns ([#13588](https://github.com/rapidsai/cudf/pull/13588)) [@hcho3](https://github.com/hcho3)
- Fix memory issues in cuIO due to removal of memory padding ([#13586](https://github.com/rapidsai/cudf/pull/13586)) [@ttnghia](https://github.com/ttnghia)
- Fix Parquet multi-file reading ([#13584](https://github.com/rapidsai/cudf/pull/13584)) [@etseidl](https://github.com/etseidl)
- Fix memcheck error found in LISTS_TEST ([#13579](https://github.com/rapidsai/cudf/pull/13579)) [@davidwendt](https://github.com/davidwendt)
- Fix memcheck error found in STRINGS_TEST ([#13578](https://github.com/rapidsai/cudf/pull/13578)) [@davidwendt](https://github.com/davidwendt)
- Fix memcheck error found in INTEROP_TEST ([#13577](https://github.com/rapidsai/cudf/pull/13577)) [@davidwendt](https://github.com/davidwendt)
- Fix memcheck errors found in REDUCTION_TEST ([#13574](https://github.com/rapidsai/cudf/pull/13574)) [@davidwendt](https://github.com/davidwendt)
- Preemptive fix for hive-partitioning change in dask ([#13564](https://github.com/rapidsai/cudf/pull/13564)) [@rjzamora](https://github.com/rjzamora)
- Fix an issue with `dask_cudf.read_csv` when lines are needed to be skipped ([#13555](https://github.com/rapidsai/cudf/pull/13555)) [@galipremsagar](https://github.com/galipremsagar)
- Fix out-of-bounds memory write in cudf::dictionary::detail::concatenate ([#13554](https://github.com/rapidsai/cudf/pull/13554)) [@davidwendt](https://github.com/davidwendt)
- Fix the null mask size in json reader ([#13537](https://github.com/rapidsai/cudf/pull/13537)) [@karthikeyann](https://github.com/karthikeyann)
- Fix cudf::strings::strip for all-empty input column ([#13533](https://github.com/rapidsai/cudf/pull/13533)) [@davidwendt](https://github.com/davidwendt)
- Make sure to build without isolation or installing dependencies ([#13524](https://github.com/rapidsai/cudf/pull/13524)) [@vyasr](https://github.com/vyasr)
- Remove preload lib from CMake for now ([#13519](https://github.com/rapidsai/cudf/pull/13519)) [@vyasr](https://github.com/vyasr)
- Fix missing separator after null values in JSON writer ([#13503](https://github.com/rapidsai/cudf/pull/13503)) [@karthikeyann](https://github.com/karthikeyann)
- Ensure `single_lane_block_sum_reduce` is safe to call in a loop ([#13488](https://github.com/rapidsai/cudf/pull/13488)) [@wence-](https://github.com/wence-)
- Update all versions in pyproject.toml files. ([#13486](https://github.com/rapidsai/cudf/pull/13486)) [@bdice](https://github.com/bdice)
- Remove applying nvbench that doesn't exist in 23.08 ([#13484](https://github.com/rapidsai/cudf/pull/13484)) [@robertmaynard](https://github.com/robertmaynard)
- Fix chunked Parquet reader benchmark ([#13482](https://github.com/rapidsai/cudf/pull/13482)) [@vuule](https://github.com/vuule)
- Update JNI JSON reader column compatibility for Spark ([#13477](https://github.com/rapidsai/cudf/pull/13477)) [@revans2](https://github.com/revans2)
- Fix unsanitized output of scan with strings ([#13455](https://github.com/rapidsai/cudf/pull/13455)) [@davidwendt](https://github.com/davidwendt)
- Reject functions without bytecode from `_can_be_jitted` in GroupBy Apply ([#13429](https://github.com/rapidsai/cudf/pull/13429)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix decimal scale reductions in `_get_decimal_type` ([#13224](https://github.com/rapidsai/cudf/pull/13224)) [@charlesbluca](https://github.com/charlesbluca)
## 📖 Documentation
- Fix doxygen groups for io data sources and sinks ([#13718](https://github.com/rapidsai/cudf/pull/13718)) [@davidwendt](https://github.com/davidwendt)
- Add pandas compatibility note to DataFrame.query docstring ([#13693](https://github.com/rapidsai/cudf/pull/13693)) [@beckernick](https://github.com/beckernick)
- Add pylibcudf to developer guide ([#13639](https://github.com/rapidsai/cudf/pull/13639)) [@vyasr](https://github.com/vyasr)
- Fix repeated words in doxygen text ([#13598](https://github.com/rapidsai/cudf/pull/13598)) [@karthikeyann](https://github.com/karthikeyann)
- Update docs for top-level API. ([#13592](https://github.com/rapidsai/cudf/pull/13592)) [@bdice](https://github.com/bdice)
- Fix the doxygen text for cudf::concatenate and other places ([#13561](https://github.com/rapidsai/cudf/pull/13561)) [@davidwendt](https://github.com/davidwendt)
- Document stream validation approach used in testing ([#13556](https://github.com/rapidsai/cudf/pull/13556)) [@vyasr](https://github.com/vyasr)
- Cleanup doc repetitions in libcudf ([#13470](https://github.com/rapidsai/cudf/pull/13470)) [@karthikeyann](https://github.com/karthikeyann)
## 🚀 New Features
- Support `min` and `max` aggregations for list type in groupby and reduction ([#13676](https://github.com/rapidsai/cudf/pull/13676)) [@ttnghia](https://github.com/ttnghia)
- Add nvtext::jaccard_index API for strings columns ([#13669](https://github.com/rapidsai/cudf/pull/13669)) [@davidwendt](https://github.com/davidwendt)
- Add read_parquet_metadata libcudf API ([#13663](https://github.com/rapidsai/cudf/pull/13663)) [@karthikeyann](https://github.com/karthikeyann)
- Expose streams in all public copying APIs ([#13629](https://github.com/rapidsai/cudf/pull/13629)) [@vyasr](https://github.com/vyasr)
- Add XXHash_64 hash function to cudf ([#13612](https://github.com/rapidsai/cudf/pull/13612)) [@davidwendt](https://github.com/davidwendt)
- Java support: Floating point order-by columns for RANGE window functions ([#13595](https://github.com/rapidsai/cudf/pull/13595)) [@mythrocks](https://github.com/mythrocks)
- Use `cuco::static_map` to build string dictionaries in ORC writer ([#13580](https://github.com/rapidsai/cudf/pull/13580)) [@vuule](https://github.com/vuule)
- Add pylibcudf subpackage with gather implementation ([#13562](https://github.com/rapidsai/cudf/pull/13562)) [@vyasr](https://github.com/vyasr)
- Add JNI for `lists::concatenate_list_elements` ([#13547](https://github.com/rapidsai/cudf/pull/13547)) [@ttnghia](https://github.com/ttnghia)
- Enable nested types for `lists::concatenate_list_elements` ([#13545](https://github.com/rapidsai/cudf/pull/13545)) [@ttnghia](https://github.com/ttnghia)
- Add unicode encoding for string columns in JSON writer ([#13539](https://github.com/rapidsai/cudf/pull/13539)) [@karthikeyann](https://github.com/karthikeyann)
- Remove numba kernels from `find_index_of_val` ([#13517](https://github.com/rapidsai/cudf/pull/13517)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Floating point order-by columns for RANGE window functions ([#13512](https://github.com/rapidsai/cudf/pull/13512)) [@mythrocks](https://github.com/mythrocks)
- Parse column chunk metadata statistics in parquet reader ([#13472](https://github.com/rapidsai/cudf/pull/13472)) [@karthikeyann](https://github.com/karthikeyann)
- Add `abs` function to apply ([#13408](https://github.com/rapidsai/cudf/pull/13408)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- [FEA] AST filtering in parquet reader ([#13348](https://github.com/rapidsai/cudf/pull/13348)) [@karthikeyann](https://github.com/karthikeyann)
- [FEA] Adds option to recover from invalid JSON lines in JSON tokenizer ([#13344](https://github.com/rapidsai/cudf/pull/13344)) [@elstehle](https://github.com/elstehle)
- Ensure cccl packages don't clash with upstream version ([#13235](https://github.com/rapidsai/cudf/pull/13235)) [@robertmaynard](https://github.com/robertmaynard)
- Update `struct_minmax_util` to experimental row comparator ([#13069](https://github.com/rapidsai/cudf/pull/13069)) [@divyegala](https://github.com/divyegala)
- Add stream parameter to hashing APIs ([#12090](https://github.com/rapidsai/cudf/pull/12090)) [@vyasr](https://github.com/vyasr)
## 🛠️ Improvements
- Pin `dask` and `distributed` for `23.08` release ([#13802](https://github.com/rapidsai/cudf/pull/13802)) [@galipremsagar](https://github.com/galipremsagar)
- Relax protobuf pinnings. ([#13770](https://github.com/rapidsai/cudf/pull/13770)) [@bdice](https://github.com/bdice)
- Switch fully unbounded window functions to use aggregations ([#13727](https://github.com/rapidsai/cudf/pull/13727)) [@mythrocks](https://github.com/mythrocks)
- Switch to new wheel building pipeline ([#13723](https://github.com/rapidsai/cudf/pull/13723)) [@vyasr](https://github.com/vyasr)
- Revert CUDA 12.0 CI workflows to branch-23.08. ([#13719](https://github.com/rapidsai/cudf/pull/13719)) [@bdice](https://github.com/bdice)
- Adding `identify` minimum version requirement ([#13713](https://github.com/rapidsai/cudf/pull/13713)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Enforce deprecations and add clarifications around existing deprecations ([#13710](https://github.com/rapidsai/cudf/pull/13710)) [@galipremsagar](https://github.com/galipremsagar)
- Optimize ORC reader performance for list data ([#13708](https://github.com/rapidsai/cudf/pull/13708)) [@vyasr](https://github.com/vyasr)
- Fix limit overflow message in a docstring ([#13703](https://github.com/rapidsai/cudf/pull/13703)) [@ahmet-uyar](https://github.com/ahmet-uyar)
- Alleviates JSON parser's need for multi-file sources to end with a newline ([#13702](https://github.com/rapidsai/cudf/pull/13702)) [@elstehle](https://github.com/elstehle)
- Update cython-lint and replace flake8 with ruff ([#13699](https://github.com/rapidsai/cudf/pull/13699)) [@vyasr](https://github.com/vyasr)
- Add `__dask_tokenize__` definitions to cudf classes ([#13695](https://github.com/rapidsai/cudf/pull/13695)) [@rjzamora](https://github.com/rjzamora)
- Convert libcudf hashing benchmarks to nvbench ([#13694](https://github.com/rapidsai/cudf/pull/13694)) [@davidwendt](https://github.com/davidwendt)
- Separate MurmurHash32 from hash_functions.cuh ([#13681](https://github.com/rapidsai/cudf/pull/13681)) [@davidwendt](https://github.com/davidwendt)
- Improve performance of cudf::strings::split on whitespace ([#13680](https://github.com/rapidsai/cudf/pull/13680)) [@davidwendt](https://github.com/davidwendt)
- Allow ORC and Parquet writers to write nullable columns without nulls as non-nullable ([#13675](https://github.com/rapidsai/cudf/pull/13675)) [@vuule](https://github.com/vuule)
- Raise a NotImplementedError in to_datetime when utc is passed ([#13670](https://github.com/rapidsai/cudf/pull/13670)) [@shwina](https://github.com/shwina)
- Add rmm_mode parameter to nvbench base fixture ([#13668](https://github.com/rapidsai/cudf/pull/13668)) [@davidwendt](https://github.com/davidwendt)
- Fix multiindex loc ordering in pandas-compat mode ([#13660](https://github.com/rapidsai/cudf/pull/13660)) [@wence-](https://github.com/wence-)
- Add nvtext hash_character_ngrams function ([#13654](https://github.com/rapidsai/cudf/pull/13654)) [@davidwendt](https://github.com/davidwendt)
- Avoid storing metadata in pointers in ORC and Parquet writers ([#13648](https://github.com/rapidsai/cudf/pull/13648)) [@vuule](https://github.com/vuule)
- Acquire spill lock in to/from_arrow ([#13646](https://github.com/rapidsai/cudf/pull/13646)) [@shwina](https://github.com/shwina)
- Expose stable versions of libcudf sort routines ([#13634](https://github.com/rapidsai/cudf/pull/13634)) [@wence-](https://github.com/wence-)
- Separate out hash_test.cpp source for each hash API ([#13633](https://github.com/rapidsai/cudf/pull/13633)) [@davidwendt](https://github.com/davidwendt)
- Remove deprecated cudf::strings::slice_strings (by delimiter) functions ([#13628](https://github.com/rapidsai/cudf/pull/13628)) [@davidwendt](https://github.com/davidwendt)
- Create separate libcudf hash APIs for each supported hash function ([#13626](https://github.com/rapidsai/cudf/pull/13626)) [@davidwendt](https://github.com/davidwendt)
- Add convert_dtypes API ([#13623](https://github.com/rapidsai/cudf/pull/13623)) [@shwina](https://github.com/shwina)
- Clean up cupy in dependencies.yaml. ([#13617](https://github.com/rapidsai/cudf/pull/13617)) [@bdice](https://github.com/bdice)
- Use cuda-version to constrain cudatoolkit. ([#13615](https://github.com/rapidsai/cudf/pull/13615)) [@bdice](https://github.com/bdice)
- Add murmurhash3_x64_128 function to libcudf ([#13604](https://github.com/rapidsai/cudf/pull/13604)) [@davidwendt](https://github.com/davidwendt)
- Performance improvement for cudf::strings::like ([#13594](https://github.com/rapidsai/cudf/pull/13594)) [@davidwendt](https://github.com/davidwendt)
- Remove deprecated cudf.set_allocator. ([#13591](https://github.com/rapidsai/cudf/pull/13591)) [@bdice](https://github.com/bdice)
- Clean up cudf device atomic with `cuda::atomic_ref` ([#13583](https://github.com/rapidsai/cudf/pull/13583)) [@PointKernel](https://github.com/PointKernel)
- Add java bindings for distinct count ([#13573](https://github.com/rapidsai/cudf/pull/13573)) [@revans2](https://github.com/revans2)
- Use nvcomp conda package. ([#13566](https://github.com/rapidsai/cudf/pull/13566)) [@bdice](https://github.com/bdice)
- Add exception to string_scalar if input string exceeds size_type ([#13560](https://github.com/rapidsai/cudf/pull/13560)) [@davidwendt](https://github.com/davidwendt)
- Add dispatch for `cudf.DataFrame` to/from `pyarrow.Table` conversion ([#13558](https://github.com/rapidsai/cudf/pull/13558)) [@rjzamora](https://github.com/rjzamora)
- Get rid of `cuco::pair_type` aliases ([#13553](https://github.com/rapidsai/cudf/pull/13553)) [@PointKernel](https://github.com/PointKernel)
- Introduce parity with pandas when `sort=False` in `Groupby` ([#13551](https://github.com/rapidsai/cudf/pull/13551)) [@galipremsagar](https://github.com/galipremsagar)
- Update CMake in docker to 3.26.4 ([#13550](https://github.com/rapidsai/cudf/pull/13550)) [@NvTimLiu](https://github.com/NvTimLiu)
- Clarify source of error message in stream testing. ([#13541](https://github.com/rapidsai/cudf/pull/13541)) [@bdice](https://github.com/bdice)
- Deprecate `strings_to_categorical` in `cudf.read_parquet` ([#13540](https://github.com/rapidsai/cudf/pull/13540)) [@galipremsagar](https://github.com/galipremsagar)
- Update to CMake 3.26.4 ([#13538](https://github.com/rapidsai/cudf/pull/13538)) [@vyasr](https://github.com/vyasr)
- S3 folder naming fix ([#13536](https://github.com/rapidsai/cudf/pull/13536)) [@AyodeAwe](https://github.com/AyodeAwe)
- Implement iloc-getitem using parse-don't-validate approach ([#13534](https://github.com/rapidsai/cudf/pull/13534)) [@wence-](https://github.com/wence-)
- Make synchronization explicit in the names of `hostdevice_*` copying APIs ([#13530](https://github.com/rapidsai/cudf/pull/13530)) [@ttnghia](https://github.com/ttnghia)
- Add benchmark (Google Benchmark) dependency to conda packages. ([#13528](https://github.com/rapidsai/cudf/pull/13528)) [@bdice](https://github.com/bdice)
- Add libcufile to dependencies.yaml. ([#13523](https://github.com/rapidsai/cudf/pull/13523)) [@bdice](https://github.com/bdice)
- Fix some memoization logic in groupby/sort/sort_helper.cu ([#13521](https://github.com/rapidsai/cudf/pull/13521)) [@davidwendt](https://github.com/davidwendt)
- Use sizes_to_offsets_iterator in cudf::gather for strings ([#13520](https://github.com/rapidsai/cudf/pull/13520)) [@davidwendt](https://github.com/davidwendt)
- Use rapids-upload-docs script ([#13518](https://github.com/rapidsai/cudf/pull/13518)) [@AyodeAwe](https://github.com/AyodeAwe)
- Support UTF-8 BOM in CSV reader ([#13516](https://github.com/rapidsai/cudf/pull/13516)) [@davidwendt](https://github.com/davidwendt)
- Move stream-related test configuration to CMake ([#13513](https://github.com/rapidsai/cudf/pull/13513)) [@vyasr](https://github.com/vyasr)
- Implement `cudf.option_context` ([#13511](https://github.com/rapidsai/cudf/pull/13511)) [@galipremsagar](https://github.com/galipremsagar)
- Unpin `dask` and `distributed` for development ([#13508](https://github.com/rapidsai/cudf/pull/13508)) [@galipremsagar](https://github.com/galipremsagar)
- Change build.sh to use pip install instead of setup.py ([#13507](https://github.com/rapidsai/cudf/pull/13507)) [@vyasr](https://github.com/vyasr)
- Use test default stream ([#13506](https://github.com/rapidsai/cudf/pull/13506)) [@vyasr](https://github.com/vyasr)
- Remove documentation build scripts for Jenkins ([#13495](https://github.com/rapidsai/cudf/pull/13495)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use east const in include files ([#13494](https://github.com/rapidsai/cudf/pull/13494)) [@karthikeyann](https://github.com/karthikeyann)
- Use east const in src files ([#13493](https://github.com/rapidsai/cudf/pull/13493)) [@karthikeyann](https://github.com/karthikeyann)
- Use east const in tests files ([#13492](https://github.com/rapidsai/cudf/pull/13492)) [@karthikeyann](https://github.com/karthikeyann)
- Use east const in benchmarks files ([#13491](https://github.com/rapidsai/cudf/pull/13491)) [@karthikeyann](https://github.com/karthikeyann)
- Performance improvement for nvtext tokenize/token functions ([#13480](https://github.com/rapidsai/cudf/pull/13480)) [@davidwendt](https://github.com/davidwendt)
- Add pd.Float*Dtype to Avro and ORC mappings ([#13475](https://github.com/rapidsai/cudf/pull/13475)) [@mroeschke](https://github.com/mroeschke)
- Use pandas public APIs where available ([#13467](https://github.com/rapidsai/cudf/pull/13467)) [@mroeschke](https://github.com/mroeschke)
- Allow pd.ArrowDtype in cudf.from_pandas ([#13465](https://github.com/rapidsai/cudf/pull/13465)) [@mroeschke](https://github.com/mroeschke)
- Rework libcudf regex benchmarks with nvbench ([#13464](https://github.com/rapidsai/cudf/pull/13464)) [@davidwendt](https://github.com/davidwendt)
- Remove unused max_rows_tensor parameter from subword tokenizer ([#13463](https://github.com/rapidsai/cudf/pull/13463)) [@davidwendt](https://github.com/davidwendt)
- Separate io-text and nvtext pytests into different files ([#13435](https://github.com/rapidsai/cudf/pull/13435)) [@davidwendt](https://github.com/davidwendt)
- Add a move_to function to cudf::string_view::const_iterator ([#13428](https://github.com/rapidsai/cudf/pull/13428)) [@davidwendt](https://github.com/davidwendt)
- Allow newer scikit-build ([#13424](https://github.com/rapidsai/cudf/pull/13424)) [@vyasr](https://github.com/vyasr)
- Refactor sort_by_values to sort_values, drop indices from return values. ([#13419](https://github.com/rapidsai/cudf/pull/13419)) [@bdice](https://github.com/bdice)
- Inline Cython exception handler ([#13411](https://github.com/rapidsai/cudf/pull/13411)) [@vyasr](https://github.com/vyasr)
- Init JNI version 23.08.0-SNAPSHOT ([#13401](https://github.com/rapidsai/cudf/pull/13401)) [@pxLi](https://github.com/pxLi)
- Refactor ORC reader ([#13396](https://github.com/rapidsai/cudf/pull/13396)) [@ttnghia](https://github.com/ttnghia)
- JNI: Remove cleaned objects in memory cleaner ([#13378](https://github.com/rapidsai/cudf/pull/13378)) [@res-life](https://github.com/res-life)
- Add tests of currently unsupported indexing ([#13338](https://github.com/rapidsai/cudf/pull/13338)) [@wence-](https://github.com/wence-)
- Performance improvement for some libcudf regex functions for long strings ([#13322](https://github.com/rapidsai/cudf/pull/13322)) [@davidwendt](https://github.com/davidwendt)
- Exposure Tracked Buffer (first step towards unifying copy-on-write and spilling) ([#13307](https://github.com/rapidsai/cudf/pull/13307)) [@madsbk](https://github.com/madsbk)
- Write string data directly to column_buffer in Parquet reader ([#13302](https://github.com/rapidsai/cudf/pull/13302)) [@etseidl](https://github.com/etseidl)
- Add stacktrace into cudf exception types ([#13298](https://github.com/rapidsai/cudf/pull/13298)) [@ttnghia](https://github.com/ttnghia)
- cuDF: Build CUDA 12 packages ([#12922](https://github.com/rapidsai/cudf/pull/12922)) [@bdice](https://github.com/bdice)

# cuDF 23.06.00 (7 Jun 2023)

## 🚨 Breaking Changes

- Fix batch processing for parquet writer ([#13438](https://github.com/rapidsai/cudf/pull/13438)) [@ttnghia](https://github.com/ttnghia)
- Use <NA> instead of null to match pandas. ([#13415](https://github.com/rapidsai/cudf/pull/13415)) [@bdice](https://github.com/bdice)
- Remove UNKNOWN_NULL_COUNT ([#13372](https://github.com/rapidsai/cudf/pull/13372)) [@vyasr](https://github.com/vyasr)
- Remove default UNKNOWN_NULL_COUNT from cudf::column member functions ([#13341](https://github.com/rapidsai/cudf/pull/13341)) [@davidwendt](https://github.com/davidwendt)
- Use std::overflow_error when output would exceed column size limit ([#13323](https://github.com/rapidsai/cudf/pull/13323)) [@davidwendt](https://github.com/davidwendt)
- Remove null mask and null count from column_view constructors ([#13311](https://github.com/rapidsai/cudf/pull/13311)) [@vyasr](https://github.com/vyasr)
- Change default value of the `observed=` argument in groupby to `True` to reflect the actual behaviour ([#13296](https://github.com/rapidsai/cudf/pull/13296)) [@shwina](https://github.com/shwina)
- Throw error if UNINITIALIZED is passed to cudf::state_null_count ([#13292](https://github.com/rapidsai/cudf/pull/13292)) [@davidwendt](https://github.com/davidwendt)
- Remove default null-count parameter from cudf::make_strings_column factory ([#13227](https://github.com/rapidsai/cudf/pull/13227)) [@davidwendt](https://github.com/davidwendt)
- Remove UNKNOWN_NULL_COUNT where it can be easily computed ([#13205](https://github.com/rapidsai/cudf/pull/13205)) [@vyasr](https://github.com/vyasr)
- Update minimum Python version to Python 3.9 ([#13196](https://github.com/rapidsai/cudf/pull/13196)) [@shwina](https://github.com/shwina)
- Refactor contiguous_split API into contiguous_split.hpp ([#13186](https://github.com/rapidsai/cudf/pull/13186)) [@abellina](https://github.com/abellina)
- Cleanup Parquet chunked writer ([#13094](https://github.com/rapidsai/cudf/pull/13094)) [@ttnghia](https://github.com/ttnghia)
- Cleanup ORC chunked writer ([#13091](https://github.com/rapidsai/cudf/pull/13091)) [@ttnghia](https://github.com/ttnghia)
- Raise `NotImplementedError` when attempting to construct cuDF objects from timezone-aware datetimes ([#13086](https://github.com/rapidsai/cudf/pull/13086)) [@shwina](https://github.com/shwina)
- Remove deprecated regex functions from libcudf ([#13067](https://github.com/rapidsai/cudf/pull/13067)) [@davidwendt](https://github.com/davidwendt)
- [REVIEW] Upgrade to `arrow-11` ([#12757](https://github.com/rapidsai/cudf/pull/12757)) [@galipremsagar](https://github.com/galipremsagar)
- Implement Python drop_duplicates with cudf::stable_distinct. ([#11656](https://github.com/rapidsai/cudf/pull/11656)) [@brandon-b-miller](https://github.com/brandon-b-miller)

## 🐛 Bug Fixes

- Fix valid count computation in offset_bitmask_binop kernel ([#13489](https://github.com/rapidsai/cudf/pull/13489)) [@davidwendt](https://github.com/davidwendt)
- Fix writing of ORC files with empty rowgroups ([#13466](https://github.com/rapidsai/cudf/pull/13466)) [@vuule](https://github.com/vuule)
- Fix cudf::repeat logic when count is zero ([#13459](https://github.com/rapidsai/cudf/pull/13459)) [@davidwendt](https://github.com/davidwendt)
- Fix batch processing for parquet writer ([#13438](https://github.com/rapidsai/cudf/pull/13438)) [@ttnghia](https://github.com/ttnghia)
- Fix invalid use of std::exclusive_scan in Parquet writer ([#13434](https://github.com/rapidsai/cudf/pull/13434)) [@etseidl](https://github.com/etseidl)
- Patch numba if it is imported first to ensure minor version compatibility works. ([#13433](https://github.com/rapidsai/cudf/pull/13433)) [@bdice](https://github.com/bdice)
- Fix cudf::strings::replace_with_backrefs hang on empty match result ([#13418](https://github.com/rapidsai/cudf/pull/13418)) [@davidwendt](https://github.com/davidwendt)
- Use <NA> instead of null to match pandas. ([#13415](https://github.com/rapidsai/cudf/pull/13415)) [@bdice](https://github.com/bdice)
- Fix tokenize with non-space delimiter ([#13403](https://github.com/rapidsai/cudf/pull/13403)) [@shwina](https://github.com/shwina)
- Fix groupby head/tail for empty dataframe ([#13398](https://github.com/rapidsai/cudf/pull/13398)) [@shwina](https://github.com/shwina)
- Default to closed="right" in `IntervalIndex` constructor ([#13394](https://github.com/rapidsai/cudf/pull/13394)) [@shwina](https://github.com/shwina)
- Correctly reorder and reindex scan groupbys with null keys ([#13389](https://github.com/rapidsai/cudf/pull/13389)) [@wence-](https://github.com/wence-)
- Fix unused argument errors in nvcc 11.5 ([#13387](https://github.com/rapidsai/cudf/pull/13387)) [@abellina](https://github.com/abellina)
- Updates needed to work with jitify that leverages libcudacxx ([#13383](https://github.com/rapidsai/cudf/pull/13383)) [@robertmaynard](https://github.com/robertmaynard)
- Fix unused parameter warning/error in parquet/page_data.cu ([#13367](https://github.com/rapidsai/cudf/pull/13367)) [@davidwendt](https://github.com/davidwendt)
- Fix page size estimation in Parquet writer ([#13364](https://github.com/rapidsai/cudf/pull/13364)) [@etseidl](https://github.com/etseidl)
- Fix subword_tokenize error when input contains no tokens ([#13320](https://github.com/rapidsai/cudf/pull/13320)) [@davidwendt](https://github.com/davidwendt)
- Support gcc 12 as the C++ compiler ([#13316](https://github.com/rapidsai/cudf/pull/13316)) [@robertmaynard](https://github.com/robertmaynard)
- Correctly set bitmask size in `from_column_view` ([#13315](https://github.com/rapidsai/cudf/pull/13315)) [@wence-](https://github.com/wence-)
- Fix approach to detecting assignment for gte/lte operators ([#13285](https://github.com/rapidsai/cudf/pull/13285)) [@vyasr](https://github.com/vyasr)
- Fix parquet schema interpretation issue ([#13277](https://github.com/rapidsai/cudf/pull/13277)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix 64bit shift bug in avro reader ([#13276](https://github.com/rapidsai/cudf/pull/13276)) [@karthikeyann](https://github.com/karthikeyann)
- Fix unused variables/parameters in parquet/writer_impl.cu ([#13263](https://github.com/rapidsai/cudf/pull/13263)) [@davidwendt](https://github.com/davidwendt)
- Clean up buffers in case AssertionError ([#13262](https://github.com/rapidsai/cudf/pull/13262)) [@razajafri](https://github.com/razajafri)
- Allow empty input table in ast `compute_column` ([#13245](https://github.com/rapidsai/cudf/pull/13245)) [@wence-](https://github.com/wence-)
- Fix structs_column_wrapper constructors to copy input column wrappers ([#13243](https://github.com/rapidsai/cudf/pull/13243)) [@davidwendt](https://github.com/davidwendt)
- Fix the row index stream order in ORC reader ([#13242](https://github.com/rapidsai/cudf/pull/13242)) [@vuule](https://github.com/vuule)
- Make `is_decompression_disabled` and `is_compression_disabled` thread-safe ([#13240](https://github.com/rapidsai/cudf/pull/13240)) [@vuule](https://github.com/vuule)
- Add [[maybe_unused]] to nvbench environment. ([#13219](https://github.com/rapidsai/cudf/pull/13219)) [@bdice](https://github.com/bdice)
- Fix race in ORC string dictionary creation ([#13214](https://github.com/rapidsai/cudf/pull/13214)) [@revans2](https://github.com/revans2)
- Add scalar argtypes to udf cache keys ([#13194](https://github.com/rapidsai/cudf/pull/13194)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix unused parameter warning/error in grouped_rolling.cu ([#13192](https://github.com/rapidsai/cudf/pull/13192)) [@davidwendt](https://github.com/davidwendt)
- Avoid skbuild 0.17.2 which affected the cmake -DPython_LIBRARY string ([#13188](https://github.com/rapidsai/cudf/pull/13188)) [@sevagh](https://github.com/sevagh)
- Fix `hostdevice_vector::subspan` ([#13187](https://github.com/rapidsai/cudf/pull/13187)) [@ttnghia](https://github.com/ttnghia)
- Use custom nvbench entry point to ensure `cudf::nvbench_base_fixture` usage ([#13183](https://github.com/rapidsai/cudf/pull/13183)) [@robertmaynard](https://github.com/robertmaynard)
- Fix slice_strings to return empty strings for stop < start indices ([#13178](https://github.com/rapidsai/cudf/pull/13178)) [@davidwendt](https://github.com/davidwendt)
- Allow compilation with any GTest version 1.11+ ([#13153](https://github.com/rapidsai/cudf/pull/13153)) [@robertmaynard](https://github.com/robertmaynard)
- Fix a few clang-format style check errors ([#13146](https://github.com/rapidsai/cudf/pull/13146)) [@davidwendt](https://github.com/davidwendt)
- [REVIEW] Fix `Series` and `DataFrame` constructors to validate index lengths ([#13122](https://github.com/rapidsai/cudf/pull/13122)) [@galipremsagar](https://github.com/galipremsagar)
- Fix hash join when the input tables have nulls on only one side ([#13120](https://github.com/rapidsai/cudf/pull/13120)) [@ttnghia](https://github.com/ttnghia)
- Fix GPU_ARCHS setting in Java CMake build and CMAKE_CUDA_ARCHITECTURES in Python package build. ([#13117](https://github.com/rapidsai/cudf/pull/13117)) [@davidwendt](https://github.com/davidwendt)
- Adds checks to make sure json reader won't overflow ([#13115](https://github.com/rapidsai/cudf/pull/13115)) [@elstehle](https://github.com/elstehle)
- Fix `null_count` of columns returned by `chunked_parquet_reader` ([#13111](https://github.com/rapidsai/cudf/pull/13111)) [@vuule](https://github.com/vuule)
- Fixes sliced list and struct column bug in JSON chunked writer ([#13108](https://github.com/rapidsai/cudf/pull/13108)) [@karthikeyann](https://github.com/karthikeyann)
- [REVIEW] Fix missing confluent kafka version ([#13101](https://github.com/rapidsai/cudf/pull/13101)) [@galipremsagar](https://github.com/galipremsagar)
- Use make_empty_lists_column instead of make_empty_column(type_id::LIST) ([#13099](https://github.com/rapidsai/cudf/pull/13099)) [@davidwendt](https://github.com/davidwendt)
- Raise `NotImplementedError` when attempting to construct cuDF objects from timezone-aware datetimes ([#13086](https://github.com/rapidsai/cudf/pull/13086)) [@shwina](https://github.com/shwina)
- Fix column selection `read_parquet` benchmarks ([#13082](https://github.com/rapidsai/cudf/pull/13082)) [@vuule](https://github.com/vuule)
- Fix bugs in iterative groupby apply algorithm ([#13078](https://github.com/rapidsai/cudf/pull/13078)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add algorithm include in data_sink.hpp ([#13068](https://github.com/rapidsai/cudf/pull/13068)) [@ahendriksen](https://github.com/ahendriksen)
- Fix tests/identify_stream_usage.cpp ([#13066](https://github.com/rapidsai/cudf/pull/13066)) [@ahendriksen](https://github.com/ahendriksen)
- Prevent overflow with `skip_rows` in ORC and Parquet readers ([#13063](https://github.com/rapidsai/cudf/pull/13063)) [@vuule](https://github.com/vuule)
- Add except declaration in Cython interface for regex_program::create ([#13054](https://github.com/rapidsai/cudf/pull/13054)) [@davidwendt](https://github.com/davidwendt)
- [REVIEW] Fix branch version in CI scripts ([#13029](https://github.com/rapidsai/cudf/pull/13029)) [@galipremsagar](https://github.com/galipremsagar)
- Fix OOB memory access in CSV reader when reading without NA values ([#13011](https://github.com/rapidsai/cudf/pull/13011)) [@vuule](https://github.com/vuule)
- Fix read_avro() skip_rows and num_rows. ([#12912](https://github.com/rapidsai/cudf/pull/12912)) [@tpn](https://github.com/tpn)
- Purge nonempty nulls from byte_cast list outputs. ([#11971](https://github.com/rapidsai/cudf/pull/11971)) [@bdice](https://github.com/bdice)
- Fix consumption of CPU-backed interchange protocol dataframes ([#11392](https://github.com/rapidsai/cudf/pull/11392)) [@shwina](https://github.com/shwina)

## 🚀 New Features

- Remove numba JIT kernel usage from dataframe copy tests ([#13385](https://github.com/rapidsai/cudf/pull/13385)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add JNI for ORC/Parquet writer compression statistics ([#13376](https://github.com/rapidsai/cudf/pull/13376)) [@ttnghia](https://github.com/ttnghia)
- Use _compile_or_get in JIT groupby apply ([#13350](https://github.com/rapidsai/cudf/pull/13350)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- cuDF numba cuda 12 updates ([#13337](https://github.com/rapidsai/cudf/pull/13337)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add tz_convert method to convert timestamps between time zones ([#13328](https://github.com/rapidsai/cudf/pull/13328)) [@shwina](https://github.com/shwina)
- Optionally return compression statistics from ORC and Parquet writers ([#13294](https://github.com/rapidsai/cudf/pull/13294)) [@vuule](https://github.com/vuule)
- Support the case=False argument to str.contains ([#13290](https://github.com/rapidsai/cudf/pull/13290)) [@shwina](https://github.com/shwina)
- Add an event handler for ColumnVector.close ([#13279](https://github.com/rapidsai/cudf/pull/13279)) [@abellina](https://github.com/abellina)
- JNI api for cudf::chunked_pack ([#13278](https://github.com/rapidsai/cudf/pull/13278)) [@abellina](https://github.com/abellina)
- Implement a chunked_pack API ([#13260](https://github.com/rapidsai/cudf/pull/13260)) [@abellina](https://github.com/abellina)
- Update cudf recipes to use GTest version to >=1.13 ([#13207](https://github.com/rapidsai/cudf/pull/13207)) [@robertmaynard](https://github.com/robertmaynard)
- JNI changes for range-extents in window functions. ([#13199](https://github.com/rapidsai/cudf/pull/13199)) [@mythrocks](https://github.com/mythrocks)
- Add support for DatetimeTZDtype and tz_localize ([#13163](https://github.com/rapidsai/cudf/pull/13163)) [@shwina](https://github.com/shwina)
- Add IS_NULL operator to AST ([#13145](https://github.com/rapidsai/cudf/pull/13145)) [@karthikeyann](https://github.com/karthikeyann)
- STRING order-by column for RANGE window functions ([#13143](https://github.com/rapidsai/cudf/pull/13143)) [@mythrocks](https://github.com/mythrocks)
- Update `contains_table` to experimental row hasher and equality comparator ([#13119](https://github.com/rapidsai/cudf/pull/13119)) [@divyegala](https://github.com/divyegala)
- Automatically select `GroupBy.apply` algorithm based on if the UDF is jittable ([#13113](https://github.com/rapidsai/cudf/pull/13113)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Refactor Parquet chunked writer ([#13076](https://github.com/rapidsai/cudf/pull/13076)) [@ttnghia](https://github.com/ttnghia)
- Add Python bindings for string literal support in AST ([#13073](https://github.com/rapidsai/cudf/pull/13073)) [@karthikeyann](https://github.com/karthikeyann)
- Add Java bindings for string literal support in AST ([#13072](https://github.com/rapidsai/cudf/pull/13072)) [@karthikeyann](https://github.com/karthikeyann)
- Add string scalar support in AST ([#13061](https://github.com/rapidsai/cudf/pull/13061)) [@karthikeyann](https://github.com/karthikeyann)
- Log cuIO warnings using the libcudf logger ([#13043](https://github.com/rapidsai/cudf/pull/13043)) [@vuule](https://github.com/vuule)
- Update `mixed_join` to use experimental row hasher and comparator ([#13028](https://github.com/rapidsai/cudf/pull/13028)) [@divyegala](https://github.com/divyegala)
- Support structs of lists in row lexicographic comparator ([#13005](https://github.com/rapidsai/cudf/pull/13005)) [@ttnghia](https://github.com/ttnghia)
- Adding `hostdevice_span` that is a span createable from `hostdevice_vector` ([#12981](https://github.com/rapidsai/cudf/pull/12981)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add nvtext::minhash function ([#12961](https://github.com/rapidsai/cudf/pull/12961)) [@davidwendt](https://github.com/davidwendt)
- Support lists of structs in row lexicographic comparator ([#12953](https://github.com/rapidsai/cudf/pull/12953)) [@ttnghia](https://github.com/ttnghia)
- Update `join` to use experimental row hasher and comparator ([#12787](https://github.com/rapidsai/cudf/pull/12787)) [@divyegala](https://github.com/divyegala)
- Implement Python drop_duplicates with cudf::stable_distinct. ([#11656](https://github.com/rapidsai/cudf/pull/11656)) [@brandon-b-miller](https://github.com/brandon-b-miller)

## 🛠️ Improvements

- Drop extraneous dependencies from cudf conda recipe. ([#13406](https://github.com/rapidsai/cudf/pull/13406)) [@bdice](https://github.com/bdice)
- Handle some corner-cases in indexing with boolean masks ([#13402](https://github.com/rapidsai/cudf/pull/13402)) [@wence-](https://github.com/wence-)
- Add cudf::stable_distinct public API, tests, and benchmarks. ([#13392](https://github.com/rapidsai/cudf/pull/13392)) [@bdice](https://github.com/bdice)
- [JNI] Pass this ColumnVector to the onClosed event handler ([#13386](https://github.com/rapidsai/cudf/pull/13386)) [@abellina](https://github.com/abellina)
- Fix JNI method with mismatched parameter list ([#13384](https://github.com/rapidsai/cudf/pull/13384)) [@ttnghia](https://github.com/ttnghia)
- Split up experimental_row_operator_tests.cu to improve its compile time ([#13382](https://github.com/rapidsai/cudf/pull/13382)) [@davidwendt](https://github.com/davidwendt)
- Deprecate cudf::strings::slice_strings APIs that accept delimiters ([#13373](https://github.com/rapidsai/cudf/pull/13373)) [@davidwendt](https://github.com/davidwendt)
- Remove UNKNOWN_NULL_COUNT ([#13372](https://github.com/rapidsai/cudf/pull/13372)) [@vyasr](https://github.com/vyasr)
- Move some nvtext benchmarks to nvbench ([#13368](https://github.com/rapidsai/cudf/pull/13368)) [@davidwendt](https://github.com/davidwendt)
- Run docs nightly too ([#13366](https://github.com/rapidsai/cudf/pull/13366)) [@AyodeAwe](https://github.com/AyodeAwe)
- Add warning for default `dtype` parameter in `get_dummies` ([#13365](https://github.com/rapidsai/cudf/pull/13365)) [@galipremsagar](https://github.com/galipremsagar)
- Add log messages about kvikIO compatibility mode ([#13363](https://github.com/rapidsai/cudf/pull/13363)) [@vuule](https://github.com/vuule)
- Switch back to using primary shared-action-workflows branch ([#13362](https://github.com/rapidsai/cudf/pull/13362)) [@vyasr](https://github.com/vyasr)
- Deprecate `StringIndex` and use `Index` instead ([#13361](https://github.com/rapidsai/cudf/pull/13361)) [@galipremsagar](https://github.com/galipremsagar)
- Ensure columns have valid null counts in CUDF JNI. ([#13355](https://github.com/rapidsai/cudf/pull/13355)) [@mythrocks](https://github.com/mythrocks)
- Expunge most uses of `TypeVar(bound="Foo")` ([#13346](https://github.com/rapidsai/cudf/pull/13346)) [@wence-](https://github.com/wence-)
- Remove all references to UNKNOWN_NULL_COUNT in Python ([#13345](https://github.com/rapidsai/cudf/pull/13345)) [@vyasr](https://github.com/vyasr)
- Improve `distinct_count` with `cuco::static_set` ([#13343](https://github.com/rapidsai/cudf/pull/13343)) [@PointKernel](https://github.com/PointKernel)
- Fix `contiguous_split` performance ([#13342](https://github.com/rapidsai/cudf/pull/13342)) [@ttnghia](https://github.com/ttnghia)
- Remove default UNKNOWN_NULL_COUNT from cudf::column member functions ([#13341](https://github.com/rapidsai/cudf/pull/13341)) [@davidwendt](https://github.com/davidwendt)
- Update mypy to 1.3 ([#13340](https://github.com/rapidsai/cudf/pull/13340)) [@wence-](https://github.com/wence-)
- [Java] Purge non-empty nulls when setting validity ([#13335](https://github.com/rapidsai/cudf/pull/13335)) [@razajafri](https://github.com/razajafri)
- Add row-wise filtering step to `read_parquet` ([#13334](https://github.com/rapidsai/cudf/pull/13334)) [@rjzamora](https://github.com/rjzamora)
- Performance improvement for nvtext::minhash ([#13333](https://github.com/rapidsai/cudf/pull/13333)) [@davidwendt](https://github.com/davidwendt)
- Fix some libcudf functions to set the null count on returning columns ([#13331](https://github.com/rapidsai/cudf/pull/13331)) [@davidwendt](https://github.com/davidwendt)
- Change cudf::detail::concatenate_masks to return null-count ([#13330](https://github.com/rapidsai/cudf/pull/13330)) [@davidwendt](https://github.com/davidwendt)
- Move `meta` calculation in `dask_cudf.read_parquet` ([#13327](https://github.com/rapidsai/cudf/pull/13327)) [@rjzamora](https://github.com/rjzamora)
- Changes to support Numpy >= 1.24 ([#13325](https://github.com/rapidsai/cudf/pull/13325)) [@shwina](https://github.com/shwina)
- Use std::overflow_error when output would exceed column size limit ([#13323](https://github.com/rapidsai/cudf/pull/13323)) [@davidwendt](https://github.com/davidwendt)
- Clean up `distinct_count` benchmark ([#13321](https://github.com/rapidsai/cudf/pull/13321)) [@PointKernel](https://github.com/PointKernel)
- Fix gtest pinning to 1.13.0. ([#13319](https://github.com/rapidsai/cudf/pull/13319)) [@bdice](https://github.com/bdice)
- Remove null mask and null count from column_view constructors ([#13311](https://github.com/rapidsai/cudf/pull/13311)) [@vyasr](https://github.com/vyasr)
- Address feedback from 13289 ([#13306](https://github.com/rapidsai/cudf/pull/13306)) [@vyasr](https://github.com/vyasr)
- Change default value of the `observed=` argument in groupby to `True` to reflect the actual behaviour ([#13296](https://github.com/rapidsai/cudf/pull/13296)) [@shwina](https://github.com/shwina)
- First check for `BaseDtype` when inferring the data type of an arbitrary object ([#13295](https://github.com/rapidsai/cudf/pull/13295)) [@shwina](https://github.com/shwina)
- Throw error if UNINITIALIZED is passed to cudf::state_null_count ([#13292](https://github.com/rapidsai/cudf/pull/13292)) [@davidwendt](https://github.com/davidwendt)
- Support CUDA 12.0 for pip wheels ([#13289](https://github.com/rapidsai/cudf/pull/13289)) [@divyegala](https://github.com/divyegala)
- Refactor `transform_lists_of_structs` in `row_operators.cu` ([#13288](https://github.com/rapidsai/cudf/pull/13288)) [@ttnghia](https://github.com/ttnghia)
- Branch 23.06 merge 23.04 ([#13286](https://github.com/rapidsai/cudf/pull/13286)) [@vyasr](https://github.com/vyasr)
- Update cupy dependency ([#13284](https://github.com/rapidsai/cudf/pull/13284)) [@vyasr](https://github.com/vyasr)
- Performance improvement in cudf::strings::join_strings for long strings ([#13283](https://github.com/rapidsai/cudf/pull/13283)) [@davidwendt](https://github.com/davidwendt)
- Fix unused variables and functions ([#13275](https://github.com/rapidsai/cudf/pull/13275)) [@karthikeyann](https://github.com/karthikeyann)
- Fix integer overflow in `partition` `scatter_map` construction ([#13272](https://github.com/rapidsai/cudf/pull/13272)) [@wence-](https://github.com/wence-)
- Numba 0.57 compatibility fixes ([#13271](https://github.com/rapidsai/cudf/pull/13271)) [@gmarkall](https://github.com/gmarkall)
- Performance improvement in cudf::strings::all_characters_of_type ([#13259](https://github.com/rapidsai/cudf/pull/13259)) [@davidwendt](https://github.com/davidwendt)
- Remove default null-count parameter from some libcudf factory functions ([#13258](https://github.com/rapidsai/cudf/pull/13258)) [@davidwendt](https://github.com/davidwendt)
- Roll our own generate_string() because mimesis' has gone away ([#13257](https://github.com/rapidsai/cudf/pull/13257)) [@shwina](https://github.com/shwina)
- Build wheels using new single image workflow ([#13249](https://github.com/rapidsai/cudf/pull/13249)) [@vyasr](https://github.com/vyasr)
- Enable sccache hits from local builds ([#13248](https://github.com/rapidsai/cudf/pull/13248)) [@AyodeAwe](https://github.com/AyodeAwe)
- Revert to branch-23.06 for shared-action-workflows ([#13247](https://github.com/rapidsai/cudf/pull/13247)) [@shwina](https://github.com/shwina)
- Introduce `pandas_compatible` option in `cudf` ([#13241](https://github.com/rapidsai/cudf/pull/13241)) [@galipremsagar](https://github.com/galipremsagar)
- Add metadata_builder helper class ([#13232](https://github.com/rapidsai/cudf/pull/13232)) [@abellina](https://github.com/abellina)
- Use libkvikio conda packages in libcudf, add explicit libcufile dependency. ([#13231](https://github.com/rapidsai/cudf/pull/13231)) [@bdice](https://github.com/bdice)
- Remove default null-count parameter from cudf::make_strings_column factory ([#13227](https://github.com/rapidsai/cudf/pull/13227)) [@davidwendt](https://github.com/davidwendt)
- Performance improvement in cudf::strings::find/rfind for long strings ([#13226](https://github.com/rapidsai/cudf/pull/13226)) [@davidwendt](https://github.com/davidwendt)
- Add chunked reader benchmark ([#13223](https://github.com/rapidsai/cudf/pull/13223)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Set the null count in output columns in the CSV reader ([#13221](https://github.com/rapidsai/cudf/pull/13221)) [@vuule](https://github.com/vuule)
- Skip Non-Empty nulls tests for the nightly build just like we skip CuFileTest and CudaFatalTest ([#13213](https://github.com/rapidsai/cudf/pull/13213)) [@razajafri](https://github.com/razajafri)
- Fix string_scalar stream usage in write_json.cu ([#13212](https://github.com/rapidsai/cudf/pull/13212)) [@davidwendt](https://github.com/davidwendt)
- Use canonicalized name for dlopen'd libraries (libcufile) ([#13210](https://github.com/rapidsai/cudf/pull/13210)) [@shwina](https://github.com/shwina)
- Refactor pinned memory vector and ORC+Parquet writers ([#13206](https://github.com/rapidsai/cudf/pull/13206)) [@ttnghia](https://github.com/ttnghia)
- Remove UNKNOWN_NULL_COUNT where it can be easily computed ([#13205](https://github.com/rapidsai/cudf/pull/13205)) [@vyasr](https://github.com/vyasr)
- Optimization to decoding of parquet level streams ([#13203](https://github.com/rapidsai/cudf/pull/13203)) [@nvdbaranec](https://github.com/nvdbaranec)
- Clean up and simplify `gpuDecideCompression` ([#13202](https://github.com/rapidsai/cudf/pull/13202)) [@vuule](https://github.com/vuule)
- Use std::array for a statically sized vector in `create_serialized_trie` ([#13201](https://github.com/rapidsai/cudf/pull/13201)) [@vuule](https://github.com/vuule)
- Update minimum Python version to Python 3.9 ([#13196](https://github.com/rapidsai/cudf/pull/13196)) [@shwina](https://github.com/shwina)
- Refactor contiguous_split API into contiguous_split.hpp ([#13186](https://github.com/rapidsai/cudf/pull/13186)) [@abellina](https://github.com/abellina)
- Remove usage of rapids-get-rapids-version-from-git ([#13184](https://github.com/rapidsai/cudf/pull/13184)) [@jjacobelli](https://github.com/jjacobelli)
- Enable mixed-dtype decimal/scalar binary operations ([#13171](https://github.com/rapidsai/cudf/pull/13171)) [@shwina](https://github.com/shwina)
- Split up unique_count.cu to improve build time ([#13169](https://github.com/rapidsai/cudf/pull/13169)) [@davidwendt](https://github.com/davidwendt)
- Use nvtx3 includes in string examples. ([#13165](https://github.com/rapidsai/cudf/pull/13165)) [@bdice](https://github.com/bdice)
- Change some .cu gtest files to .cpp ([#13155](https://github.com/rapidsai/cudf/pull/13155)) [@davidwendt](https://github.com/davidwendt)
- Remove wheel pytest verbosity ([#13151](https://github.com/rapidsai/cudf/pull/13151)) [@sevagh](https://github.com/sevagh)
- Fix libcudf to always pass null-count to set_null_mask ([#13149](https://github.com/rapidsai/cudf/pull/13149)) [@davidwendt](https://github.com/davidwendt)
- Fix gtests to always pass null-count to set_null_mask calls ([#13148](https://github.com/rapidsai/cudf/pull/13148)) [@davidwendt](https://github.com/davidwendt)
- Optimize JSON writer ([#13144](https://github.com/rapidsai/cudf/pull/13144)) [@karthikeyann](https://github.com/karthikeyann)
- Performance improvement for libcudf upper/lower conversion for long strings ([#13142](https://github.com/rapidsai/cudf/pull/13142)) [@davidwendt](https://github.com/davidwendt)
- [REVIEW] Deprecate `pad` and `backfill` methods ([#13140](https://github.com/rapidsai/cudf/pull/13140)) [@galipremsagar](https://github.com/galipremsagar)
- Use CTAD instead of functions in ProtobufReader ([#13135](https://github.com/rapidsai/cudf/pull/13135)) [@vuule](https://github.com/vuule)
- Remove more instances of `UNKNOWN_NULL_COUNT` ([#13134](https://github.com/rapidsai/cudf/pull/13134)) [@vyasr](https://github.com/vyasr)
- Update clang-format to 16.0.1. ([#13133](https://github.com/rapidsai/cudf/pull/13133)) [@bdice](https://github.com/bdice)
- Add log messages about cuIO's nvCOMP and cuFile use ([#13132](https://github.com/rapidsai/cudf/pull/13132)) [@vuule](https://github.com/vuule)
- Branch 23.06 merge 23.04 ([#13131](https://github.com/rapidsai/cudf/pull/13131)) [@vyasr](https://github.com/vyasr)
- Compute null-count in cudf::detail::slice ([#13124](https://github.com/rapidsai/cudf/pull/13124)) [@davidwendt](https://github.com/davidwendt)
- Use ARC V2 self-hosted runners for GPU jobs ([#13123](https://github.com/rapidsai/cudf/pull/13123)) [@jjacobelli](https://github.com/jjacobelli)
- Set null-count in linked_column_view conversion operator ([#13121](https://github.com/rapidsai/cudf/pull/13121)) [@davidwendt](https://github.com/davidwendt)
- Adding ifdefs around nvcc-specific pragmas ([#13110](https://github.com/rapidsai/cudf/pull/13110)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add null-count parameter to json experimental parse_data utility ([#13107](https://github.com/rapidsai/cudf/pull/13107)) [@davidwendt](https://github.com/davidwendt)
- Remove uses-setup-env-vars ([#13105](https://github.com/rapidsai/cudf/pull/13105)) [@vyasr](https://github.com/vyasr)
- Explicitly compute null count in concatenate APIs ([#13104](https://github.com/rapidsai/cudf/pull/13104)) [@vyasr](https://github.com/vyasr)
- Replace unnecessary uses of `UNKNOWN_NULL_COUNT` ([#13102](https://github.com/rapidsai/cudf/pull/13102)) [@vyasr](https://github.com/vyasr)
- Performance improvement for cudf::string_view::find functions ([#13100](https://github.com/rapidsai/cudf/pull/13100)) [@davidwendt](https://github.com/davidwendt)
- Use `.element()` instead of `.data()` for window range calculations ([#13095](https://github.com/rapidsai/cudf/pull/13095)) [@mythrocks](https://github.com/mythrocks)
- Cleanup Parquet chunked writer ([#13094](https://github.com/rapidsai/cudf/pull/13094)) [@ttnghia](https://github.com/ttnghia)
- Fix unused variable error/warning in page_data.cu ([#13093](https://github.com/rapidsai/cudf/pull/13093)) [@davidwendt](https://github.com/davidwendt)
- Cleanup ORC chunked writer ([#13091](https://github.com/rapidsai/cudf/pull/13091)) [@ttnghia](https://github.com/ttnghia)
- Remove using namespace cudf; from libcudf gtests source ([#13089](https://github.com/rapidsai/cudf/pull/13089)) [@davidwendt](https://github.com/davidwendt)
- Change cudf::test::make_null_mask to also return null-count ([#13081](https://github.com/rapidsai/cudf/pull/13081)) [@davidwendt](https://github.com/davidwendt)
- Resolved automerger from `branch-23.04` to `branch-23.06` ([#13080](https://github.com/rapidsai/cudf/pull/13080)) [@galipremsagar](https://github.com/galipremsagar)
- Assert for non-empty nulls ([#13071](https://github.com/rapidsai/cudf/pull/13071)) [@razajafri](https://github.com/razajafri)
- Remove deprecated regex functions from libcudf ([#13067](https://github.com/rapidsai/cudf/pull/13067)) [@davidwendt](https://github.com/davidwendt)
- Refactor `cudf::detail::sorted_order` ([#13062](https://github.com/rapidsai/cudf/pull/13062)) [@ttnghia](https://github.com/ttnghia)
- Improve performance of slice_strings for long strings ([#13057](https://github.com/rapidsai/cudf/pull/13057)) [@davidwendt](https://github.com/davidwendt)
- Reduce shared memory usage in gpuComputePageSizes by 50% ([#13047](https://github.com/rapidsai/cudf/pull/13047)) [@nvdbaranec](https://github.com/nvdbaranec)
- [REVIEW] Add notes to performance comparisons notebook ([#13044](https://github.com/rapidsai/cudf/pull/13044)) [@galipremsagar](https://github.com/galipremsagar)
- Enable binary operations between scalars and columns of differing decimal types ([#13034](https://github.com/rapidsai/cudf/pull/13034)) [@shwina](https://github.com/shwina)
- Remove console output from some libcudf gtests ([#13027](https://github.com/rapidsai/cudf/pull/13027)) [@davidwendt](https://github.com/davidwendt)
- Remove underscore in build string. ([#13025](https://github.com/rapidsai/cudf/pull/13025)) [@bdice](https://github.com/bdice)
- Bump up JNI version 23.06.0-SNAPSHOT ([#13021](https://github.com/rapidsai/cudf/pull/13021)) [@pxLi](https://github.com/pxLi)
- Fix auto merger from `branch-23.04` to `branch-23.06` ([#13009](https://github.com/rapidsai/cudf/pull/13009)) [@galipremsagar](https://github.com/galipremsagar)
- Reduce peak memory use when writing compressed ORC files. ([#12963](https://github.com/rapidsai/cudf/pull/12963)) [@vuule](https://github.com/vuule)
- Add nvtx annotatations to groupby methods ([#12941](https://github.com/rapidsai/cudf/pull/12941)) [@wence-](https://github.com/wence-)
- Compute column sizes in Parquet preprocess with single kernel ([#12931](https://github.com/rapidsai/cudf/pull/12931)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Add Python bindings for time zone data (TZiF) reader ([#12826](https://github.com/rapidsai/cudf/pull/12826)) [@shwina](https://github.com/shwina)
- Optimize set-like operations ([#12769](https://github.com/rapidsai/cudf/pull/12769)) [@ttnghia](https://github.com/ttnghia)
- [REVIEW] Upgrade to `arrow-11` ([#12757](https://github.com/rapidsai/cudf/pull/12757)) [@galipremsagar](https://github.com/galipremsagar)
- Add empty test files for test reorganization ([#12288](https://github.com/rapidsai/cudf/pull/12288)) [@shwina](https://github.com/shwina)

# cuDF 23.04.00 (6 Apr 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#13070](https://github.com/rapidsai/cudf/pull/13070)) [@galipremsagar](https://github.com/galipremsagar)
- Declare a different name for nan_equality.UNEQUAL to prevent Cython warnings. ([#12947](https://github.com/rapidsai/cudf/pull/12947)) [@bdice](https://github.com/bdice)
- Update minimum `pandas` and `numpy` pinnings ([#12887](https://github.com/rapidsai/cudf/pull/12887)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `names` & `dtype` in `Index.copy` ([#12825](https://github.com/rapidsai/cudf/pull/12825)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `Index.is_*` methods ([#12820](https://github.com/rapidsai/cudf/pull/12820)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `datetime_is_numeric` from `describe` ([#12818](https://github.com/rapidsai/cudf/pull/12818)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `na_sentinel` in `factorize` ([#12817](https://github.com/rapidsai/cudf/pull/12817)) [@galipremsagar](https://github.com/galipremsagar)
- Make string methods return a Series with a useful Index ([#12814](https://github.com/rapidsai/cudf/pull/12814)) [@shwina](https://github.com/shwina)
- Produce useful guidance on overflow error in `to_csv` ([#12705](https://github.com/rapidsai/cudf/pull/12705)) [@wence-](https://github.com/wence-)
- Move `strings_udf` code into cuDF ([#12669](https://github.com/rapidsai/cudf/pull/12669)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Remove cudf::strings::repeat_strings_output_sizes and optional parameter from cudf::strings::repeat_strings ([#12609](https://github.com/rapidsai/cudf/pull/12609)) [@davidwendt](https://github.com/davidwendt)
- Replace message parsing with throwing more specific exceptions ([#12426](https://github.com/rapidsai/cudf/pull/12426)) [@vyasr](https://github.com/vyasr)

## 🐛 Bug Fixes

- Fix memcheck script to execute only _TEST files found in bin/gtests/libcudf ([#13006](https://github.com/rapidsai/cudf/pull/13006)) [@davidwendt](https://github.com/davidwendt)
- Fix `DataFrame` constructor to broadcast scalar inputs properly ([#12997](https://github.com/rapidsai/cudf/pull/12997)) [@galipremsagar](https://github.com/galipremsagar)
- Drop `force_nullable_schema` from chunked parquet writer ([#12996](https://github.com/rapidsai/cudf/pull/12996)) [@galipremsagar](https://github.com/galipremsagar)
- Fix gtest column utility comparator diff reporting ([#12995](https://github.com/rapidsai/cudf/pull/12995)) [@davidwendt](https://github.com/davidwendt)
- Handle index names while performing `groupby` ([#12992](https://github.com/rapidsai/cudf/pull/12992)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `__setitem__` on string columns when the scalar value ends in a null byte ([#12991](https://github.com/rapidsai/cudf/pull/12991)) [@wence-](https://github.com/wence-)
- Fix `sort_values` when column is all empty strings ([#12988](https://github.com/rapidsai/cudf/pull/12988)) [@eriknw](https://github.com/eriknw)
- Remove unused variable and fix memory issue in ORC writer ([#12984](https://github.com/rapidsai/cudf/pull/12984)) [@ttnghia](https://github.com/ttnghia)
- Pre-emptive fix for upstream `dask.dataframe.read_parquet` changes ([#12983](https://github.com/rapidsai/cudf/pull/12983)) [@rjzamora](https://github.com/rjzamora)
- Remove MANIFEST.in; use auto-generated one for sdists and package_data for wheels ([#12960](https://github.com/rapidsai/cudf/pull/12960)) [@vyasr](https://github.com/vyasr)
- Update to use rapids-export(COMPONENTS) feature. ([#12959](https://github.com/rapidsai/cudf/pull/12959)) [@robertmaynard](https://github.com/robertmaynard)
- cudftestutil supports static gtest dependencies ([#12957](https://github.com/rapidsai/cudf/pull/12957)) [@robertmaynard](https://github.com/robertmaynard)
- Include gtest in build environment. ([#12956](https://github.com/rapidsai/cudf/pull/12956)) [@vyasr](https://github.com/vyasr)
- Correctly handle scalar indices in `Index.__getitem__` ([#12955](https://github.com/rapidsai/cudf/pull/12955)) [@wence-](https://github.com/wence-)
- Avoid building cython twice ([#12945](https://github.com/rapidsai/cudf/pull/12945)) [@galipremsagar](https://github.com/galipremsagar)
- Fix set index error for Series rolling window operations ([#12942](https://github.com/rapidsai/cudf/pull/12942)) [@galipremsagar](https://github.com/galipremsagar)
- Fix calculation of null counts for Parquet statistics ([#12938](https://github.com/rapidsai/cudf/pull/12938)) [@etseidl](https://github.com/etseidl)
- Preserve integer dtype of hive-partitioned column containing nulls ([#12930](https://github.com/rapidsai/cudf/pull/12930)) [@rjzamora](https://github.com/rjzamora)
- Use get_current_device_resource for intermediate allocations in COLLECT_LIST window code ([#12927](https://github.com/rapidsai/cudf/pull/12927)) [@karthikeyann](https://github.com/karthikeyann)
- Mark dlpack tensor deleter as noexcept to match PyCapsule_Destructor signature. ([#12921](https://github.com/rapidsai/cudf/pull/12921)) [@bdice](https://github.com/bdice)
- Fix conda recipe post-link.sh typo ([#12916](https://github.com/rapidsai/cudf/pull/12916)) [@pentschev](https://github.com/pentschev)
- min_rows and num_rows are swapped in ComputePageSizes declaration in Parquet reader ([#12886](https://github.com/rapidsai/cudf/pull/12886)) [@etseidl](https://github.com/etseidl)
- Expect cupy to now support bool arrays for dlpack. ([#12883](https://github.com/rapidsai/cudf/pull/12883)) [@vyasr](https://github.com/vyasr)
- Use python -m pytest for nightly wheel tests ([#12871](https://github.com/rapidsai/cudf/pull/12871)) [@bdice](https://github.com/bdice)
- Parquet writer column_size() should return a size_t ([#12870](https://github.com/rapidsai/cudf/pull/12870)) [@etseidl](https://github.com/etseidl)
- Fix cudf::hash_partition kernel launch error with decimal128 types ([#12863](https://github.com/rapidsai/cudf/pull/12863)) [@davidwendt](https://github.com/davidwendt)
- Fix an issue with parquet chunked reader undercounting string lengths. ([#12859](https://github.com/rapidsai/cudf/pull/12859)) [@nvdbaranec](https://github.com/nvdbaranec)
- Remove tokenizers pre-install pinning. ([#12854](https://github.com/rapidsai/cudf/pull/12854)) [@vyasr](https://github.com/vyasr)
- Fix parquet `RangeIndex` bug ([#12838](https://github.com/rapidsai/cudf/pull/12838)) [@rjzamora](https://github.com/rjzamora)
- Remove KAFKA_HOST_TEST from compute-sanitizer check ([#12831](https://github.com/rapidsai/cudf/pull/12831)) [@davidwendt](https://github.com/davidwendt)
- Make string methods return a Series with a useful Index ([#12814](https://github.com/rapidsai/cudf/pull/12814)) [@shwina](https://github.com/shwina)
- Tell cudf_kafka to use header-only fmt ([#12796](https://github.com/rapidsai/cudf/pull/12796)) [@vyasr](https://github.com/vyasr)
- Add `GroupBy.dtypes` ([#12783](https://github.com/rapidsai/cudf/pull/12783)) [@galipremsagar](https://github.com/galipremsagar)
- Fix a leak in a test and clarify some test names ([#12781](https://github.com/rapidsai/cudf/pull/12781)) [@revans2](https://github.com/revans2)
- Fix bug in all-null list due to join_list_elements special handling ([#12767](https://github.com/rapidsai/cudf/pull/12767)) [@karthikeyann](https://github.com/karthikeyann)
- Add try/except for expected null-schema error in read_parquet ([#12756](https://github.com/rapidsai/cudf/pull/12756)) [@rjzamora](https://github.com/rjzamora)
- Throw an exception if an unsupported page encoding is detected in Parquet reader ([#12754](https://github.com/rapidsai/cudf/pull/12754)) [@etseidl](https://github.com/etseidl)
- Fix a bug with `num_keys` in `_scatter_by_slice` ([#12749](https://github.com/rapidsai/cudf/pull/12749)) [@thomcom](https://github.com/thomcom)
- Bump pinned rapids wheel deps to 23.4 ([#12735](https://github.com/rapidsai/cudf/pull/12735)) [@sevagh](https://github.com/sevagh)
- Rework logic in cudf::strings::split_record to improve performance ([#12729](https://github.com/rapidsai/cudf/pull/12729)) [@davidwendt](https://github.com/davidwendt)
- Add `always_nullable` flag to Dremel encoding ([#12727](https://github.com/rapidsai/cudf/pull/12727)) [@divyegala](https://github.com/divyegala)
- Fix memcheck read error in compound segmented reduce ([#12722](https://github.com/rapidsai/cudf/pull/12722)) [@davidwendt](https://github.com/davidwendt)
- Fix faulty conditional logic in JIT `GroupBy.apply` ([#12706](https://github.com/rapidsai/cudf/pull/12706)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Produce useful guidance on overflow error in `to_csv` ([#12705](https://github.com/rapidsai/cudf/pull/12705)) [@wence-](https://github.com/wence-)
- Handle parquet list data corner case ([#12698](https://github.com/rapidsai/cudf/pull/12698)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix missing trailing comma in json writer ([#12688](https://github.com/rapidsai/cudf/pull/12688)) [@karthikeyann](https://github.com/karthikeyann)
- Remove child from newCudaAsyncMemoryResource ([#12681](https://github.com/rapidsai/cudf/pull/12681)) [@abellina](https://github.com/abellina)
- Handle bool types in `round` API ([#12670](https://github.com/rapidsai/cudf/pull/12670)) [@galipremsagar](https://github.com/galipremsagar)
- Ensure all of device bitmask is initialized in from_arrow ([#12668](https://github.com/rapidsai/cudf/pull/12668)) [@wence-](https://github.com/wence-)
- Fix `from_arrow` to load a sliced arrow table ([#12665](https://github.com/rapidsai/cudf/pull/12665)) [@galipremsagar](https://github.com/galipremsagar)
- Fix dask-cudf read_parquet bug for multi-file aggregation ([#12663](https://github.com/rapidsai/cudf/pull/12663)) [@rjzamora](https://github.com/rjzamora)
- Fix AllocateLikeTest gtests reading uninitialized null-mask ([#12643](https://github.com/rapidsai/cudf/pull/12643)) [@davidwendt](https://github.com/davidwendt)
- Fix `find_common_dtype` and `values` to handle complex dtypes ([#12537](https://github.com/rapidsai/cudf/pull/12537)) [@galipremsagar](https://github.com/galipremsagar)
- Fix fetching of MultiIndex values when a label is passed ([#12521](https://github.com/rapidsai/cudf/pull/12521)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `Series` comparison vs scalars ([#12519](https://github.com/rapidsai/cudf/pull/12519)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Allow casting from `UDFString` back to `StringView` to call methods in `strings_udf` ([#12363](https://github.com/rapidsai/cudf/pull/12363)) [@brandon-b-miller](https://github.com/brandon-b-miller)

## 📖 Documentation

- Fix `GroupBy.apply` doc examples rendering ([#12994](https://github.com/rapidsai/cudf/pull/12994)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add Sphinx building and S3 uploading for dask-cudf docs ([#12982](https://github.com/rapidsai/cudf/pull/12982)) [@quasiben](https://github.com/quasiben)
- Add developer documentation forbidding default parameters in detail APIs ([#12978](https://github.com/rapidsai/cudf/pull/12978)) [@vyasr](https://github.com/vyasr)
- Add README symlink for dask-cudf. ([#12946](https://github.com/rapidsai/cudf/pull/12946)) [@bdice](https://github.com/bdice)
- Remove return type from `@return` doxygen tags ([#12908](https://github.com/rapidsai/cudf/pull/12908)) [@davidwendt](https://github.com/davidwendt)
- Fix docs build to be `pydata-sphinx-theme=0.13.0` compatible ([#12874](https://github.com/rapidsai/cudf/pull/12874)) [@galipremsagar](https://github.com/galipremsagar)
- Add skeleton API and prose documentation for dask-cudf ([#12725](https://github.com/rapidsai/cudf/pull/12725)) [@wence-](https://github.com/wence-)
- Enable doctests for GroupBy methods ([#12658](https://github.com/rapidsai/cudf/pull/12658)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add comment about CUB patch for SegmentedSortInt.Bool gtest ([#12611](https://github.com/rapidsai/cudf/pull/12611)) [@davidwendt](https://github.com/davidwendt)
## π New Features
- Add JNI method for strings::replace multi variety ([#12979](https://github.com/rapidsai/cudf/pull/12979)) [@NVnavkumar](https://github.com/NVnavkumar)
- Add nunique aggregation support for cudf::segmented_reduce ([#12972](https://github.com/rapidsai/cudf/pull/12972)) [@davidwendt](https://github.com/davidwendt)
- Refactor orc chunked writer ([#12949](https://github.com/rapidsai/cudf/pull/12949)) [@ttnghia](https://github.com/ttnghia)
- Make Parquet writer `nullable` option applicable to single table writes ([#12933](https://github.com/rapidsai/cudf/pull/12933)) [@vuule](https://github.com/vuule)
- Refactor `io::orc::ProtobufWriter` ([#12877](https://github.com/rapidsai/cudf/pull/12877)) [@ttnghia](https://github.com/ttnghia)
- Make timezone table independent from ORC ([#12805](https://github.com/rapidsai/cudf/pull/12805)) [@vuule](https://github.com/vuule)
- Cache JIT `GroupBy.apply` functions ([#12802](https://github.com/rapidsai/cudf/pull/12802)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Implement initial support for avro logical types (#6482) ([#12788](https://github.com/rapidsai/cudf/pull/12788)) [@tpn](https://github.com/tpn)
- Update `tests/column_utilities` to use `experimental::equality` row comparator ([#12777](https://github.com/rapidsai/cudf/pull/12777)) [@divyegala](https://github.com/divyegala)
- Update `distinct/unique_count` to `experimental::row` hasher/comparator ([#12776](https://github.com/rapidsai/cudf/pull/12776)) [@divyegala](https://github.com/divyegala)
- Update `hash_partition` to use `experimental::row::row_hasher` ([#12761](https://github.com/rapidsai/cudf/pull/12761)) [@divyegala](https://github.com/divyegala)
- Update `is_sorted` to use `experimental::row::lexicographic` ([#12752](https://github.com/rapidsai/cudf/pull/12752)) [@divyegala](https://github.com/divyegala)
- Update default data source in cuio reader benchmarks ([#12740](https://github.com/rapidsai/cudf/pull/12740)) [@PointKernel](https://github.com/PointKernel)
- Reenable stream identification library in CI ([#12714](https://github.com/rapidsai/cudf/pull/12714)) [@vyasr](https://github.com/vyasr)
- Add `regex_program` strings splitting java APIs and tests ([#12713](https://github.com/rapidsai/cudf/pull/12713)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Add `regex_program` strings replacing java APIs and tests ([#12701](https://github.com/rapidsai/cudf/pull/12701)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Add `regex_program` strings extract java APIs and tests ([#12699](https://github.com/rapidsai/cudf/pull/12699)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Variable fragment sizes for Parquet writer ([#12685](https://github.com/rapidsai/cudf/pull/12685)) [@etseidl](https://github.com/etseidl)
- Add segmented reduction support for fixed-point types ([#12680](https://github.com/rapidsai/cudf/pull/12680)) [@davidwendt](https://github.com/davidwendt)
- Move `strings_udf` code into cuDF ([#12669](https://github.com/rapidsai/cudf/pull/12669)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add `regex_program` searching APIs and related java classes ([#12666](https://github.com/rapidsai/cudf/pull/12666)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Add logging to libcudf ([#12637](https://github.com/rapidsai/cudf/pull/12637)) [@vuule](https://github.com/vuule)
- Add compound aggregations to cudf::segmented_reduce ([#12573](https://github.com/rapidsai/cudf/pull/12573)) [@davidwendt](https://github.com/davidwendt)
- Convert `rank` to use experimental row comparators ([#12481](https://github.com/rapidsai/cudf/pull/12481)) [@divyegala](https://github.com/divyegala)
- Use rapids-cmake parallel testing feature ([#12451](https://github.com/rapidsai/cudf/pull/12451)) [@robertmaynard](https://github.com/robertmaynard)
- Enable detection of undesired stream usage ([#12089](https://github.com/rapidsai/cudf/pull/12089)) [@vyasr](https://github.com/vyasr)
## π οΈ Improvements
- Pin `dask` and `distributed` for release ([#13070](https://github.com/rapidsai/cudf/pull/13070)) [@galipremsagar](https://github.com/galipremsagar)
- Pin cupy in wheel tests to supported versions ([#13041](https://github.com/rapidsai/cudf/pull/13041)) [@vyasr](https://github.com/vyasr)
- Pin numba version ([#13001](https://github.com/rapidsai/cudf/pull/13001)) [@vyasr](https://github.com/vyasr)
- Rework gtests SequenceTest to remove using namespace cudf ([#12985](https://github.com/rapidsai/cudf/pull/12985)) [@davidwendt](https://github.com/davidwendt)
- Stop setting package version attribute in wheels ([#12977](https://github.com/rapidsai/cudf/pull/12977)) [@vyasr](https://github.com/vyasr)
- Move detail reduction functions to cudf::reduction::detail namespace ([#12971](https://github.com/rapidsai/cudf/pull/12971)) [@davidwendt](https://github.com/davidwendt)
- Remove default detail mrs: part7 ([#12970](https://github.com/rapidsai/cudf/pull/12970)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part6 ([#12969](https://github.com/rapidsai/cudf/pull/12969)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part5 ([#12968](https://github.com/rapidsai/cudf/pull/12968)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part4 ([#12967](https://github.com/rapidsai/cudf/pull/12967)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part3 ([#12966](https://github.com/rapidsai/cudf/pull/12966)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part2 ([#12965](https://github.com/rapidsai/cudf/pull/12965)) [@vyasr](https://github.com/vyasr)
- Remove default detail mrs: part1 ([#12964](https://github.com/rapidsai/cudf/pull/12964)) [@vyasr](https://github.com/vyasr)
- Add `force_nullable_schema` parameter to Parquet writer. ([#12952](https://github.com/rapidsai/cudf/pull/12952)) [@galipremsagar](https://github.com/galipremsagar)
- Declare a different name for nan_equality.UNEQUAL to prevent Cython warnings. ([#12947](https://github.com/rapidsai/cudf/pull/12947)) [@bdice](https://github.com/bdice)
- Remove remaining default stream parameters ([#12943](https://github.com/rapidsai/cudf/pull/12943)) [@vyasr](https://github.com/vyasr)
- Fix cudf::segmented_reduce gtest for ANY aggregation ([#12940](https://github.com/rapidsai/cudf/pull/12940)) [@davidwendt](https://github.com/davidwendt)
- Implement `groupby.head` and `groupby.tail` ([#12939](https://github.com/rapidsai/cudf/pull/12939)) [@wence-](https://github.com/wence-)
- Fix libcudf gtests to pass null-count=0 for empty validity masks ([#12923](https://github.com/rapidsai/cudf/pull/12923)) [@davidwendt](https://github.com/davidwendt)
- Migrate parquet encoding to use experimental row operators ([#12918](https://github.com/rapidsai/cudf/pull/12918)) [@PointKernel](https://github.com/PointKernel)
- Fix benchmarks coded in namespace cudf and using namespace cudf ([#12915](https://github.com/rapidsai/cudf/pull/12915)) [@karthikeyann](https://github.com/karthikeyann)
- Fix io/text gtests coded in namespace cudf::test ([#12914](https://github.com/rapidsai/cudf/pull/12914)) [@karthikeyann](https://github.com/karthikeyann)
- Pass `SCCACHE_S3_USE_SSL` to conda builds ([#12910](https://github.com/rapidsai/cudf/pull/12910)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix FST, JSON gtests & benchmarks coded in namespace cudf::test ([#12907](https://github.com/rapidsai/cudf/pull/12907)) [@karthikeyann](https://github.com/karthikeyann)
- Generate pyproject dependencies using dfg ([#12906](https://github.com/rapidsai/cudf/pull/12906)) [@vyasr](https://github.com/vyasr)
- Update libcudf counting functions to specify cudf::size_type ([#12904](https://github.com/rapidsai/cudf/pull/12904)) [@davidwendt](https://github.com/davidwendt)
- Fix `moto` env vars & pass `AWS_SESSION_TOKEN` to conda builds ([#12902](https://github.com/rapidsai/cudf/pull/12902)) [@ajschmidt8](https://github.com/ajschmidt8)
- Rewrite CSV writer benchmark with nvbench ([#12901](https://github.com/rapidsai/cudf/pull/12901)) [@PointKernel](https://github.com/PointKernel)
- Rework some code logic to reduce iterator and comparator inlining to improve compile time ([#12900](https://github.com/rapidsai/cudf/pull/12900)) [@davidwendt](https://github.com/davidwendt)
- Deprecate `line_terminator` in favor of `lineterminator` in `to_csv` ([#12896](https://github.com/rapidsai/cudf/pull/12896)) [@wence-](https://github.com/wence-)
- Add `stream` and `mr` parameters for `structs::detail::flatten_nested_columns` ([#12892](https://github.com/rapidsai/cudf/pull/12892)) [@ttnghia](https://github.com/ttnghia)
- Deprecate libcudf regex APIs accepting pattern strings directly ([#12891](https://github.com/rapidsai/cudf/pull/12891)) [@davidwendt](https://github.com/davidwendt)
- Remove default parameters from detail headers in include ([#12888](https://github.com/rapidsai/cudf/pull/12888)) [@vyasr](https://github.com/vyasr)
- Update minimum `pandas` and `numpy` pinnings ([#12887](https://github.com/rapidsai/cudf/pull/12887)) [@galipremsagar](https://github.com/galipremsagar)
- Implement `groupby.sample` ([#12882](https://github.com/rapidsai/cudf/pull/12882)) [@wence-](https://github.com/wence-)
- Update JNI build ENV default to gcc 11 ([#12881](https://github.com/rapidsai/cudf/pull/12881)) [@pxLi](https://github.com/pxLi)
- Change return type of `cudf::structs::detail::flatten_nested_columns` to smart pointer ([#12878](https://github.com/rapidsai/cudf/pull/12878)) [@ttnghia](https://github.com/ttnghia)
- Fix passing seed parameter to MurmurHash3_32 in cudf::hash() function ([#12875](https://github.com/rapidsai/cudf/pull/12875)) [@davidwendt](https://github.com/davidwendt)
- Remove manual artifact upload step in CI ([#12869](https://github.com/rapidsai/cudf/pull/12869)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update to GCC 11 ([#12868](https://github.com/rapidsai/cudf/pull/12868)) [@bdice](https://github.com/bdice)
- Fix null hive-partition behavior in dask-cudf parquet ([#12866](https://github.com/rapidsai/cudf/pull/12866)) [@rjzamora](https://github.com/rjzamora)
- Update to protobuf>=4.21.6,<4.22. ([#12864](https://github.com/rapidsai/cudf/pull/12864)) [@bdice](https://github.com/bdice)
- Update RMM allocators ([#12861](https://github.com/rapidsai/cudf/pull/12861)) [@pentschev](https://github.com/pentschev)
- Improve performance for replace-multi for long strings ([#12858](https://github.com/rapidsai/cudf/pull/12858)) [@davidwendt](https://github.com/davidwendt)
- Drop Python 3.7 handling for pickle protocol 4 ([#12857](https://github.com/rapidsai/cudf/pull/12857)) [@jakirkham](https://github.com/jakirkham)
- Migrate as much as possible to pyproject.toml ([#12850](https://github.com/rapidsai/cudf/pull/12850)) [@vyasr](https://github.com/vyasr)
- Enable nbqa pre-commit hooks for isort and black. ([#12848](https://github.com/rapidsai/cudf/pull/12848)) [@bdice](https://github.com/bdice)
- Setting a threshold for KvikIO IO ([#12841](https://github.com/rapidsai/cudf/pull/12841)) [@madsbk](https://github.com/madsbk)
- Update datasets download URL ([#12840](https://github.com/rapidsai/cudf/pull/12840)) [@jjacobelli](https://github.com/jjacobelli)
- Make docs builds less verbose ([#12836](https://github.com/rapidsai/cudf/pull/12836)) [@AyodeAwe](https://github.com/AyodeAwe)
- Consolidate linter configs into pyproject.toml ([#12834](https://github.com/rapidsai/cudf/pull/12834)) [@vyasr](https://github.com/vyasr)
- Deprecate `names` & `dtype` in `Index.copy` ([#12825](https://github.com/rapidsai/cudf/pull/12825)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `inplace` parameters in categorical methods ([#12824](https://github.com/rapidsai/cudf/pull/12824)) [@galipremsagar](https://github.com/galipremsagar)
- Add optional text file support to ninja-log utility ([#12823](https://github.com/rapidsai/cudf/pull/12823)) [@davidwendt](https://github.com/davidwendt)
- Deprecate `Index.is_*` methods ([#12820](https://github.com/rapidsai/cudf/pull/12820)) [@galipremsagar](https://github.com/galipremsagar)
- Add dfg as a pre-commit hook ([#12819](https://github.com/rapidsai/cudf/pull/12819)) [@vyasr](https://github.com/vyasr)
- Deprecate `datetime_is_numeric` from `describe` ([#12818](https://github.com/rapidsai/cudf/pull/12818)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `na_sentinel` in `factorize` ([#12817](https://github.com/rapidsai/cudf/pull/12817)) [@galipremsagar](https://github.com/galipremsagar)
- Shuffling read into a sub function in parquet read ([#12809](https://github.com/rapidsai/cudf/pull/12809)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fixing parquet coalescing of reads ([#12808](https://github.com/rapidsai/cudf/pull/12808)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- CI: Remove specification of manual stage for check_style.sh script. ([#12803](https://github.com/rapidsai/cudf/pull/12803)) [@csadorf](https://github.com/csadorf)
- Add compute-sanitizer github workflow action to nightly tests ([#12800](https://github.com/rapidsai/cudf/pull/12800)) [@davidwendt](https://github.com/davidwendt)
- Enable groupby std and variance aggregation types in libcudf Debug build ([#12799](https://github.com/rapidsai/cudf/pull/12799)) [@davidwendt](https://github.com/davidwendt)
- Expose seed argument to hash_values ([#12795](https://github.com/rapidsai/cudf/pull/12795)) [@ayushdg](https://github.com/ayushdg)
- Fix groupby gtests coded in namespace cudf::test ([#12784](https://github.com/rapidsai/cudf/pull/12784)) [@davidwendt](https://github.com/davidwendt)
- Improve performance for cudf::strings::count_characters for long strings ([#12779](https://github.com/rapidsai/cudf/pull/12779)) [@davidwendt](https://github.com/davidwendt)
- Deallocate encoded data in ORC writer immediately after compression ([#12770](https://github.com/rapidsai/cudf/pull/12770)) [@vuule](https://github.com/vuule)
- Stop force pulling fmt in nvbench. ([#12768](https://github.com/rapidsai/cudf/pull/12768)) [@vyasr](https://github.com/vyasr)
- Remove now redundant cuda initialization ([#12758](https://github.com/rapidsai/cudf/pull/12758)) [@vyasr](https://github.com/vyasr)
- Adds JSON reader, writer io benchmark ([#12753](https://github.com/rapidsai/cudf/pull/12753)) [@karthikeyann](https://github.com/karthikeyann)
- Use test paths relative to package directory. ([#12751](https://github.com/rapidsai/cudf/pull/12751)) [@bdice](https://github.com/bdice)
- Add build metrics report as artifact to cpp-build workflow ([#12750](https://github.com/rapidsai/cudf/pull/12750)) [@davidwendt](https://github.com/davidwendt)
- Add JNI methods for detecting and purging non-empty nulls from LIST and STRUCT ([#12742](https://github.com/rapidsai/cudf/pull/12742)) [@razajafri](https://github.com/razajafri)
- Stop using versioneer to manage versions ([#12741](https://github.com/rapidsai/cudf/pull/12741)) [@vyasr](https://github.com/vyasr)
- Reduce error handling verbosity in CI tests scripts ([#12738](https://github.com/rapidsai/cudf/pull/12738)) [@AjayThorve](https://github.com/AjayThorve)
- Reduce the number of test cases in multibyte_split benchmark ([#12737](https://github.com/rapidsai/cudf/pull/12737)) [@PointKernel](https://github.com/PointKernel)
- Update shared workflow branches ([#12733](https://github.com/rapidsai/cudf/pull/12733)) [@ajschmidt8](https://github.com/ajschmidt8)
- JNI switches to nested JSON reader ([#12732](https://github.com/rapidsai/cudf/pull/12732)) [@res-life](https://github.com/res-life)
- Changing `cudf::io::source_info` to use `cudf::host_span<std::byte>` in a non-breaking form ([#12730](https://github.com/rapidsai/cudf/pull/12730)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add nvbench environment class for initializing RMM in benchmarks ([#12728](https://github.com/rapidsai/cudf/pull/12728)) [@davidwendt](https://github.com/davidwendt)
- Split C++ and Python build dependencies into separate lists. ([#12724](https://github.com/rapidsai/cudf/pull/12724)) [@bdice](https://github.com/bdice)
- Add build dependencies to Java tests. ([#12723](https://github.com/rapidsai/cudf/pull/12723)) [@bdice](https://github.com/bdice)
- Allow setting the seed argument for hash partition ([#12715](https://github.com/rapidsai/cudf/pull/12715)) [@firestarman](https://github.com/firestarman)
- Remove gpuCI scripts. ([#12712](https://github.com/rapidsai/cudf/pull/12712)) [@bdice](https://github.com/bdice)
- Unpin `dask` and `distributed` for development ([#12710](https://github.com/rapidsai/cudf/pull/12710)) [@galipremsagar](https://github.com/galipremsagar)
- `partition_by_hash()`: use `_split()` ([#12704](https://github.com/rapidsai/cudf/pull/12704)) [@madsbk](https://github.com/madsbk)
- Remove DataFrame.quantiles from docs. ([#12684](https://github.com/rapidsai/cudf/pull/12684)) [@bdice](https://github.com/bdice)
- Fast path for `experimental::row::equality` ([#12676](https://github.com/rapidsai/cudf/pull/12676)) [@divyegala](https://github.com/divyegala)
- Move date to build string in `conda` recipe ([#12661](https://github.com/rapidsai/cudf/pull/12661)) [@ajschmidt8](https://github.com/ajschmidt8)
- Refactor reduction logic for fixed-point types ([#12652](https://github.com/rapidsai/cudf/pull/12652)) [@davidwendt](https://github.com/davidwendt)
- Pay off some JNI RMM API tech debt ([#12632](https://github.com/rapidsai/cudf/pull/12632)) [@revans2](https://github.com/revans2)
- Merge `copy-on-write` feature branch into `branch-23.04` ([#12619](https://github.com/rapidsai/cudf/pull/12619)) [@galipremsagar](https://github.com/galipremsagar)
- Remove cudf::strings::repeat_strings_output_sizes and optional parameter from cudf::strings::repeat_strings ([#12609](https://github.com/rapidsai/cudf/pull/12609)) [@davidwendt](https://github.com/davidwendt)
- Pin cuda-nvrtc. ([#12606](https://github.com/rapidsai/cudf/pull/12606)) [@bdice](https://github.com/bdice)
- Remove cudf::test::print calls from libcudf gtests ([#12604](https://github.com/rapidsai/cudf/pull/12604)) [@davidwendt](https://github.com/davidwendt)
- Init JNI version 23.04.0-SNAPSHOT ([#12599](https://github.com/rapidsai/cudf/pull/12599)) [@pxLi](https://github.com/pxLi)
- Add performance benchmarks to user facing docs ([#12595](https://github.com/rapidsai/cudf/pull/12595)) [@galipremsagar](https://github.com/galipremsagar)
- Add docs build job ([#12592](https://github.com/rapidsai/cudf/pull/12592)) [@AyodeAwe](https://github.com/AyodeAwe)
- Replace message parsing with throwing more specific exceptions ([#12426](https://github.com/rapidsai/cudf/pull/12426)) [@vyasr](https://github.com/vyasr)
- Support conversion to/from cudf in dask.dataframe.core.to_backend ([#12380](https://github.com/rapidsai/cudf/pull/12380)) [@rjzamora](https://github.com/rjzamora)
# cuDF 23.02.00 (9 Feb 2023)
## π¨ Breaking Changes
- Pin `dask` and `distributed` for release ([#12695](https://github.com/rapidsai/cudf/pull/12695)) [@galipremsagar](https://github.com/galipremsagar)
- Change ways to access `ptr` in `Buffer` ([#12587](https://github.com/rapidsai/cudf/pull/12587)) [@galipremsagar](https://github.com/galipremsagar)
- Remove column names ([#12578](https://github.com/rapidsai/cudf/pull/12578)) [@vuule](https://github.com/vuule)
- Default `cudf::io::read_json` to nested JSON parser ([#12544](https://github.com/rapidsai/cudf/pull/12544)) [@vuule](https://github.com/vuule)
- Switch `engine=cudf` to the new `JSON` reader ([#12509](https://github.com/rapidsai/cudf/pull/12509)) [@galipremsagar](https://github.com/galipremsagar)
- Add trailing comma support for nested JSON reader ([#12448](https://github.com/rapidsai/cudf/pull/12448)) [@karthikeyann](https://github.com/karthikeyann)
- Upgrade to `arrow-10.0.1` ([#12327](https://github.com/rapidsai/cudf/pull/12327)) [@galipremsagar](https://github.com/galipremsagar)
- Fail loudly to avoid data corruption with unsupported input in `read_orc` ([#12325](https://github.com/rapidsai/cudf/pull/12325)) [@vuule](https://github.com/vuule)
- CSV, JSON reader to infer integer column with nulls as int64 instead of float64 ([#12309](https://github.com/rapidsai/cudf/pull/12309)) [@karthikeyann](https://github.com/karthikeyann)
- Remove deprecated code for 23.02 ([#12281](https://github.com/rapidsai/cudf/pull/12281)) [@vyasr](https://github.com/vyasr)
- Null element for parsing error in numeric types in JSON, CSV reader ([#12272](https://github.com/rapidsai/cudf/pull/12272)) [@karthikeyann](https://github.com/karthikeyann)
- Purge non-empty nulls for `superimpose_nulls` and `push_down_nulls` ([#12239](https://github.com/rapidsai/cudf/pull/12239)) [@ttnghia](https://github.com/ttnghia)
- Rename `cudf::structs::detail::superimpose_parent_nulls` APIs ([#12230](https://github.com/rapidsai/cudf/pull/12230)) [@ttnghia](https://github.com/ttnghia)
- Remove JIT type names, refactor id_to_type. ([#12158](https://github.com/rapidsai/cudf/pull/12158)) [@bdice](https://github.com/bdice)
- Floor division uses integer division for integral arguments ([#12131](https://github.com/rapidsai/cudf/pull/12131)) [@wence-](https://github.com/wence-)
## π Bug Fixes
- Fix a mask data corruption in UDF ([#12647](https://github.com/rapidsai/cudf/pull/12647)) [@galipremsagar](https://github.com/galipremsagar)
- pre-commit: Update isort version to 5.12.0 ([#12645](https://github.com/rapidsai/cudf/pull/12645)) [@wence-](https://github.com/wence-)
- tests: Skip cuInit tests if cuda-gdb is not found or not working ([#12644](https://github.com/rapidsai/cudf/pull/12644)) [@wence-](https://github.com/wence-)
- Revert regex program java APIs and tests ([#12639](https://github.com/rapidsai/cudf/pull/12639)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Fix leaks in ColumnVectorTest ([#12625](https://github.com/rapidsai/cudf/pull/12625)) [@jlowe](https://github.com/jlowe)
- Handle when spillable buffers own each other ([#12607](https://github.com/rapidsai/cudf/pull/12607)) [@madsbk](https://github.com/madsbk)
- Fix incorrect null counts for sliced columns in JCudfSerialization ([#12589](https://github.com/rapidsai/cudf/pull/12589)) [@jlowe](https://github.com/jlowe)
- lists: Transfer dtypes correctly through list.get ([#12586](https://github.com/rapidsai/cudf/pull/12586)) [@wence-](https://github.com/wence-)
- timedelta: Don't go via float intermediates for floordiv ([#12585](https://github.com/rapidsai/cudf/pull/12585)) [@wence-](https://github.com/wence-)
- Fixing BUG, `get_next_chunk()` should use the blocking function `device_read()` ([#12584](https://github.com/rapidsai/cudf/pull/12584)) [@madsbk](https://github.com/madsbk)
- Make JNI QuoteStyle accessible outside ai.rapids.cudf ([#12572](https://github.com/rapidsai/cudf/pull/12572)) [@mythrocks](https://github.com/mythrocks)
- `partition_by_hash()`: support index ([#12554](https://github.com/rapidsai/cudf/pull/12554)) [@madsbk](https://github.com/madsbk)
- Mixed Join benchmark bug due to wrong conditional column ([#12553](https://github.com/rapidsai/cudf/pull/12553)) [@divyegala](https://github.com/divyegala)
- Update List Lexicographical Comparator ([#12538](https://github.com/rapidsai/cudf/pull/12538)) [@divyegala](https://github.com/divyegala)
- Dynamically read PTX version ([#12534](https://github.com/rapidsai/cudf/pull/12534)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- build.sh switch to use `RAPIDS` magic value ([#12525](https://github.com/rapidsai/cudf/pull/12525)) [@robertmaynard](https://github.com/robertmaynard)
- Loosen runtime arrow pinning ([#12522](https://github.com/rapidsai/cudf/pull/12522)) [@vyasr](https://github.com/vyasr)
- Enable metadata transfer for complex types in transpose ([#12491](https://github.com/rapidsai/cudf/pull/12491)) [@galipremsagar](https://github.com/galipremsagar)
- Fix issues with parquet chunked reader ([#12488](https://github.com/rapidsai/cudf/pull/12488)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix missing metadata transfer in concat for `ListColumn` ([#12487](https://github.com/rapidsai/cudf/pull/12487)) [@galipremsagar](https://github.com/galipremsagar)
- Rename libcudf substring source files to slice ([#12484](https://github.com/rapidsai/cudf/pull/12484)) [@davidwendt](https://github.com/davidwendt)
- Fix compile issue with arrow 10 ([#12465](https://github.com/rapidsai/cudf/pull/12465)) [@ttnghia](https://github.com/ttnghia)
- Fix List offsets bug in mixed type list column in nested JSON reader ([#12447](https://github.com/rapidsai/cudf/pull/12447)) [@karthikeyann](https://github.com/karthikeyann)
- Fix xfail incompatibilities ([#12423](https://github.com/rapidsai/cudf/pull/12423)) [@vyasr](https://github.com/vyasr)
- Fix bug in Parquet column index encoding ([#12404](https://github.com/rapidsai/cudf/pull/12404)) [@etseidl](https://github.com/etseidl)
- When building Arrow shared look for a shared OpenSSL ([#12396](https://github.com/rapidsai/cudf/pull/12396)) [@robertmaynard](https://github.com/robertmaynard)
- Fix get_json_object to return empty column on empty input ([#12384](https://github.com/rapidsai/cudf/pull/12384)) [@davidwendt](https://github.com/davidwendt)
- Pin arrow 9 in testing dependencies to prevent conda solve issues ([#12377](https://github.com/rapidsai/cudf/pull/12377)) [@vyasr](https://github.com/vyasr)
- Fix reductions any/all return value for empty input ([#12374](https://github.com/rapidsai/cudf/pull/12374)) [@davidwendt](https://github.com/davidwendt)
- Fix debug compile errors in parquet.hpp ([#12372](https://github.com/rapidsai/cudf/pull/12372)) [@davidwendt](https://github.com/davidwendt)
- Purge non-empty nulls in `cudf::make_lists_column` ([#12370](https://github.com/rapidsai/cudf/pull/12370)) [@ttnghia](https://github.com/ttnghia)
- Use correct memory resource in io::make_column ([#12364](https://github.com/rapidsai/cudf/pull/12364)) [@vyasr](https://github.com/vyasr)
- Add code to detect possible malformed page data in parquet files. ([#12360](https://github.com/rapidsai/cudf/pull/12360)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fail loudly to avoid data corruption with unsupported input in `read_orc` ([#12325](https://github.com/rapidsai/cudf/pull/12325)) [@vuule](https://github.com/vuule)
- Fix NumericPairIteratorTest for float values ([#12306](https://github.com/rapidsai/cudf/pull/12306)) [@davidwendt](https://github.com/davidwendt)
- Fixes memory allocation in nested JSON tokenizer ([#12300](https://github.com/rapidsai/cudf/pull/12300)) [@elstehle](https://github.com/elstehle)
- Reconstruct dtypes correctly for list aggs of struct columns ([#12290](https://github.com/rapidsai/cudf/pull/12290)) [@wence-](https://github.com/wence-)
- Fix regex \A and \Z to strictly match string begin/end ([#12282](https://github.com/rapidsai/cudf/pull/12282)) [@davidwendt](https://github.com/davidwendt)
- Fix compile issue in `json_chunked_reader.cpp` ([#12280](https://github.com/rapidsai/cudf/pull/12280)) [@ttnghia](https://github.com/ttnghia)
- Change reductions any/all to return valid values for empty input ([#12279](https://github.com/rapidsai/cudf/pull/12279)) [@davidwendt](https://github.com/davidwendt)
- Only exclude join keys that are indices from key columns ([#12271](https://github.com/rapidsai/cudf/pull/12271)) [@wence-](https://github.com/wence-)
- Fix spill to device limit ([#12252](https://github.com/rapidsai/cudf/pull/12252)) [@madsbk](https://github.com/madsbk)
- Correct behaviour of sort in `concat` for singleton concatenations ([#12247](https://github.com/rapidsai/cudf/pull/12247)) [@wence-](https://github.com/wence-)
- Purge non-empty nulls for `superimpose_nulls` and `push_down_nulls` ([#12239](https://github.com/rapidsai/cudf/pull/12239)) [@ttnghia](https://github.com/ttnghia)
- Patch CUB DeviceSegmentedSort and remove workaround ([#12234](https://github.com/rapidsai/cudf/pull/12234)) [@davidwendt](https://github.com/davidwendt)
- Fix memory leak in udf_string::assign(&&) function ([#12206](https://github.com/rapidsai/cudf/pull/12206)) [@davidwendt](https://github.com/davidwendt)
- Workaround thrust-copy-if limit in json get_tree_representation ([#12190](https://github.com/rapidsai/cudf/pull/12190)) [@davidwendt](https://github.com/davidwendt)
- Fix page size calculation in Parquet writer ([#12182](https://github.com/rapidsai/cudf/pull/12182)) [@etseidl](https://github.com/etseidl)
- Add cudf::detail::sizes_to_offsets_iterator to allow checking overflow in offsets ([#12180](https://github.com/rapidsai/cudf/pull/12180)) [@davidwendt](https://github.com/davidwendt)
- Workaround thrust-copy-if limit in wordpiece-tokenizer ([#12168](https://github.com/rapidsai/cudf/pull/12168)) [@davidwendt](https://github.com/davidwendt)
- Floor division uses integer division for integral arguments ([#12131](https://github.com/rapidsai/cudf/pull/12131)) [@wence-](https://github.com/wence-)
## π Documentation
- Fix link to NVTX ([#12598](https://github.com/rapidsai/cudf/pull/12598)) [@sameerz](https://github.com/sameerz)
- Include missing groupby functions in documentation ([#12580](https://github.com/rapidsai/cudf/pull/12580)) [@quasiben](https://github.com/quasiben)
- Fix documentation author ([#12527](https://github.com/rapidsai/cudf/pull/12527)) [@bdice](https://github.com/bdice)
- Update libcudf reduction docs for casting output types ([#12526](https://github.com/rapidsai/cudf/pull/12526)) [@davidwendt](https://github.com/davidwendt)
- Add JSON reader page in user guide ([#12499](https://github.com/rapidsai/cudf/pull/12499)) [@GregoryKimball](https://github.com/GregoryKimball)
- Link unsupported iteration API docstrings ([#12482](https://github.com/rapidsai/cudf/pull/12482)) [@galipremsagar](https://github.com/galipremsagar)
- `strings_udf` doc update ([#12469](https://github.com/rapidsai/cudf/pull/12469)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Update cudf_assert docs with correct NDEBUG behavior ([#12464](https://github.com/rapidsai/cudf/pull/12464)) [@robertmaynard](https://github.com/robertmaynard)
- Update pre-commit hooks guide ([#12395](https://github.com/rapidsai/cudf/pull/12395)) [@bdice](https://github.com/bdice)
- Update test docs to not use detail comparison utilities ([#12332](https://github.com/rapidsai/cudf/pull/12332)) [@PointKernel](https://github.com/PointKernel)
- Fix doxygen description for regex_program::compute_working_memory_size ([#12329](https://github.com/rapidsai/cudf/pull/12329)) [@davidwendt](https://github.com/davidwendt)
- Add eval to docs. ([#12322](https://github.com/rapidsai/cudf/pull/12322)) [@vyasr](https://github.com/vyasr)
- Turn on xfail_strict=true ([#12244](https://github.com/rapidsai/cudf/pull/12244)) [@wence-](https://github.com/wence-)
- Update 10 minutes to cuDF ([#12114](https://github.com/rapidsai/cudf/pull/12114)) [@wence-](https://github.com/wence-)
## π New Features
- Use kvikIO as the default IO backend ([#12574](https://github.com/rapidsai/cudf/pull/12574)) [@vuule](https://github.com/vuule)
- Use `has_nonempty_nulls` instead of `may_contain_non_empty_nulls` in `superimpose_nulls` and `push_down_nulls` ([#12560](https://github.com/rapidsai/cudf/pull/12560)) [@ttnghia](https://github.com/ttnghia)
- Add strings methods removeprefix and removesuffix ([#12557](https://github.com/rapidsai/cudf/pull/12557)) [@davidwendt](https://github.com/davidwendt)
- Add `regex_program` java APIs and unit tests ([#12548](https://github.com/rapidsai/cudf/pull/12548)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Default `cudf::io::read_json` to nested JSON parser ([#12544](https://github.com/rapidsai/cudf/pull/12544)) [@vuule](https://github.com/vuule)
- Make string quoting optional on CSV write ([#12539](https://github.com/rapidsai/cudf/pull/12539)) [@mythrocks](https://github.com/mythrocks)
- Use new nvCOMP API to optimize the compression temp memory size ([#12533](https://github.com/rapidsai/cudf/pull/12533)) [@vuule](https://github.com/vuule)
- Support "values" orient (array of arrays) in Nested JSON reader ([#12498](https://github.com/rapidsai/cudf/pull/12498)) [@karthikeyann](https://github.com/karthikeyann)
- `one_hot_encode` to use experimental row comparators ([#12478](https://github.com/rapidsai/cudf/pull/12478)) [@divyegala](https://github.com/divyegala)
- Support %W and %w format specifiers in cudf::strings::to_timestamps ([#12475](https://github.com/rapidsai/cudf/pull/12475)) [@davidwendt](https://github.com/davidwendt)
- Add JSON Writer ([#12474](https://github.com/rapidsai/cudf/pull/12474)) [@karthikeyann](https://github.com/karthikeyann)
- Refactor `thrust_copy_if` into `cudf::detail::copy_if_safe` ([#12455](https://github.com/rapidsai/cudf/pull/12455)) [@ttnghia](https://github.com/ttnghia)
- Add trailing comma support for nested JSON reader ([#12448](https://github.com/rapidsai/cudf/pull/12448)) [@karthikeyann](https://github.com/karthikeyann)
- Extract `tokenize_json.hpp` detail header from `src/io/json/nested_json.hpp` ([#12432](https://github.com/rapidsai/cudf/pull/12432)) [@ttnghia](https://github.com/ttnghia)
- JNI bindings to write CSV ([#12425](https://github.com/rapidsai/cudf/pull/12425)) [@mythrocks](https://github.com/mythrocks)
- Nested JSON depth benchmark ([#12371](https://github.com/rapidsai/cudf/pull/12371)) [@karthikeyann](https://github.com/karthikeyann)
- Implement `lists::reverse` ([#12336](https://github.com/rapidsai/cudf/pull/12336)) [@ttnghia](https://github.com/ttnghia)
- Use `device_read` in experimental `read_json` ([#12314](https://github.com/rapidsai/cudf/pull/12314)) [@vuule](https://github.com/vuule)
- Implement JNI for `strings::reverse` ([#12283](https://github.com/rapidsai/cudf/pull/12283)) [@ttnghia](https://github.com/ttnghia)
- Null element for parsing error in numeric types in JSON, CSV reader ([#12272](https://github.com/rapidsai/cudf/pull/12272)) [@karthikeyann](https://github.com/karthikeyann)
- Add cudf::strings::like function with multiple patterns ([#12269](https://github.com/rapidsai/cudf/pull/12269)) [@davidwendt](https://github.com/davidwendt)
- Add environment variable to control host memory allocation in `hostdevice_vector` ([#12251](https://github.com/rapidsai/cudf/pull/12251)) [@vuule](https://github.com/vuule)
- Add cudf::strings::reverse function ([#12227](https://github.com/rapidsai/cudf/pull/12227)) [@davidwendt](https://github.com/davidwendt)
- Selectively use dictionary encoding in Parquet writer ([#12211](https://github.com/rapidsai/cudf/pull/12211)) [@etseidl](https://github.com/etseidl)
- Support `replace` in `strings_udf` ([#12207](https://github.com/rapidsai/cudf/pull/12207)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add support to read binary encoded decimals in parquet ([#12205](https://github.com/rapidsai/cudf/pull/12205)) [@PointKernel](https://github.com/PointKernel)
- Support regex EOL where the string ends with a new-line character ([#12181](https://github.com/rapidsai/cudf/pull/12181)) [@davidwendt](https://github.com/davidwendt)
- Updating `stream_compaction/unique` to use new row comparators ([#12159](https://github.com/rapidsai/cudf/pull/12159)) [@divyegala](https://github.com/divyegala)
- Add device buffer datasource ([#12024](https://github.com/rapidsai/cudf/pull/12024)) [@PointKernel](https://github.com/PointKernel)
- Implement groupby apply with JIT ([#11452](https://github.com/rapidsai/cudf/pull/11452)) [@bwyogatama](https://github.com/bwyogatama)
## π οΈ Improvements
- Update shared workflow branches ([#12696](https://github.com/rapidsai/cudf/pull/12696)) [@ajschmidt8](https://github.com/ajschmidt8)
- Pin `dask` and `distributed` for release ([#12695](https://github.com/rapidsai/cudf/pull/12695)) [@galipremsagar](https://github.com/galipremsagar)
- Don't upload `libcudf-example` to Anaconda.org ([#12671](https://github.com/rapidsai/cudf/pull/12671)) [@ajschmidt8](https://github.com/ajschmidt8)
- Pin wheel dependencies to same RAPIDS release ([#12659](https://github.com/rapidsai/cudf/pull/12659)) [@sevagh](https://github.com/sevagh)
- Use CTK 118/cp310 branch of wheel workflows ([#12602](https://github.com/rapidsai/cudf/pull/12602)) [@sevagh](https://github.com/sevagh)
- Change ways to access `ptr` in `Buffer` ([#12587](https://github.com/rapidsai/cudf/pull/12587)) [@galipremsagar](https://github.com/galipremsagar)
- Version a parquet writer xfail ([#12579](https://github.com/rapidsai/cudf/pull/12579)) [@galipremsagar](https://github.com/galipremsagar)
- Remove column names ([#12578](https://github.com/rapidsai/cudf/pull/12578)) [@vuule](https://github.com/vuule)
- Parquet reader optimization to address V100 regression. ([#12577](https://github.com/rapidsai/cudf/pull/12577)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add support for `category` dtypes in CSV reader ([#12571](https://github.com/rapidsai/cudf/pull/12571)) [@galipremsagar](https://github.com/galipremsagar)
- Remove `spill_lock` parameter from `SpillableBuffer.get_ptr()` ([#12564](https://github.com/rapidsai/cudf/pull/12564)) [@madsbk](https://github.com/madsbk)
- Optimize `cudf::make_lists_column` ([#12547](https://github.com/rapidsai/cudf/pull/12547)) [@ttnghia](https://github.com/ttnghia)
- Remove `cudf::strings::repeat_strings_output_sizes` from Java and JNI ([#12546](https://github.com/rapidsai/cudf/pull/12546)) [@ttnghia](https://github.com/ttnghia)
- Test that cuInit is not called when RAPIDS_NO_INITIALIZE is set ([#12545](https://github.com/rapidsai/cudf/pull/12545)) [@wence-](https://github.com/wence-)
- Rework repeat_strings to use sizes-to-offsets utility ([#12543](https://github.com/rapidsai/cudf/pull/12543)) [@davidwendt](https://github.com/davidwendt)
- Replace exclusive_scan with sizes_to_offsets in cudf::lists::sequences ([#12541](https://github.com/rapidsai/cudf/pull/12541)) [@davidwendt](https://github.com/davidwendt)
- Rework nvtext::ngrams_tokenize to use sizes-to-offsets utility ([#12540](https://github.com/rapidsai/cudf/pull/12540)) [@davidwendt](https://github.com/davidwendt)
- Fix binary-ops gtests coded in namespace cudf::test ([#12536](https://github.com/rapidsai/cudf/pull/12536)) [@davidwendt](https://github.com/davidwendt)
- More `@acquire_spill_lock()` and `as_buffer(..., exposed=False)` ([#12535](https://github.com/rapidsai/cudf/pull/12535)) [@madsbk](https://github.com/madsbk)
- Guard CUDA runtime APIs with error checking ([#12531](https://github.com/rapidsai/cudf/pull/12531)) [@PointKernel](https://github.com/PointKernel)
- Update TODOs from issue 10432. ([#12528](https://github.com/rapidsai/cudf/pull/12528)) [@bdice](https://github.com/bdice)
- Update rapids-cmake definitions version in GitHub Actions style checks. ([#12511](https://github.com/rapidsai/cudf/pull/12511)) [@bdice](https://github.com/bdice)
- Switch `engine=cudf` to the new `JSON` reader ([#12509](https://github.com/rapidsai/cudf/pull/12509)) [@galipremsagar](https://github.com/galipremsagar)
- Fix SUM/MEAN aggregation type support. ([#12503](https://github.com/rapidsai/cudf/pull/12503)) [@bdice](https://github.com/bdice)
- Stop using pandas._testing ([#12492](https://github.com/rapidsai/cudf/pull/12492)) [@vyasr](https://github.com/vyasr)
- Fix ROLLING_TEST gtests coded in namespace cudf::test ([#12490](https://github.com/rapidsai/cudf/pull/12490)) [@davidwendt](https://github.com/davidwendt)
- Fix erroneously skipped ORC ZSTD test ([#12486](https://github.com/rapidsai/cudf/pull/12486)) [@vuule](https://github.com/vuule)
- Rework nvtext::generate_character_ngrams to use make_strings_children ([#12480](https://github.com/rapidsai/cudf/pull/12480)) [@davidwendt](https://github.com/davidwendt)
- Raise warnings as errors in the test suite ([#12468](https://github.com/rapidsai/cudf/pull/12468)) [@vyasr](https://github.com/vyasr)
- Remove `int32` hard-coding in python ([#12467](https://github.com/rapidsai/cudf/pull/12467)) [@galipremsagar](https://github.com/galipremsagar)
- Use cudaMemcpyDefault. ([#12466](https://github.com/rapidsai/cudf/pull/12466)) [@bdice](https://github.com/bdice)
- Update workflows for nightly tests ([#12462](https://github.com/rapidsai/cudf/pull/12462)) [@ajschmidt8](https://github.com/ajschmidt8)
- Build CUDA `11.8` and Python `3.10` Packages ([#12457](https://github.com/rapidsai/cudf/pull/12457)) [@ajschmidt8](https://github.com/ajschmidt8)
- JNI build image default as cuda11.8 ([#12441](https://github.com/rapidsai/cudf/pull/12441)) [@pxLi](https://github.com/pxLi)
- Re-enable `Recently Updated` Check ([#12435](https://github.com/rapidsai/cudf/pull/12435)) [@ajschmidt8](https://github.com/ajschmidt8)
- Rework remaining cudf::strings::from_xyz functions to use make_strings_children ([#12434](https://github.com/rapidsai/cudf/pull/12434)) [@vuule](https://github.com/vuule)
- Build wheels alongside conda CI ([#12427](https://github.com/rapidsai/cudf/pull/12427)) [@sevagh](https://github.com/sevagh)
- Remove arguments for checking exception messages in Python ([#12424](https://github.com/rapidsai/cudf/pull/12424)) [@vyasr](https://github.com/vyasr)
- Clean up cuco usage ([#12421](https://github.com/rapidsai/cudf/pull/12421)) [@PointKernel](https://github.com/PointKernel)
- Fix warnings in remaining modules ([#12406](https://github.com/rapidsai/cudf/pull/12406)) [@vyasr](https://github.com/vyasr)
- Update `ops-bot.yaml` ([#12402](https://github.com/rapidsai/cudf/pull/12402)) [@ajschmidt8](https://github.com/ajschmidt8)
- Rework cudf::strings::integers_to_ipv4 to use make_strings_children utility ([#12401](https://github.com/rapidsai/cudf/pull/12401)) [@davidwendt](https://github.com/davidwendt)
- Use `numpy.empty()` instead of `bytearray` to allocate host memory for spilling ([#12399](https://github.com/rapidsai/cudf/pull/12399)) [@madsbk](https://github.com/madsbk)
- Deprecate chunksize from dask_cudf.read_csv ([#12394](https://github.com/rapidsai/cudf/pull/12394)) [@rjzamora](https://github.com/rjzamora)
- Expose the RMM pool size in JNI ([#12390](https://github.com/rapidsai/cudf/pull/12390)) [@revans2](https://github.com/revans2)
- Fix COPYING_TEST: gtests coded in namespace cudf::test ([#12387](https://github.com/rapidsai/cudf/pull/12387)) [@davidwendt](https://github.com/davidwendt)
- Rework cudf::strings::url_encode to use make_strings_children utility ([#12385](https://github.com/rapidsai/cudf/pull/12385)) [@davidwendt](https://github.com/davidwendt)
- Use make_strings_children in parse_data nested json reader ([#12382](https://github.com/rapidsai/cudf/pull/12382)) [@karthikeyann](https://github.com/karthikeyann)
- Fix warnings in test_datetime.py ([#12381](https://github.com/rapidsai/cudf/pull/12381)) [@vyasr](https://github.com/vyasr)
- Mixed Join Benchmarks ([#12375](https://github.com/rapidsai/cudf/pull/12375)) [@divyegala](https://github.com/divyegala)
- Fix warnings in dataframe.py ([#12369](https://github.com/rapidsai/cudf/pull/12369)) [@vyasr](https://github.com/vyasr)
- Update conda recipes. ([#12368](https://github.com/rapidsai/cudf/pull/12368)) [@bdice](https://github.com/bdice)
- Use gpu-latest-1 runner tag ([#12366](https://github.com/rapidsai/cudf/pull/12366)) [@bdice](https://github.com/bdice)
- Rework cudf::strings::from_booleans to use make_strings_children ([#12365](https://github.com/rapidsai/cudf/pull/12365)) [@vuule](https://github.com/vuule)
- Fix warnings in test modules up to test_dataframe.py ([#12355](https://github.com/rapidsai/cudf/pull/12355)) [@vyasr](https://github.com/vyasr)
- JSON column performance optimization - struct column nulls ([#12354](https://github.com/rapidsai/cudf/pull/12354)) [@karthikeyann](https://github.com/karthikeyann)
- Accelerate stable-segmented-sort with CUB segmented sort ([#12347](https://github.com/rapidsai/cudf/pull/12347)) [@davidwendt](https://github.com/davidwendt)
- Add size check to make_offsets_child_column utility ([#12345](https://github.com/rapidsai/cudf/pull/12345)) [@davidwendt](https://github.com/davidwendt)
- Enable max compression ratio small block optimization for ZSTD ([#12338](https://github.com/rapidsai/cudf/pull/12338)) [@vuule](https://github.com/vuule)
- Fix warnings in test_monotonic.py ([#12334](https://github.com/rapidsai/cudf/pull/12334)) [@vyasr](https://github.com/vyasr)
- Improve JSON column creation performance (list offsets) ([#12330](https://github.com/rapidsai/cudf/pull/12330)) [@karthikeyann](https://github.com/karthikeyann)
- Upgrade to `arrow-10.0.1` ([#12327](https://github.com/rapidsai/cudf/pull/12327)) [@galipremsagar](https://github.com/galipremsagar)
- Fix warnings in test_orc.py ([#12326](https://github.com/rapidsai/cudf/pull/12326)) [@vyasr](https://github.com/vyasr)
- Fix warnings in test_groupby.py ([#12324](https://github.com/rapidsai/cudf/pull/12324)) [@vyasr](https://github.com/vyasr)
- Fix `test_notebooks.sh` ([#12323](https://github.com/rapidsai/cudf/pull/12323)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix transform gtests coded in namespace cudf::test ([#12321](https://github.com/rapidsai/cudf/pull/12321)) [@davidwendt](https://github.com/davidwendt)
- Fix `check_style.sh` script ([#12320](https://github.com/rapidsai/cudf/pull/12320)) [@ajschmidt8](https://github.com/ajschmidt8)
- Rework cudf::strings::from_timestamps to use make_strings_children ([#12317](https://github.com/rapidsai/cudf/pull/12317)) [@davidwendt](https://github.com/davidwendt)
- Fix warnings in test_index.py ([#12313](https://github.com/rapidsai/cudf/pull/12313)) [@vyasr](https://github.com/vyasr)
- Fix warnings in test_multiindex.py ([#12310](https://github.com/rapidsai/cudf/pull/12310)) [@vyasr](https://github.com/vyasr)
- CSV, JSON reader to infer integer column with nulls as int64 instead of float64 ([#12309](https://github.com/rapidsai/cudf/pull/12309)) [@karthikeyann](https://github.com/karthikeyann)
- Fix warnings in test_indexing.py ([#12305](https://github.com/rapidsai/cudf/pull/12305)) [@vyasr](https://github.com/vyasr)
- Fix warnings in test_joining.py ([#12304](https://github.com/rapidsai/cudf/pull/12304)) [@vyasr](https://github.com/vyasr)
- Unpin `dask` and `distributed` for development ([#12302](https://github.com/rapidsai/cudf/pull/12302)) [@galipremsagar](https://github.com/galipremsagar)
- Re-enable `sccache` for Jenkins builds ([#12297](https://github.com/rapidsai/cudf/pull/12297)) [@ajschmidt8](https://github.com/ajschmidt8)
- Define needs for pr-builder workflow. ([#12296](https://github.com/rapidsai/cudf/pull/12296)) [@bdice](https://github.com/bdice)
- Forward merge 22.12 into 23.02 ([#12294](https://github.com/rapidsai/cudf/pull/12294)) [@vyasr](https://github.com/vyasr)
- Fix warnings in test_stats.py ([#12293](https://github.com/rapidsai/cudf/pull/12293)) [@vyasr](https://github.com/vyasr)
- Fix table gtests coded in namespace cudf::test ([#12292](https://github.com/rapidsai/cudf/pull/12292)) [@davidwendt](https://github.com/davidwendt)
- Change cython for regex calls to use cudf::strings::regex_program ([#12289](https://github.com/rapidsai/cudf/pull/12289)) [@davidwendt](https://github.com/davidwendt)
- Improved error reporting when reading multiple JSON files ([#12285](https://github.com/rapidsai/cudf/pull/12285)) [@vuule](https://github.com/vuule)
- Deprecate Frame.sum_of_squares ([#12284](https://github.com/rapidsai/cudf/pull/12284)) [@vyasr](https://github.com/vyasr)
- Remove deprecated code for 23.02 ([#12281](https://github.com/rapidsai/cudf/pull/12281)) [@vyasr](https://github.com/vyasr)
- Clean up handling of max_page_size_bytes in Parquet writer ([#12277](https://github.com/rapidsai/cudf/pull/12277)) [@etseidl](https://github.com/etseidl)
- Fix replace gtests coded in namespace cudf::test ([#12270](https://github.com/rapidsai/cudf/pull/12270)) [@davidwendt](https://github.com/davidwendt)
- Add pandas nullable type support in `Index.to_pandas` ([#12268](https://github.com/rapidsai/cudf/pull/12268)) [@galipremsagar](https://github.com/galipremsagar)
- Rework nvtext::detokenize to use indexalator for row indices ([#12267](https://github.com/rapidsai/cudf/pull/12267)) [@davidwendt](https://github.com/davidwendt)
- Fix reduction gtests coded in namespace cudf::test ([#12257](https://github.com/rapidsai/cudf/pull/12257)) [@davidwendt](https://github.com/davidwendt)
- Remove default parameters from cudf::detail::sort function declarations ([#12254](https://github.com/rapidsai/cudf/pull/12254)) [@davidwendt](https://github.com/davidwendt)
- Add `duplicated` support for `Series`, `DataFrame` and `Index` ([#12246](https://github.com/rapidsai/cudf/pull/12246)) [@galipremsagar](https://github.com/galipremsagar)
- Replace column/table test utilities with macros ([#12242](https://github.com/rapidsai/cudf/pull/12242)) [@PointKernel](https://github.com/PointKernel)
- Rework cudf::strings::pad and zfill to use make_strings_children ([#12238](https://github.com/rapidsai/cudf/pull/12238)) [@davidwendt](https://github.com/davidwendt)
- Fix sort gtests coded in namespace cudf::test ([#12237](https://github.com/rapidsai/cudf/pull/12237)) [@davidwendt](https://github.com/davidwendt)
- Wrapping concat and file writes in `@acquire_spill_lock()` ([#12232](https://github.com/rapidsai/cudf/pull/12232)) [@madsbk](https://github.com/madsbk)
- Rename `cudf::structs::detail::superimpose_parent_nulls` APIs ([#12230](https://github.com/rapidsai/cudf/pull/12230)) [@ttnghia](https://github.com/ttnghia)
- Cover parsing to decimal types in `read_json` tests ([#12229](https://github.com/rapidsai/cudf/pull/12229)) [@vuule](https://github.com/vuule)
- Spill Statistics ([#12223](https://github.com/rapidsai/cudf/pull/12223)) [@madsbk](https://github.com/madsbk)
- Use CUDF_JNI_ENABLE_PROFILING to conditionally enable profiling support. ([#12221](https://github.com/rapidsai/cudf/pull/12221)) [@bdice](https://github.com/bdice)
- Clean up of `test_spilling.py` ([#12220](https://github.com/rapidsai/cudf/pull/12220)) [@madsbk](https://github.com/madsbk)
- Simplify repetitive boolean logic ([#12218](https://github.com/rapidsai/cudf/pull/12218)) [@vuule](https://github.com/vuule)
- Add `Series.hasnans` and `Index.hasnans` ([#12214](https://github.com/rapidsai/cudf/pull/12214)) [@galipremsagar](https://github.com/galipremsagar)
- Add cudf::strings::udf::replace function ([#12210](https://github.com/rapidsai/cudf/pull/12210)) [@davidwendt](https://github.com/davidwendt)
- Adds in new java APIs for appending byte arrays to host columnar data ([#12208](https://github.com/rapidsai/cudf/pull/12208)) [@revans2](https://github.com/revans2)
- Remove Python dependencies from Java CI. ([#12193](https://github.com/rapidsai/cudf/pull/12193)) [@bdice](https://github.com/bdice)
- Fix null order in sort-based groupby and improve groupby tests ([#12191](https://github.com/rapidsai/cudf/pull/12191)) [@divyegala](https://github.com/divyegala)
- Move strings children functions from cudf/strings/detail/utilities.cuh to new header ([#12185](https://github.com/rapidsai/cudf/pull/12185)) [@davidwendt](https://github.com/davidwendt)
- Clean up existing JNI scalar to column code ([#12173](https://github.com/rapidsai/cudf/pull/12173)) [@revans2](https://github.com/revans2)
- Remove JIT type names, refactor id_to_type. ([#12158](https://github.com/rapidsai/cudf/pull/12158)) [@bdice](https://github.com/bdice)
- Update JNI version to 23.02.0-SNAPSHOT ([#12129](https://github.com/rapidsai/cudf/pull/12129)) [@pxLi](https://github.com/pxLi)
- Minor refactor of cpp/src/io/parquet/page_data.cu ([#12126](https://github.com/rapidsai/cudf/pull/12126)) [@etseidl](https://github.com/etseidl)
- Add codespell as a linter ([#12097](https://github.com/rapidsai/cudf/pull/12097)) [@benfred](https://github.com/benfred)
- Enable specifying exceptions in error macros ([#12078](https://github.com/rapidsai/cudf/pull/12078)) [@vyasr](https://github.com/vyasr)
- Move `_label_encoding` from Series to Column ([#12040](https://github.com/rapidsai/cudf/pull/12040)) [@shwina](https://github.com/shwina)
- Add GitHub Actions Workflows ([#12002](https://github.com/rapidsai/cudf/pull/12002)) [@ajschmidt8](https://github.com/ajschmidt8)
- Consolidate dask-cudf `groupby_agg` calls in one place ([#10835](https://github.com/rapidsai/cudf/pull/10835)) [@charlesbluca](https://github.com/charlesbluca)
# cuDF 22.12.00 (8 Dec 2022)
## π¨ Breaking Changes
- Add JNI for `substring` without 'end' parameter. ([#12113](https://github.com/rapidsai/cudf/pull/12113)) [@firestarman](https://github.com/firestarman)
- Refactor `purge_nonempty_nulls` ([#12111](https://github.com/rapidsai/cudf/pull/12111)) [@ttnghia](https://github.com/ttnghia)
- Create an `int8` column in `read_csv` when all elements are missing ([#12110](https://github.com/rapidsai/cudf/pull/12110)) [@vuule](https://github.com/vuule)
- Throw an error when libcudf is built without cuFile and `LIBCUDF_CUFILE_POLICY` is set to `"ALWAYS"` ([#12080](https://github.com/rapidsai/cudf/pull/12080)) [@vuule](https://github.com/vuule)
- Fix type promotion edge cases in numerical binops ([#12074](https://github.com/rapidsai/cudf/pull/12074)) [@wence-](https://github.com/wence-)
- Reduce/Remove reliance on `**kwargs` and `*args` in `IO` readers & writers ([#12025](https://github.com/rapidsai/cudf/pull/12025)) [@galipremsagar](https://github.com/galipremsagar)
- Rollback of `DeviceBufferLike` ([#12009](https://github.com/rapidsai/cudf/pull/12009)) [@madsbk](https://github.com/madsbk)
- Remove unused `managed_allocator` ([#12005](https://github.com/rapidsai/cudf/pull/12005)) [@vyasr](https://github.com/vyasr)
- Pass column names to `write_csv` instead of `table_metadata` pointer ([#11972](https://github.com/rapidsai/cudf/pull/11972)) [@vuule](https://github.com/vuule)
- Accept const refs instead of const unique_ptr refs in reduce and scan APIs. ([#11960](https://github.com/rapidsai/cudf/pull/11960)) [@vyasr](https://github.com/vyasr)
- Default to equal NaNs in make_merge_sets_aggregation. ([#11952](https://github.com/rapidsai/cudf/pull/11952)) [@bdice](https://github.com/bdice)
- Remove validation that requires introspection ([#11938](https://github.com/rapidsai/cudf/pull/11938)) [@vyasr](https://github.com/vyasr)
- Trim quotes for non-string values in nested json parsing ([#11898](https://github.com/rapidsai/cudf/pull/11898)) [@karthikeyann](https://github.com/karthikeyann)
- Add tests ensuring that cudf's default stream is always used ([#11875](https://github.com/rapidsai/cudf/pull/11875)) [@vyasr](https://github.com/vyasr)
- Support nested types as groupby keys in libcudf ([#11792](https://github.com/rapidsai/cudf/pull/11792)) [@PointKernel](https://github.com/PointKernel)
- Default to equal NaNs in make_collect_set_aggregation. ([#11621](https://github.com/rapidsai/cudf/pull/11621)) [@bdice](https://github.com/bdice)
- Removing int8 column option from parquet byte_array writing ([#11539](https://github.com/rapidsai/cudf/pull/11539)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- part1: Simplify BaseIndex to an abstract class ([#10389](https://github.com/rapidsai/cudf/pull/10389)) [@skirui-source](https://github.com/skirui-source)
## π Bug Fixes
- Fix include line for IO Cython modules ([#12250](https://github.com/rapidsai/cudf/pull/12250)) [@vyasr](https://github.com/vyasr)
- Make dask pinning looser ([#12231](https://github.com/rapidsai/cudf/pull/12231)) [@vyasr](https://github.com/vyasr)
- Workaround for CUB segmented-sort bug with boolean keys ([#12217](https://github.com/rapidsai/cudf/pull/12217)) [@davidwendt](https://github.com/davidwendt)
- Fix `from_dict` backend dispatch to match upstream `dask` ([#12203](https://github.com/rapidsai/cudf/pull/12203)) [@galipremsagar](https://github.com/galipremsagar)
- Merge branch-22.10 into branch-22.12 ([#12198](https://github.com/rapidsai/cudf/pull/12198)) [@davidwendt](https://github.com/davidwendt)
- Fix compression in ORC writer ([#12194](https://github.com/rapidsai/cudf/pull/12194)) [@vuule](https://github.com/vuule)
- Don't use CMake 3.25.0 as it has a show stopping FindCUDAToolkit bug ([#12188](https://github.com/rapidsai/cudf/pull/12188)) [@robertmaynard](https://github.com/robertmaynard)
- Fix data corruption when reading ORC files with empty stripes ([#12160](https://github.com/rapidsai/cudf/pull/12160)) [@vuule](https://github.com/vuule)
- Fix decimal binary operations ([#12142](https://github.com/rapidsai/cudf/pull/12142)) [@galipremsagar](https://github.com/galipremsagar)
- Ensure dlpack include is provided to cudf interop lib ([#12139](https://github.com/rapidsai/cudf/pull/12139)) [@robertmaynard](https://github.com/robertmaynard)
- Safely allocate `udf_string` pointers in `strings_udf` ([#12138](https://github.com/rapidsai/cudf/pull/12138)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix/disable jitify lto ([#12122](https://github.com/rapidsai/cudf/pull/12122)) [@robertmaynard](https://github.com/robertmaynard)
- Fix conditional_full_join benchmark ([#12121](https://github.com/rapidsai/cudf/pull/12121)) [@GregoryKimball](https://github.com/GregoryKimball)
- Fix regex working-memory-size refactor error ([#12119](https://github.com/rapidsai/cudf/pull/12119)) [@davidwendt](https://github.com/davidwendt)
- Add in negative size checks for columns ([#12118](https://github.com/rapidsai/cudf/pull/12118)) [@revans2](https://github.com/revans2)
- Add JNI for `substring` without 'end' parameter. ([#12113](https://github.com/rapidsai/cudf/pull/12113)) [@firestarman](https://github.com/firestarman)
- Fix reading of CSV files with blank second row ([#12098](https://github.com/rapidsai/cudf/pull/12098)) [@vuule](https://github.com/vuule)
- Fix an error in IO with `GzipFile` type ([#12085](https://github.com/rapidsai/cudf/pull/12085)) [@galipremsagar](https://github.com/galipremsagar)
- Workaround groupby aggregate thrust::copy_if overflow ([#12079](https://github.com/rapidsai/cudf/pull/12079)) [@davidwendt](https://github.com/davidwendt)
- Fix alignment of compressed blocks in ORC writer ([#12077](https://github.com/rapidsai/cudf/pull/12077)) [@vuule](https://github.com/vuule)
- Fix singleton-range `__setitem__` edge case ([#12075](https://github.com/rapidsai/cudf/pull/12075)) [@wence-](https://github.com/wence-)
- Fix type promotion edge cases in numerical binops ([#12074](https://github.com/rapidsai/cudf/pull/12074)) [@wence-](https://github.com/wence-)
- Force using old fmt in nvbench. ([#12067](https://github.com/rapidsai/cudf/pull/12067)) [@vyasr](https://github.com/vyasr)
- Fixes List offset bug in Nested JSON reader ([#12060](https://github.com/rapidsai/cudf/pull/12060)) [@karthikeyann](https://github.com/karthikeyann)
- Allow falling back to `shim_60.ptx` by default in `strings_udf` ([#12056](https://github.com/rapidsai/cudf/pull/12056)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Force black exclusions for pre-commit. ([#12036](https://github.com/rapidsai/cudf/pull/12036)) [@bdice](https://github.com/bdice)
- Add `memory_usage` & `items` implementation for `Struct` column & dtype ([#12033](https://github.com/rapidsai/cudf/pull/12033)) [@galipremsagar](https://github.com/galipremsagar)
- Reduce/Remove reliance on `**kwargs` and `*args` in `IO` readers & writers ([#12025](https://github.com/rapidsai/cudf/pull/12025)) [@galipremsagar](https://github.com/galipremsagar)
- Fixes bug in csv_reader_options construction in cython ([#12021](https://github.com/rapidsai/cudf/pull/12021)) [@karthikeyann](https://github.com/karthikeyann)
- Fix issues when both `usecols` and `names` options are used in `read_csv` ([#12018](https://github.com/rapidsai/cudf/pull/12018)) [@vuule](https://github.com/vuule)
- Port thrust's pinned_allocator to cudf, since Thrust 1.17 removes the type ([#12004](https://github.com/rapidsai/cudf/pull/12004)) [@robertmaynard](https://github.com/robertmaynard)
- Revert "Replace most of preprocessor usage in nvcomp adapter with `constexpr`" ([#11999](https://github.com/rapidsai/cudf/pull/11999)) [@vuule](https://github.com/vuule)
- Fix bug where `df.loc` resulting in single row could give wrong index ([#11998](https://github.com/rapidsai/cudf/pull/11998)) [@eriknw](https://github.com/eriknw)
- Switch to DISABLE_DEPRECATION_WARNINGS to match other RAPIDS projects ([#11989](https://github.com/rapidsai/cudf/pull/11989)) [@robertmaynard](https://github.com/robertmaynard)
- Fix maximum page size estimate in Parquet writer ([#11962](https://github.com/rapidsai/cudf/pull/11962)) [@vuule](https://github.com/vuule)
- Fix local offset handling in bgzip reader ([#11918](https://github.com/rapidsai/cudf/pull/11918)) [@upsj](https://github.com/upsj)
- Fix an issue reading struct-of-list types in Parquet. ([#11910](https://github.com/rapidsai/cudf/pull/11910)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix memcheck error in TypeInference.Timestamp gtest ([#11905](https://github.com/rapidsai/cudf/pull/11905)) [@davidwendt](https://github.com/davidwendt)
- Fix type casting in Series.__setitem__ ([#11904](https://github.com/rapidsai/cudf/pull/11904)) [@wence-](https://github.com/wence-)
- Fix memcheck error in get_dremel_data ([#11903](https://github.com/rapidsai/cudf/pull/11903)) [@davidwendt](https://github.com/davidwendt)
- Fixes Unsupported column type error due to empty list columns in Nested JSON reader ([#11897](https://github.com/rapidsai/cudf/pull/11897)) [@karthikeyann](https://github.com/karthikeyann)
- Fix segmented-sort to ignore indices outside the offsets ([#11888](https://github.com/rapidsai/cudf/pull/11888)) [@davidwendt](https://github.com/davidwendt)
- Fix cudf::stable_sorted_order for NaN and -NaN in FLOAT64 columns ([#11874](https://github.com/rapidsai/cudf/pull/11874)) [@davidwendt](https://github.com/davidwendt)
- Fix writing of Parquet files with many fragments ([#11869](https://github.com/rapidsai/cudf/pull/11869)) [@etseidl](https://github.com/etseidl)
- Fix RangeIndex unary operators. ([#11868](https://github.com/rapidsai/cudf/pull/11868)) [@vyasr](https://github.com/vyasr)
- JNI Avoid NPE for reading host binary data ([#11865](https://github.com/rapidsai/cudf/pull/11865)) [@revans2](https://github.com/revans2)
- Fix decimal benchmark input data generation ([#11863](https://github.com/rapidsai/cudf/pull/11863)) [@karthikeyann](https://github.com/karthikeyann)
- Fix pre-commit copyright check ([#11860](https://github.com/rapidsai/cudf/pull/11860)) [@galipremsagar](https://github.com/galipremsagar)
- Fix Parquet support for seconds and milliseconds duration types ([#11854](https://github.com/rapidsai/cudf/pull/11854)) [@vuule](https://github.com/vuule)
- Ensure better compiler cache results between cudf cal-ver branches ([#11835](https://github.com/rapidsai/cudf/pull/11835)) [@robertmaynard](https://github.com/robertmaynard)
- Fix make_column_from_scalar for all-null strings column ([#11807](https://github.com/rapidsai/cudf/pull/11807)) [@davidwendt](https://github.com/davidwendt)
- Tell jitify_preprocess where to search for libnvrtc ([#11787](https://github.com/rapidsai/cudf/pull/11787)) [@robertmaynard](https://github.com/robertmaynard)
- add V2 page header support to parquet reader ([#11778](https://github.com/rapidsai/cudf/pull/11778)) [@etseidl](https://github.com/etseidl)
- Parquet reader: bug fix for a num_rows/skip_rows corner case, w/optimization for nested preprocessing ([#11752](https://github.com/rapidsai/cudf/pull/11752)) [@nvdbaranec](https://github.com/nvdbaranec)
- Determine if Arrow has S3 support at runtime in unit test. ([#11560](https://github.com/rapidsai/cudf/pull/11560)) [@bdice](https://github.com/bdice)
## 📖 Documentation
- Use rapidsai CODE_OF_CONDUCT.md ([#12166](https://github.com/rapidsai/cudf/pull/12166)) [@bdice](https://github.com/bdice)
- Add symlinks to notebooks. ([#12128](https://github.com/rapidsai/cudf/pull/12128)) [@bdice](https://github.com/bdice)
- Add `truncate` API to python doc pages ([#12109](https://github.com/rapidsai/cudf/pull/12109)) [@galipremsagar](https://github.com/galipremsagar)
- Update Numba docs links. ([#12107](https://github.com/rapidsai/cudf/pull/12107)) [@bdice](https://github.com/bdice)
- Remove "Multi-GPU with Dask-cuDF" notebook. ([#12095](https://github.com/rapidsai/cudf/pull/12095)) [@bdice](https://github.com/bdice)
- Fix link to c++ developer guide from `CONTRIBUTING.md` ([#12084](https://github.com/rapidsai/cudf/pull/12084)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add pivot_table and crosstab to docs. ([#12014](https://github.com/rapidsai/cudf/pull/12014)) [@bdice](https://github.com/bdice)
- Fix doxygen text for cudf::dictionary::encode ([#11991](https://github.com/rapidsai/cudf/pull/11991)) [@davidwendt](https://github.com/davidwendt)
- Replace default_stream_value with get_default_stream in docs. ([#11985](https://github.com/rapidsai/cudf/pull/11985)) [@vyasr](https://github.com/vyasr)
- Add dtype docs pages and docstrings for `cudf` specific dtypes ([#11974](https://github.com/rapidsai/cudf/pull/11974)) [@galipremsagar](https://github.com/galipremsagar)
- Update Unit Testing in libcudf guidelines to code tests outside the cudf::test namespace ([#11959](https://github.com/rapidsai/cudf/pull/11959)) [@davidwendt](https://github.com/davidwendt)
- Rename libcudf++ to libcudf. ([#11953](https://github.com/rapidsai/cudf/pull/11953)) [@bdice](https://github.com/bdice)
- Fix documentation referring to removed as_gpu_matrix method. ([#11937](https://github.com/rapidsai/cudf/pull/11937)) [@bdice](https://github.com/bdice)
- Remove "experimental" warning for struct columns in ORC reader and writer ([#11880](https://github.com/rapidsai/cudf/pull/11880)) [@vuule](https://github.com/vuule)
- Initial draft of policies and guidelines for libcudf usage. ([#11853](https://github.com/rapidsai/cudf/pull/11853)) [@vyasr](https://github.com/vyasr)
- Add clear indication of non-GPU accelerated parameters in read_json docstring ([#11825](https://github.com/rapidsai/cudf/pull/11825)) [@GregoryKimball](https://github.com/GregoryKimball)
- Add developer docs for writing tests ([#11199](https://github.com/rapidsai/cudf/pull/11199)) [@vyasr](https://github.com/vyasr)
## 🚀 New Features
- Adds an EventHandler to Java MemoryBuffer to be invoked on close ([#12125](https://github.com/rapidsai/cudf/pull/12125)) [@abellina](https://github.com/abellina)
- Support `+` in `strings_udf` ([#12117](https://github.com/rapidsai/cudf/pull/12117)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Support `upper` and `lower` in `strings_udf` ([#12099](https://github.com/rapidsai/cudf/pull/12099)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add wheel builds ([#12096](https://github.com/rapidsai/cudf/pull/12096)) [@vyasr](https://github.com/vyasr)
- Allow setting malloc heap size in string udfs ([#12094](https://github.com/rapidsai/cudf/pull/12094)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Support `strip`, `lstrip`, and `rstrip` in `strings_udf` ([#12091](https://github.com/rapidsai/cudf/pull/12091)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Mark nvcomp zstd compression stable ([#12059](https://github.com/rapidsai/cudf/pull/12059)) [@jbrennan333](https://github.com/jbrennan333)
- Add debug-only onAllocated/onDeallocated to RmmEventHandler ([#12054](https://github.com/rapidsai/cudf/pull/12054)) [@abellina](https://github.com/abellina)
- Enable building against the libarrow contained in pyarrow ([#12034](https://github.com/rapidsai/cudf/pull/12034)) [@vyasr](https://github.com/vyasr)
- Add strings `like` jni and native method ([#12032](https://github.com/rapidsai/cudf/pull/12032)) [@cindyyuanjiang](https://github.com/cindyyuanjiang)
- Cleanup common parsing code in JSON, CSV reader ([#12022](https://github.com/rapidsai/cudf/pull/12022)) [@karthikeyann](https://github.com/karthikeyann)
- byte_range support for JSON Lines format ([#12017](https://github.com/rapidsai/cudf/pull/12017)) [@karthikeyann](https://github.com/karthikeyann)
- Minor cleanup of root CMakeLists.txt for better organization ([#11988](https://github.com/rapidsai/cudf/pull/11988)) [@robertmaynard](https://github.com/robertmaynard)
- Add inplace arithmetic operators to `MaskedType` ([#11987](https://github.com/rapidsai/cudf/pull/11987)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Implement JNI for chunked Parquet reader ([#11961](https://github.com/rapidsai/cudf/pull/11961)) [@ttnghia](https://github.com/ttnghia)
- Add method argument to DataFrame.quantile ([#11957](https://github.com/rapidsai/cudf/pull/11957)) [@rjzamora](https://github.com/rjzamora)
- Add gpu memory watermark apis to JNI ([#11950](https://github.com/rapidsai/cudf/pull/11950)) [@abellina](https://github.com/abellina)
- Adds retryCount to RmmEventHandler.onAllocFailure ([#11940](https://github.com/rapidsai/cudf/pull/11940)) [@abellina](https://github.com/abellina)
- Enable returning string data from UDFs used through `apply` ([#11933](https://github.com/rapidsai/cudf/pull/11933)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Switch over to rapids-cmake patches for thrust ([#11921](https://github.com/rapidsai/cudf/pull/11921)) [@robertmaynard](https://github.com/robertmaynard)
- Add strings udf C++ classes and functions for phase II ([#11912](https://github.com/rapidsai/cudf/pull/11912)) [@davidwendt](https://github.com/davidwendt)
- Trim quotes for non-string values in nested json parsing ([#11898](https://github.com/rapidsai/cudf/pull/11898)) [@karthikeyann](https://github.com/karthikeyann)
- Enable CEC for `strings_udf` ([#11884](https://github.com/rapidsai/cudf/pull/11884)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- ArrowIPCTableWriter writes an empty batch in the case of an empty table. ([#11883](https://github.com/rapidsai/cudf/pull/11883)) [@firestarman](https://github.com/firestarman)
- Implement chunked Parquet reader ([#11867](https://github.com/rapidsai/cudf/pull/11867)) [@ttnghia](https://github.com/ttnghia)
- Add `read_orc_metadata` to libcudf ([#11815](https://github.com/rapidsai/cudf/pull/11815)) [@vuule](https://github.com/vuule)
- Support nested types as groupby keys in libcudf ([#11792](https://github.com/rapidsai/cudf/pull/11792)) [@PointKernel](https://github.com/PointKernel)
- Adding feature Truncate to DataFrame and Series ([#11435](https://github.com/rapidsai/cudf/pull/11435)) [@VamsiTallam95](https://github.com/VamsiTallam95)
## 🛠️ Improvements
- Reduce number of tests marked `spilling` ([#12197](https://github.com/rapidsai/cudf/pull/12197)) [@madsbk](https://github.com/madsbk)
- Pin `dask` and `distributed` for release ([#12165](https://github.com/rapidsai/cudf/pull/12165)) [@galipremsagar](https://github.com/galipremsagar)
- Don't rely on GNU find in headers_test.sh ([#12164](https://github.com/rapidsai/cudf/pull/12164)) [@wence-](https://github.com/wence-)
- Update cp.clip call ([#12148](https://github.com/rapidsai/cudf/pull/12148)) [@quasiben](https://github.com/quasiben)
- Enable automatic column projection in groupby().agg ([#12124](https://github.com/rapidsai/cudf/pull/12124)) [@rjzamora](https://github.com/rjzamora)
- Refactor `purge_nonempty_nulls` ([#12111](https://github.com/rapidsai/cudf/pull/12111)) [@ttnghia](https://github.com/ttnghia)
- Create an `int8` column in `read_csv` when all elements are missing ([#12110](https://github.com/rapidsai/cudf/pull/12110)) [@vuule](https://github.com/vuule)
- Spilling to host memory ([#12106](https://github.com/rapidsai/cudf/pull/12106)) [@madsbk](https://github.com/madsbk)
- First pass of `pd.read_orc` changes in tests ([#12103](https://github.com/rapidsai/cudf/pull/12103)) [@galipremsagar](https://github.com/galipremsagar)
- Expose engine argument in dask_cudf.read_json ([#12101](https://github.com/rapidsai/cudf/pull/12101)) [@rjzamora](https://github.com/rjzamora)
- Remove CUDA 10 compatibility code. ([#12088](https://github.com/rapidsai/cudf/pull/12088)) [@bdice](https://github.com/bdice)
- Move and update `dask` nightly install in CI ([#12082](https://github.com/rapidsai/cudf/pull/12082)) [@galipremsagar](https://github.com/galipremsagar)
- Throw an error when libcudf is built without cuFile and `LIBCUDF_CUFILE_POLICY` is set to `"ALWAYS"` ([#12080](https://github.com/rapidsai/cudf/pull/12080)) [@vuule](https://github.com/vuule)
- Remove macros that inspect the contents of exceptions ([#12076](https://github.com/rapidsai/cudf/pull/12076)) [@vyasr](https://github.com/vyasr)
- Fix ingest_raw_data performance issue in Nested JSON reader due to RVO ([#12070](https://github.com/rapidsai/cudf/pull/12070)) [@karthikeyann](https://github.com/karthikeyann)
- Remove overflow error during decimal binops ([#12063](https://github.com/rapidsai/cudf/pull/12063)) [@galipremsagar](https://github.com/galipremsagar)
- Change cudf::detail::tdigest to cudf::tdigest::detail ([#12050](https://github.com/rapidsai/cudf/pull/12050)) [@davidwendt](https://github.com/davidwendt)
- Fix quantile gtests coded in namespace cudf::test ([#12049](https://github.com/rapidsai/cudf/pull/12049)) [@davidwendt](https://github.com/davidwendt)
- Add support for `DataFrame.from_dict`/`to_dict` and `Series.to_dict` ([#12048](https://github.com/rapidsai/cudf/pull/12048)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor Parquet reader ([#12046](https://github.com/rapidsai/cudf/pull/12046)) [@ttnghia](https://github.com/ttnghia)
- Forward merge 22.10 into 22.12 ([#12045](https://github.com/rapidsai/cudf/pull/12045)) [@vyasr](https://github.com/vyasr)
- Standardize newlines at ends of files. ([#12042](https://github.com/rapidsai/cudf/pull/12042)) [@bdice](https://github.com/bdice)
- Trim trailing whitespace from all files. ([#12041](https://github.com/rapidsai/cudf/pull/12041)) [@bdice](https://github.com/bdice)
- Use nosync policy in gather and scatter implementations. ([#12038](https://github.com/rapidsai/cudf/pull/12038)) [@bdice](https://github.com/bdice)
- Remove smart quotes from all docstrings. ([#12035](https://github.com/rapidsai/cudf/pull/12035)) [@bdice](https://github.com/bdice)
- Update cuda-python dependency to 11.7.1 ([#12030](https://github.com/rapidsai/cudf/pull/12030)) [@galipremsagar](https://github.com/galipremsagar)
- Add cython-lint to pre-commit checks. ([#12020](https://github.com/rapidsai/cudf/pull/12020)) [@bdice](https://github.com/bdice)
- Use pragma once ([#12019](https://github.com/rapidsai/cudf/pull/12019)) [@bdice](https://github.com/bdice)
- New GHA to add issues/prs to project board ([#12016](https://github.com/rapidsai/cudf/pull/12016)) [@jarmak-nv](https://github.com/jarmak-nv)
- Add DataFrame.pivot_table. ([#12015](https://github.com/rapidsai/cudf/pull/12015)) [@bdice](https://github.com/bdice)
- Rollback of `DeviceBufferLike` ([#12009](https://github.com/rapidsai/cudf/pull/12009)) [@madsbk](https://github.com/madsbk)
- Remove default parameters for nvtext::detail functions ([#12007](https://github.com/rapidsai/cudf/pull/12007)) [@davidwendt](https://github.com/davidwendt)
- Remove default parameters for cudf::dictionary::detail functions ([#12006](https://github.com/rapidsai/cudf/pull/12006)) [@davidwendt](https://github.com/davidwendt)
- Remove unused `managed_allocator` ([#12005](https://github.com/rapidsai/cudf/pull/12005)) [@vyasr](https://github.com/vyasr)
- Remove default parameters for cudf::strings::detail functions ([#12003](https://github.com/rapidsai/cudf/pull/12003)) [@davidwendt](https://github.com/davidwendt)
- Remove unnecessary code from dask-cudf _Frame ([#12001](https://github.com/rapidsai/cudf/pull/12001)) [@rjzamora](https://github.com/rjzamora)
- Ignore python docs build artifacts ([#12000](https://github.com/rapidsai/cudf/pull/12000)) [@galipremsagar](https://github.com/galipremsagar)
- Use rapids-cmake for google benchmark. ([#11997](https://github.com/rapidsai/cudf/pull/11997)) [@vyasr](https://github.com/vyasr)
- Leverage rapids_cython for more automated RPATH handling ([#11996](https://github.com/rapidsai/cudf/pull/11996)) [@vyasr](https://github.com/vyasr)
- Remove stale labeler ([#11995](https://github.com/rapidsai/cudf/pull/11995)) [@raydouglass](https://github.com/raydouglass)
- Move protobuf compilation to CMake ([#11986](https://github.com/rapidsai/cudf/pull/11986)) [@vyasr](https://github.com/vyasr)
- Replace most of preprocessor usage in nvcomp adapter with `constexpr` ([#11980](https://github.com/rapidsai/cudf/pull/11980)) [@vuule](https://github.com/vuule)
- Add missing noexcepts to column_in_metadata methods ([#11973](https://github.com/rapidsai/cudf/pull/11973)) [@vyasr](https://github.com/vyasr)
- Pass column names to `write_csv` instead of `table_metadata` pointer ([#11972](https://github.com/rapidsai/cudf/pull/11972)) [@vuule](https://github.com/vuule)
- Accelerate libcudf segmented sort with CUB segmented sort ([#11969](https://github.com/rapidsai/cudf/pull/11969)) [@davidwendt](https://github.com/davidwendt)
- Feature/remove default streams ([#11967](https://github.com/rapidsai/cudf/pull/11967)) [@vyasr](https://github.com/vyasr)
- Add pool memory resource to libcudf basic example ([#11966](https://github.com/rapidsai/cudf/pull/11966)) [@davidwendt](https://github.com/davidwendt)
- Fix some libcudf calls to cudf::detail::gather ([#11963](https://github.com/rapidsai/cudf/pull/11963)) [@davidwendt](https://github.com/davidwendt)
- Accept const refs instead of const unique_ptr refs in reduce and scan APIs. ([#11960](https://github.com/rapidsai/cudf/pull/11960)) [@vyasr](https://github.com/vyasr)
- Add deprecation warning for set_allocator. ([#11958](https://github.com/rapidsai/cudf/pull/11958)) [@vyasr](https://github.com/vyasr)
- Fix lists and structs gtests coded in namespace cudf::test ([#11956](https://github.com/rapidsai/cudf/pull/11956)) [@davidwendt](https://github.com/davidwendt)
- Add full page indexes to Parquet writer benchmarks ([#11955](https://github.com/rapidsai/cudf/pull/11955)) [@etseidl](https://github.com/etseidl)
- Use gather-based strings factory in cudf::strings::strip ([#11954](https://github.com/rapidsai/cudf/pull/11954)) [@davidwendt](https://github.com/davidwendt)
- Default to equal NaNs in make_merge_sets_aggregation. ([#11952](https://github.com/rapidsai/cudf/pull/11952)) [@bdice](https://github.com/bdice)
- Add `strip_delimiters` option to `read_text` ([#11946](https://github.com/rapidsai/cudf/pull/11946)) [@upsj](https://github.com/upsj)
- Refactor multibyte_split `output_builder` ([#11945](https://github.com/rapidsai/cudf/pull/11945)) [@upsj](https://github.com/upsj)
- Remove validation that requires introspection ([#11938](https://github.com/rapidsai/cudf/pull/11938)) [@vyasr](https://github.com/vyasr)
- Add `.str.find_multiple` API ([#11928](https://github.com/rapidsai/cudf/pull/11928)) [@galipremsagar](https://github.com/galipremsagar)
- Add regex_program class for use with all regex APIs ([#11927](https://github.com/rapidsai/cudf/pull/11927)) [@davidwendt](https://github.com/davidwendt)
- Enable backend dispatching for Dask-DataFrame creation ([#11920](https://github.com/rapidsai/cudf/pull/11920)) [@rjzamora](https://github.com/rjzamora)
- Performance improvement in JSON Tree traversal ([#11919](https://github.com/rapidsai/cudf/pull/11919)) [@karthikeyann](https://github.com/karthikeyann)
- Fix some gtests incorrectly coded in namespace cudf::test (part I) ([#11917](https://github.com/rapidsai/cudf/pull/11917)) [@davidwendt](https://github.com/davidwendt)
- Refactor pad/zfill functions for reuse with strings udf ([#11914](https://github.com/rapidsai/cudf/pull/11914)) [@davidwendt](https://github.com/davidwendt)
- Add `nanosecond` & `microsecond` to `DatetimeProperties` ([#11911](https://github.com/rapidsai/cudf/pull/11911)) [@galipremsagar](https://github.com/galipremsagar)
- Pin mimesis version in setup.py. ([#11906](https://github.com/rapidsai/cudf/pull/11906)) [@bdice](https://github.com/bdice)
- Error on `ListColumn` or any new unsupported column in `cudf.Index` ([#11902](https://github.com/rapidsai/cudf/pull/11902)) [@galipremsagar](https://github.com/galipremsagar)
- Add thrust output iterator fix (1805) to thrust.patch ([#11900](https://github.com/rapidsai/cudf/pull/11900)) [@davidwendt](https://github.com/davidwendt)
- Relax `codecov` threshold diff ([#11899](https://github.com/rapidsai/cudf/pull/11899)) [@galipremsagar](https://github.com/galipremsagar)
- Use public APIs in STREAM_COMPACTION_NVBENCH ([#11892](https://github.com/rapidsai/cudf/pull/11892)) [@GregoryKimball](https://github.com/GregoryKimball)
- Add coverage for string UDF tests. ([#11891](https://github.com/rapidsai/cudf/pull/11891)) [@vyasr](https://github.com/vyasr)
- Provide `data_chunk_source` wrapper for `datasource` ([#11886](https://github.com/rapidsai/cudf/pull/11886)) [@upsj](https://github.com/upsj)
- Handle `multibyte_split` byte_range out-of-bounds offsets on host ([#11885](https://github.com/rapidsai/cudf/pull/11885)) [@upsj](https://github.com/upsj)
- Add tests ensuring that cudf's default stream is always used ([#11875](https://github.com/rapidsai/cudf/pull/11875)) [@vyasr](https://github.com/vyasr)
- Change expect_strings_empty into expect_column_empty libcudf test utility ([#11873](https://github.com/rapidsai/cudf/pull/11873)) [@davidwendt](https://github.com/davidwendt)
- Add ngroup ([#11871](https://github.com/rapidsai/cudf/pull/11871)) [@shwina](https://github.com/shwina)
- Reduce memory usage in nested JSON parser - tree generation ([#11864](https://github.com/rapidsai/cudf/pull/11864)) [@karthikeyann](https://github.com/karthikeyann)
- Unpin `dask` and `distributed` for development ([#11859](https://github.com/rapidsai/cudf/pull/11859)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unused includes for table/row_operators ([#11857](https://github.com/rapidsai/cudf/pull/11857)) [@GregoryKimball](https://github.com/GregoryKimball)
- Use conda-forge's `pyorc` ([#11855](https://github.com/rapidsai/cudf/pull/11855)) [@jakirkham](https://github.com/jakirkham)
- Add libcudf strings examples ([#11849](https://github.com/rapidsai/cudf/pull/11849)) [@davidwendt](https://github.com/davidwendt)
- Remove `cudf_io` namespace alias ([#11827](https://github.com/rapidsai/cudf/pull/11827)) [@vuule](https://github.com/vuule)
- Test/remove thrust vector usage ([#11813](https://github.com/rapidsai/cudf/pull/11813)) [@vyasr](https://github.com/vyasr)
- Add BGZIP reader to python `read_text` ([#11802](https://github.com/rapidsai/cudf/pull/11802)) [@upsj](https://github.com/upsj)
- Merge branch-22.10 into branch-22.12 ([#11801](https://github.com/rapidsai/cudf/pull/11801)) [@davidwendt](https://github.com/davidwendt)
- Fix compile warning from CUDF_FUNC_RANGE in a member function ([#11798](https://github.com/rapidsai/cudf/pull/11798)) [@davidwendt](https://github.com/davidwendt)
- Update cudf JNI version to 22.12.0-SNAPSHOT ([#11764](https://github.com/rapidsai/cudf/pull/11764)) [@pxLi](https://github.com/pxLi)
- Update flake8 to 5.0.4 and use flake8-force to check Cython. ([#11736](https://github.com/rapidsai/cudf/pull/11736)) [@bdice](https://github.com/bdice)
- Add BGZIP multibyte_split benchmark ([#11723](https://github.com/rapidsai/cudf/pull/11723)) [@upsj](https://github.com/upsj)
- Bifurcate Dependency Lists ([#11674](https://github.com/rapidsai/cudf/pull/11674)) [@bdice](https://github.com/bdice)
- Default to equal NaNs in make_collect_set_aggregation. ([#11621](https://github.com/rapidsai/cudf/pull/11621)) [@bdice](https://github.com/bdice)
- Conform "bench_isin" to match generator column names ([#11549](https://github.com/rapidsai/cudf/pull/11549)) [@GregoryKimball](https://github.com/GregoryKimball)
- Removing int8 column option from parquet byte_array writing ([#11539](https://github.com/rapidsai/cudf/pull/11539)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add checks for HLG layers in dask-cudf groupby tests ([#10853](https://github.com/rapidsai/cudf/pull/10853)) [@charlesbluca](https://github.com/charlesbluca)
- part1: Simplify BaseIndex to an abstract class ([#10389](https://github.com/rapidsai/cudf/pull/10389)) [@skirui-source](https://github.com/skirui-source)
- Make all `nvcc` warnings into errors ([#8916](https://github.com/rapidsai/cudf/pull/8916)) [@trxcllnt](https://github.com/trxcllnt)
# cuDF 22.10.00 (12 Oct 2022)
## 🚨 Breaking Changes
- Disable Zstandard decompression on nvCOMP 2.4 and Pascal GPUs ([#11856](https://github.com/rapidsai/cudf/pull/11856)) [@vuule](https://github.com/vuule)
- Disable nvCOMP DEFLATE integration ([#11811](https://github.com/rapidsai/cudf/pull/11811)) [@vuule](https://github.com/vuule)
- Fix return type of `Index.isna` & `Index.notna` ([#11769](https://github.com/rapidsai/cudf/pull/11769)) [@galipremsagar](https://github.com/galipremsagar)
- Remove `kwargs` in `read_csv` & `to_csv` ([#11762](https://github.com/rapidsai/cudf/pull/11762)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `cudf::partition*` APIs that do not return offsets for empty output table ([#11709](https://github.com/rapidsai/cudf/pull/11709)) [@ttnghia](https://github.com/ttnghia)
- Fix regex negated classes to not automatically include new-lines ([#11644](https://github.com/rapidsai/cudf/pull/11644)) [@davidwendt](https://github.com/davidwendt)
- Update zfill to match Python output ([#11634](https://github.com/rapidsai/cudf/pull/11634)) [@davidwendt](https://github.com/davidwendt)
- Upgrade `pandas` to `1.5` ([#11617](https://github.com/rapidsai/cudf/pull/11617)) [@galipremsagar](https://github.com/galipremsagar)
- Change default value of `ordered` to `False` in `CategoricalDtype` ([#11604](https://github.com/rapidsai/cudf/pull/11604)) [@galipremsagar](https://github.com/galipremsagar)
- Move cudf::strings::findall_record to cudf::strings::findall ([#11575](https://github.com/rapidsai/cudf/pull/11575)) [@davidwendt](https://github.com/davidwendt)
- Adding optional parquet reader schema ([#11524](https://github.com/rapidsai/cudf/pull/11524)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Deprecate `skiprows` and `num_rows` in `read_orc` ([#11522](https://github.com/rapidsai/cudf/pull/11522)) [@galipremsagar](https://github.com/galipremsagar)
- Remove support for skip_rows / num_rows options in the parquet reader. ([#11503](https://github.com/rapidsai/cudf/pull/11503)) [@nvdbaranec](https://github.com/nvdbaranec)
- Drop support for `skiprows` and `num_rows` in `cudf.read_parquet` ([#11480](https://github.com/rapidsai/cudf/pull/11480)) [@galipremsagar](https://github.com/galipremsagar)
- Disable Arrow S3 support by default. ([#11470](https://github.com/rapidsai/cudf/pull/11470)) [@bdice](https://github.com/bdice)
- Convert thrust::optional usages to std::optional ([#11455](https://github.com/rapidsai/cudf/pull/11455)) [@robertmaynard](https://github.com/robertmaynard)
- Remove unused is_struct trait. ([#11450](https://github.com/rapidsai/cudf/pull/11450)) [@bdice](https://github.com/bdice)
- Refactor the `Buffer` class ([#11447](https://github.com/rapidsai/cudf/pull/11447)) [@madsbk](https://github.com/madsbk)
- Return empty dataframe when reading an ORC file using empty `columns` option ([#11446](https://github.com/rapidsai/cudf/pull/11446)) [@vuule](https://github.com/vuule)
- Refactor pad_side and strip_type enums into side_type enum ([#11438](https://github.com/rapidsai/cudf/pull/11438)) [@davidwendt](https://github.com/davidwendt)
- Remove HASH_SERIAL_MURMUR3 / serial32BitMurmurHash3 ([#11383](https://github.com/rapidsai/cudf/pull/11383)) [@bdice](https://github.com/bdice)
- Use the new JSON parser when the experimental reader is selected ([#11364](https://github.com/rapidsai/cudf/pull/11364)) [@vuule](https://github.com/vuule)
- Remove deprecated Series.applymap. ([#11031](https://github.com/rapidsai/cudf/pull/11031)) [@bdice](https://github.com/bdice)
- Remove deprecated expand parameter from str.findall. ([#11030](https://github.com/rapidsai/cudf/pull/11030)) [@bdice](https://github.com/bdice)
## 🐛 Bug Fixes
- Fixes bug in temporary decompression space estimation before calling nvcomp ([#11879](https://github.com/rapidsai/cudf/pull/11879)) [@abellina](https://github.com/abellina)
- Handle `ptx` file paths during `strings_udf` import ([#11862](https://github.com/rapidsai/cudf/pull/11862)) [@galipremsagar](https://github.com/galipremsagar)
- Disable Zstandard decompression on nvCOMP 2.4 and Pascal GPUs ([#11856](https://github.com/rapidsai/cudf/pull/11856)) [@vuule](https://github.com/vuule)
- Reset `strings_udf` CEC and solve several related issues ([#11846](https://github.com/rapidsai/cudf/pull/11846)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix bug in new shuffle-based groupby implementation ([#11836](https://github.com/rapidsai/cudf/pull/11836)) [@rjzamora](https://github.com/rjzamora)
- Fix `is_valid` checks in `Scalar._binaryop` ([#11818](https://github.com/rapidsai/cudf/pull/11818)) [@wence-](https://github.com/wence-)
- Fix operator `NotImplemented` issue with `numpy` ([#11816](https://github.com/rapidsai/cudf/pull/11816)) [@galipremsagar](https://github.com/galipremsagar)
- Disable nvCOMP DEFLATE integration ([#11811](https://github.com/rapidsai/cudf/pull/11811)) [@vuule](https://github.com/vuule)
- Build `strings_udf` package with other python packages in nightlies ([#11808](https://github.com/rapidsai/cudf/pull/11808)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Revert problematic shuffle=explicit-comms changes ([#11803](https://github.com/rapidsai/cudf/pull/11803)) [@rjzamora](https://github.com/rjzamora)
- Fix regex out-of-bounds write in strided rows logic ([#11797](https://github.com/rapidsai/cudf/pull/11797)) [@davidwendt](https://github.com/davidwendt)
- Build `cudf` locally before building `strings_udf` conda packages in CI ([#11785](https://github.com/rapidsai/cudf/pull/11785)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix an issue in cudf::row_bit_count involving structs and lists at multiple levels. ([#11779](https://github.com/rapidsai/cudf/pull/11779)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix return type of `Index.isna` & `Index.notna` ([#11769](https://github.com/rapidsai/cudf/pull/11769)) [@galipremsagar](https://github.com/galipremsagar)
- Fix issue with set-item in case of `list` and `struct` types ([#11760](https://github.com/rapidsai/cudf/pull/11760)) [@galipremsagar](https://github.com/galipremsagar)
- Ensure all libcudf APIs run on cudf's default stream ([#11759](https://github.com/rapidsai/cudf/pull/11759)) [@vyasr](https://github.com/vyasr)
- Resolve dask_cudf failures caused by upstream groupby changes ([#11755](https://github.com/rapidsai/cudf/pull/11755)) [@rjzamora](https://github.com/rjzamora)
- Fix ORC string sum statistics ([#11740](https://github.com/rapidsai/cudf/pull/11740)) [@vuule](https://github.com/vuule)
- Add `strings_udf` package for python 3.9 ([#11730](https://github.com/rapidsai/cudf/pull/11730)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Ensure that all tests launch kernels on cudf's default stream ([#11726](https://github.com/rapidsai/cudf/pull/11726)) [@vyasr](https://github.com/vyasr)
- Don't assume stream is a compile-time constant expression ([#11725](https://github.com/rapidsai/cudf/pull/11725)) [@vyasr](https://github.com/vyasr)
- Fix get_thrust.cmake format at patch command ([#11715](https://github.com/rapidsai/cudf/pull/11715)) [@davidwendt](https://github.com/davidwendt)
- Fix `cudf::partition*` APIs that do not return offsets for empty output table ([#11709](https://github.com/rapidsai/cudf/pull/11709)) [@ttnghia](https://github.com/ttnghia)
- Fix cudf::lists::sort_lists for NaN and Infinity values ([#11703](https://github.com/rapidsai/cudf/pull/11703)) [@davidwendt](https://github.com/davidwendt)
- Modify ORC reader timestamp parsing to match the apache reader behavior ([#11699](https://github.com/rapidsai/cudf/pull/11699)) [@vuule](https://github.com/vuule)
- Fix `DataFrame.from_arrow` to preserve type metadata ([#11698](https://github.com/rapidsai/cudf/pull/11698)) [@galipremsagar](https://github.com/galipremsagar)
- Fix compile error due to missing header ([#11697](https://github.com/rapidsai/cudf/pull/11697)) [@ttnghia](https://github.com/ttnghia)
- Default to Snappy compression in `to_orc` when using cuDF or Dask ([#11690](https://github.com/rapidsai/cudf/pull/11690)) [@vuule](https://github.com/vuule)
- Fix an issue related to `Multindex` when `group_keys=True` ([#11689](https://github.com/rapidsai/cudf/pull/11689)) [@galipremsagar](https://github.com/galipremsagar)
- Transfer correct dtype to exploded column ([#11687](https://github.com/rapidsai/cudf/pull/11687)) [@wence-](https://github.com/wence-)
- Ignore protobuf generated files in `mypy` checks ([#11685](https://github.com/rapidsai/cudf/pull/11685)) [@galipremsagar](https://github.com/galipremsagar)
- Maintain the index name after `.loc` ([#11677](https://github.com/rapidsai/cudf/pull/11677)) [@shwina](https://github.com/shwina)
- Fix issue with extracting nested column data & dtype preservation ([#11671](https://github.com/rapidsai/cudf/pull/11671)) [@galipremsagar](https://github.com/galipremsagar)
- Ensure that all cudf tests and benchmarks are conda env aware ([#11666](https://github.com/rapidsai/cudf/pull/11666)) [@robertmaynard](https://github.com/robertmaynard)
- Update to Thrust 1.17.2 to fix cub ODR issues ([#11665](https://github.com/rapidsai/cudf/pull/11665)) [@robertmaynard](https://github.com/robertmaynard)
- Fix multi-file remote datasource bug ([#11655](https://github.com/rapidsai/cudf/pull/11655)) [@rjzamora](https://github.com/rjzamora)
- Fix invalid regex quantifier check to not include alternation ([#11654](https://github.com/rapidsai/cudf/pull/11654)) [@davidwendt](https://github.com/davidwendt)
- Fix bug in `device_write()`: it uses an incorrect size ([#11651](https://github.com/rapidsai/cudf/pull/11651)) [@madsbk](https://github.com/madsbk)
- fixes overflows in benchmarks ([#11649](https://github.com/rapidsai/cudf/pull/11649)) [@elstehle](https://github.com/elstehle)
- Fix regex negated classes to not automatically include new-lines ([#11644](https://github.com/rapidsai/cudf/pull/11644)) [@davidwendt](https://github.com/davidwendt)
- Fix compile error in benchmark nested_json.cpp ([#11637](https://github.com/rapidsai/cudf/pull/11637)) [@davidwendt](https://github.com/davidwendt)
- Update zfill to match Python output ([#11634](https://github.com/rapidsai/cudf/pull/11634)) [@davidwendt](https://github.com/davidwendt)
- Removed converted type for INT32 and INT64 since they do not convert ([#11627](https://github.com/rapidsai/cudf/pull/11627)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix host scalars construction of nested types ([#11612](https://github.com/rapidsai/cudf/pull/11612)) [@galipremsagar](https://github.com/galipremsagar)
- Fix compile warning in nested_json_gpu.cu ([#11607](https://github.com/rapidsai/cudf/pull/11607)) [@davidwendt](https://github.com/davidwendt)
- Change default value of `ordered` to `False` in `CategoricalDtype` ([#11604](https://github.com/rapidsai/cudf/pull/11604)) [@galipremsagar](https://github.com/galipremsagar)
- Preserve order if necessary when deduping categoricals internally ([#11597](https://github.com/rapidsai/cudf/pull/11597)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add is_timestamp test for leap second (60) ([#11594](https://github.com/rapidsai/cudf/pull/11594)) [@davidwendt](https://github.com/davidwendt)
- Fix an issue with `to_arrow` when column name type is not a string ([#11590](https://github.com/rapidsai/cudf/pull/11590)) [@galipremsagar](https://github.com/galipremsagar)
- Fix exception in segmented-reduce benchmark ([#11588](https://github.com/rapidsai/cudf/pull/11588)) [@davidwendt](https://github.com/davidwendt)
- Fix encode/decode of negative timestamps in ORC reader/writer ([#11586](https://github.com/rapidsai/cudf/pull/11586)) [@vuule](https://github.com/vuule)
- Correct distribution data type in `quantiles` benchmark ([#11584](https://github.com/rapidsai/cudf/pull/11584)) [@vuule](https://github.com/vuule)
- Fix multibyte_split benchmark for host buffers ([#11583](https://github.com/rapidsai/cudf/pull/11583)) [@upsj](https://github.com/upsj)
- xfail custreamz display test for now ([#11567](https://github.com/rapidsai/cudf/pull/11567)) [@shwina](https://github.com/shwina)
- Fix JNI for TableWithMeta to use schema_info instead of column_names ([#11566](https://github.com/rapidsai/cudf/pull/11566)) [@jlowe](https://github.com/jlowe)
- Reduce code duplication for `dask` & `distributed` nightly/stable installs ([#11565](https://github.com/rapidsai/cudf/pull/11565)) [@galipremsagar](https://github.com/galipremsagar)
- Fix groupby failures in dask_cudf CI ([#11561](https://github.com/rapidsai/cudf/pull/11561)) [@rjzamora](https://github.com/rjzamora)
- Fix for pivot: error when 'values' is a multicharacter string ([#11538](https://github.com/rapidsai/cudf/pull/11538)) [@shaswat-indian](https://github.com/shaswat-indian)
- find_package(cudf) + arrow9 usable with cudf build directory ([#11535](https://github.com/rapidsai/cudf/pull/11535)) [@robertmaynard](https://github.com/robertmaynard)
- Fixing crash when writing binary nested data in parquet ([#11526](https://github.com/rapidsai/cudf/pull/11526)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix for: error when assigning a value to an empty series ([#11523](https://github.com/rapidsai/cudf/pull/11523)) [@shaswat-indian](https://github.com/shaswat-indian)
- Fix invalid results from conditional-left-anti-join in debug build ([#11517](https://github.com/rapidsai/cudf/pull/11517)) [@davidwendt](https://github.com/davidwendt)
- Fix cmake error after upgrading to Arrow 9 ([#11513](https://github.com/rapidsai/cudf/pull/11513)) [@ttnghia](https://github.com/ttnghia)
- Fix reverse binary operators acting on a host value and cudf.Scalar ([#11512](https://github.com/rapidsai/cudf/pull/11512)) [@bdice](https://github.com/bdice)
- Update parquet fuzz tests to drop support for `skiprows` & `num_rows` ([#11505](https://github.com/rapidsai/cudf/pull/11505)) [@galipremsagar](https://github.com/galipremsagar)
- Use rapids-cmake 22.10 best practice for RAPIDS.cmake location ([#11493](https://github.com/rapidsai/cudf/pull/11493)) [@robertmaynard](https://github.com/robertmaynard)
- Handle some zero-sized corner cases in dlpack interop ([#11449](https://github.com/rapidsai/cudf/pull/11449)) [@wence-](https://github.com/wence-)
- Return empty dataframe when reading an ORC file using empty `columns` option ([#11446](https://github.com/rapidsai/cudf/pull/11446)) [@vuule](https://github.com/vuule)
- libcudf c++ example updated to CPM version 0.35.3 ([#11417](https://github.com/rapidsai/cudf/pull/11417)) [@robertmaynard](https://github.com/robertmaynard)
- Fix regex quantifier check to include capture groups ([#11373](https://github.com/rapidsai/cudf/pull/11373)) [@davidwendt](https://github.com/davidwendt)
- Fix read_text when byte_range is aligned with field ([#11371](https://github.com/rapidsai/cudf/pull/11371)) [@upsj](https://github.com/upsj)
- Fix to_timestamps truncated subsecond calculation ([#11367](https://github.com/rapidsai/cudf/pull/11367)) [@davidwendt](https://github.com/davidwendt)
- column: calculate null_count before release()ing the cudf::column ([#11365](https://github.com/rapidsai/cudf/pull/11365)) [@wence-](https://github.com/wence-)
## 📖 Documentation
- Update `guide-to-udfs` notebook ([#11861](https://github.com/rapidsai/cudf/pull/11861)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Update docstring for cudf.read_text ([#11799](https://github.com/rapidsai/cudf/pull/11799)) [@GregoryKimball](https://github.com/GregoryKimball)
- Add doc section for `list` & `struct` handling ([#11770](https://github.com/rapidsai/cudf/pull/11770)) [@galipremsagar](https://github.com/galipremsagar)
- Document that minimum required CMake version is now 3.23.1 ([#11751](https://github.com/rapidsai/cudf/pull/11751)) [@robertmaynard](https://github.com/robertmaynard)
- Update libcudf documentation build command in DOCUMENTATION.md ([#11735](https://github.com/rapidsai/cudf/pull/11735)) [@davidwendt](https://github.com/davidwendt)
- Add docs for use of string data to `DataFrame.apply` and `Series.apply` and update guide to UDFs notebook ([#11733](https://github.com/rapidsai/cudf/pull/11733)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Enable more Pydocstyle rules ([#11582](https://github.com/rapidsai/cudf/pull/11582)) [@bdice](https://github.com/bdice)
- Remove unused cpp/img folder ([#11554](https://github.com/rapidsai/cudf/pull/11554)) [@davidwendt](https://github.com/davidwendt)
- Publish C++ developer docs ([#11475](https://github.com/rapidsai/cudf/pull/11475)) [@vyasr](https://github.com/vyasr)
- Fix a misalignment in `cudf.get_dummies` docstring ([#11443](https://github.com/rapidsai/cudf/pull/11443)) [@galipremsagar](https://github.com/galipremsagar)
- Update contributing doc to include links to the developer guides ([#11390](https://github.com/rapidsai/cudf/pull/11390)) [@davidwendt](https://github.com/davidwendt)
- Fix table_view_base doxygen format ([#11340](https://github.com/rapidsai/cudf/pull/11340)) [@davidwendt](https://github.com/davidwendt)
- Create main developer guide for Python ([#11235](https://github.com/rapidsai/cudf/pull/11235)) [@vyasr](https://github.com/vyasr)
- Add developer documentation for benchmarking ([#11122](https://github.com/rapidsai/cudf/pull/11122)) [@vyasr](https://github.com/vyasr)
- cuDF error handling document ([#7917](https://github.com/rapidsai/cudf/pull/7917)) [@isVoid](https://github.com/isVoid)
## 🚀 New Features
- Add hasNull statistic reading ability to ORC ([#11747](https://github.com/rapidsai/cudf/pull/11747)) [@devavret](https://github.com/devavret)
- Add `istitle` to string UDFs ([#11738](https://github.com/rapidsai/cudf/pull/11738)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- JSON Column creation in GPU ([#11714](https://github.com/rapidsai/cudf/pull/11714)) [@karthikeyann](https://github.com/karthikeyann)
- Adds option to take explicit nested schema for nested JSON reader ([#11682](https://github.com/rapidsai/cudf/pull/11682)) [@elstehle](https://github.com/elstehle)
- Add BGZIP `data_chunk_reader` ([#11652](https://github.com/rapidsai/cudf/pull/11652)) [@upsj](https://github.com/upsj)
- Support DECIMAL order-by for RANGE window functions ([#11645](https://github.com/rapidsai/cudf/pull/11645)) [@mythrocks](https://github.com/mythrocks)
- changing version of cmake to 3.23.3 ([#11619](https://github.com/rapidsai/cudf/pull/11619)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Generate unique keys table in java JNI `contiguousSplitGroups` ([#11614](https://github.com/rapidsai/cudf/pull/11614)) [@res-life](https://github.com/res-life)
- Generic type casting to support the new nested JSON reader ([#11613](https://github.com/rapidsai/cudf/pull/11613)) [@elstehle](https://github.com/elstehle)
- JSON tree traversal ([#11610](https://github.com/rapidsai/cudf/pull/11610)) [@karthikeyann](https://github.com/karthikeyann)
- Add casting operators to masked UDFs ([#11578](https://github.com/rapidsai/cudf/pull/11578)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Adds type inference and type conversion for leaf-columns to the nested JSON parser ([#11574](https://github.com/rapidsai/cudf/pull/11574)) [@elstehle](https://github.com/elstehle)
- Add strings 'like' function ([#11558](https://github.com/rapidsai/cudf/pull/11558)) [@davidwendt](https://github.com/davidwendt)
- Handle hyphen as literal for regex cclass when incomplete range ([#11557](https://github.com/rapidsai/cudf/pull/11557)) [@davidwendt](https://github.com/davidwendt)
- Enable ZSTD compression in ORC and Parquet writers ([#11551](https://github.com/rapidsai/cudf/pull/11551)) [@vuule](https://github.com/vuule)
- Adds support for json lines format to the nested JSON reader ([#11534](https://github.com/rapidsai/cudf/pull/11534)) [@elstehle](https://github.com/elstehle)
- Adding optional parquet reader schema ([#11524](https://github.com/rapidsai/cudf/pull/11524)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Adds GPU implementation of JSON-token-stream to JSON-tree ([#11518](https://github.com/rapidsai/cudf/pull/11518)) [@karthikeyann](https://github.com/karthikeyann)
- Add `gdb` pretty-printers for simple types ([#11499](https://github.com/rapidsai/cudf/pull/11499)) [@upsj](https://github.com/upsj)
- Add `create_random_column` function to the data generator ([#11490](https://github.com/rapidsai/cudf/pull/11490)) [@vuule](https://github.com/vuule)
- Add fluent API builder to `data_profile` ([#11479](https://github.com/rapidsai/cudf/pull/11479)) [@vuule](https://github.com/vuule)
- Adds Nested Json benchmark ([#11466](https://github.com/rapidsai/cudf/pull/11466)) [@karthikeyann](https://github.com/karthikeyann)
- Convert thrust::optional usages to std::optional ([#11455](https://github.com/rapidsai/cudf/pull/11455)) [@robertmaynard](https://github.com/robertmaynard)
- Python API for the future experimental JSON reader ([#11426](https://github.com/rapidsai/cudf/pull/11426)) [@vuule](https://github.com/vuule)
- Return schema info from JSON reader ([#11419](https://github.com/rapidsai/cudf/pull/11419)) [@vuule](https://github.com/vuule)
- Add regex ASCII flag support for matching builtin character classes ([#11404](https://github.com/rapidsai/cudf/pull/11404)) [@davidwendt](https://github.com/davidwendt)
- Truncate parquet column indexes ([#11403](https://github.com/rapidsai/cudf/pull/11403)) [@etseidl](https://github.com/etseidl)
- Adds the end-to-end JSON parser implementation ([#11388](https://github.com/rapidsai/cudf/pull/11388)) [@elstehle](https://github.com/elstehle)
- Use the new JSON parser when the experimental reader is selected ([#11364](https://github.com/rapidsai/cudf/pull/11364)) [@vuule](https://github.com/vuule)
- Add placeholder for the experimental JSON reader ([#11334](https://github.com/rapidsai/cudf/pull/11334)) [@vuule](https://github.com/vuule)
- Add read-only functions on string dtypes to `DataFrame.apply` and `Series.apply` ([#11319](https://github.com/rapidsai/cudf/pull/11319)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Added 'crosstab' and 'pivot_table' features ([#11314](https://github.com/rapidsai/cudf/pull/11314)) [@shaswat-indian](https://github.com/shaswat-indian)
- Quickly error out when trying to build with unsupported nvcc versions ([#11297](https://github.com/rapidsai/cudf/pull/11297)) [@robertmaynard](https://github.com/robertmaynard)
- Adds JSON tokenizer ([#11264](https://github.com/rapidsai/cudf/pull/11264)) [@elstehle](https://github.com/elstehle)
- List lexicographic comparator ([#11129](https://github.com/rapidsai/cudf/pull/11129)) [@devavret](https://github.com/devavret)
- Add generic type inference for cuIO ([#11121](https://github.com/rapidsai/cudf/pull/11121)) [@PointKernel](https://github.com/PointKernel)
- Fully support nested types in `cudf::contains` ([#10656](https://github.com/rapidsai/cudf/pull/10656)) [@ttnghia](https://github.com/ttnghia)
- Support nested types in `lists::contains` ([#10548](https://github.com/rapidsai/cudf/pull/10548)) [@ttnghia](https://github.com/ttnghia)
## 🛠️ Improvements
- Pin `dask` and `distributed` for release ([#11822](https://github.com/rapidsai/cudf/pull/11822)) [@galipremsagar](https://github.com/galipremsagar)
- Add examples for Nested JSON reader ([#11814](https://github.com/rapidsai/cudf/pull/11814)) [@GregoryKimball](https://github.com/GregoryKimball)
- Support shuffle-based groupby aggregations in dask_cudf ([#11800](https://github.com/rapidsai/cudf/pull/11800)) [@rjzamora](https://github.com/rjzamora)
- Update strings udf version updater script ([#11772](https://github.com/rapidsai/cudf/pull/11772)) [@galipremsagar](https://github.com/galipremsagar)
- Remove `kwargs` in `read_csv` & `to_csv` ([#11762](https://github.com/rapidsai/cudf/pull/11762)) [@galipremsagar](https://github.com/galipremsagar)
- Pass `dtype` param to avoid `pd.Series` warnings ([#11761](https://github.com/rapidsai/cudf/pull/11761)) [@galipremsagar](https://github.com/galipremsagar)
- Enable `schema_element` & `keep_quotes` support in json reader ([#11746](https://github.com/rapidsai/cudf/pull/11746)) [@galipremsagar](https://github.com/galipremsagar)
- Add ability to construct `ListColumn` when size is `None` ([#11745](https://github.com/rapidsai/cudf/pull/11745)) [@galipremsagar](https://github.com/galipremsagar)
- Reduces memory requirements in JSON parser and adds bytes/s and peak memory usage to benchmarks ([#11732](https://github.com/rapidsai/cudf/pull/11732)) [@elstehle](https://github.com/elstehle)
- Add missing copyright headers. ([#11712](https://github.com/rapidsai/cudf/pull/11712)) [@bdice](https://github.com/bdice)
- Fix copyright check issues in pre-commit ([#11711](https://github.com/rapidsai/cudf/pull/11711)) [@bdice](https://github.com/bdice)
- Include decimal in supported types for range window order-by columns ([#11710](https://github.com/rapidsai/cudf/pull/11710)) [@mythrocks](https://github.com/mythrocks)
- Disable very large column gtest for contiguous-split ([#11706](https://github.com/rapidsai/cudf/pull/11706)) [@davidwendt](https://github.com/davidwendt)
- Drop split_out=None test from groupby.agg ([#11704](https://github.com/rapidsai/cudf/pull/11704)) [@wence-](https://github.com/wence-)
- Use CubinLinker for CUDA Minor Version Compatibility ([#11701](https://github.com/rapidsai/cudf/pull/11701)) [@gmarkall](https://github.com/gmarkall)
- Add regex capture-group parameter to auto convert to non-capture groups ([#11695](https://github.com/rapidsai/cudf/pull/11695)) [@davidwendt](https://github.com/davidwendt)
- Add a `__dataframe__` method to the protocol dataframe object ([#11692](https://github.com/rapidsai/cudf/pull/11692)) [@rgommers](https://github.com/rgommers)
- Special-case multibyte_split for single-byte delimiter ([#11681](https://github.com/rapidsai/cudf/pull/11681)) [@upsj](https://github.com/upsj)
- Remove isort exclusions ([#11680](https://github.com/rapidsai/cudf/pull/11680)) [@bdice](https://github.com/bdice)
- Refactor CSV reader benchmarks with nvbench ([#11678](https://github.com/rapidsai/cudf/pull/11678)) [@PointKernel](https://github.com/PointKernel)
- Check conda recipe headers with pre-commit ([#11669](https://github.com/rapidsai/cudf/pull/11669)) [@bdice](https://github.com/bdice)
- Remove redundant style check for clang-format. ([#11668](https://github.com/rapidsai/cudf/pull/11668)) [@bdice](https://github.com/bdice)
- Add support for `group_keys` in `groupby` ([#11659](https://github.com/rapidsai/cudf/pull/11659)) [@galipremsagar](https://github.com/galipremsagar)
- Fix pandoc pinning. ([#11658](https://github.com/rapidsai/cudf/pull/11658)) [@bdice](https://github.com/bdice)
- Revert removal of skip_rows / num_rows options from the Parquet reader. ([#11657](https://github.com/rapidsai/cudf/pull/11657)) [@nvdbaranec](https://github.com/nvdbaranec)
- Update git metadata ([#11647](https://github.com/rapidsai/cudf/pull/11647)) [@bdice](https://github.com/bdice)
- Call set_null_count on a returning column if null-count is known ([#11646](https://github.com/rapidsai/cudf/pull/11646)) [@davidwendt](https://github.com/davidwendt)
- Fix some libcudf detail calls not passing the stream variable ([#11642](https://github.com/rapidsai/cudf/pull/11642)) [@davidwendt](https://github.com/davidwendt)
- Update to mypy 0.971 ([#11640](https://github.com/rapidsai/cudf/pull/11640)) [@wence-](https://github.com/wence-)
- Refactor strings strip functor to details header ([#11635](https://github.com/rapidsai/cudf/pull/11635)) [@davidwendt](https://github.com/davidwendt)
- Fix incorrect `nullCount` in `get_json_object` ([#11633](https://github.com/rapidsai/cudf/pull/11633)) [@trxcllnt](https://github.com/trxcllnt)
- Simplify `hostdevice_vector` ([#11631](https://github.com/rapidsai/cudf/pull/11631)) [@upsj](https://github.com/upsj)
- Refactor parquet writer benchmarks with nvbench ([#11623](https://github.com/rapidsai/cudf/pull/11623)) [@PointKernel](https://github.com/PointKernel)
- Rework contains_scalar to check nulls at runtime ([#11622](https://github.com/rapidsai/cudf/pull/11622)) [@davidwendt](https://github.com/davidwendt)
- Fix incorrect memory resource used in rolling temp columns ([#11618](https://github.com/rapidsai/cudf/pull/11618)) [@mythrocks](https://github.com/mythrocks)
- Upgrade `pandas` to `1.5` ([#11617](https://github.com/rapidsai/cudf/pull/11617)) [@galipremsagar](https://github.com/galipremsagar)
- Move type-dispatcher calls from traits.hpp to traits.cpp ([#11616](https://github.com/rapidsai/cudf/pull/11616)) [@davidwendt](https://github.com/davidwendt)
- Refactor parquet reader benchmarks with nvbench ([#11611](https://github.com/rapidsai/cudf/pull/11611)) [@PointKernel](https://github.com/PointKernel)
- Forward-merge branch-22.08 to branch-22.10 ([#11608](https://github.com/rapidsai/cudf/pull/11608)) [@bdice](https://github.com/bdice)
- Use stream in Java API. ([#11601](https://github.com/rapidsai/cudf/pull/11601)) [@bdice](https://github.com/bdice)
- Refactors of public/detail APIs, CUDF_FUNC_RANGE, stream handling. ([#11600](https://github.com/rapidsai/cudf/pull/11600)) [@bdice](https://github.com/bdice)
- Improve ORC writer benchmark with nvbench ([#11598](https://github.com/rapidsai/cudf/pull/11598)) [@PointKernel](https://github.com/PointKernel)
- Tune multibyte_split kernel ([#11587](https://github.com/rapidsai/cudf/pull/11587)) [@upsj](https://github.com/upsj)
- Move split_utils.cuh to strings/detail ([#11585](https://github.com/rapidsai/cudf/pull/11585)) [@davidwendt](https://github.com/davidwendt)
- Fix warnings due to compiler regression with `if constexpr` ([#11581](https://github.com/rapidsai/cudf/pull/11581)) [@ttnghia](https://github.com/ttnghia)
- Add full 24-bit dictionary support to Parquet writer ([#11580](https://github.com/rapidsai/cudf/pull/11580)) [@etseidl](https://github.com/etseidl)
- Expose "explicit-comms" option in shuffle-based dask_cudf functions ([#11576](https://github.com/rapidsai/cudf/pull/11576)) [@rjzamora](https://github.com/rjzamora)
- Move cudf::strings::findall_record to cudf::strings::findall ([#11575](https://github.com/rapidsai/cudf/pull/11575)) [@davidwendt](https://github.com/davidwendt)
- Refactor dask_cudf groupby to use apply_concat_apply ([#11571](https://github.com/rapidsai/cudf/pull/11571)) [@rjzamora](https://github.com/rjzamora)
- Add ability to write `list(struct)` columns as `map` type in orc writer ([#11568](https://github.com/rapidsai/cudf/pull/11568)) [@galipremsagar](https://github.com/galipremsagar)
- Add byte_range to multibyte_split benchmark + NVBench refactor ([#11562](https://github.com/rapidsai/cudf/pull/11562)) [@upsj](https://github.com/upsj)
- JNI support for writing binary columns in parquet ([#11556](https://github.com/rapidsai/cudf/pull/11556)) [@revans2](https://github.com/revans2)
- Support additional dictionary bit widths in Parquet writer ([#11547](https://github.com/rapidsai/cudf/pull/11547)) [@etseidl](https://github.com/etseidl)
- Refactor string/numeric conversion utilities ([#11545](https://github.com/rapidsai/cudf/pull/11545)) [@davidwendt](https://github.com/davidwendt)
- Removing unnecessary asserts in parquet tests ([#11544](https://github.com/rapidsai/cudf/pull/11544)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Clean up ORC reader benchmarks with NVBench ([#11543](https://github.com/rapidsai/cudf/pull/11543)) [@PointKernel](https://github.com/PointKernel)
- Reuse MurmurHash3_32 in Parquet page data. ([#11528](https://github.com/rapidsai/cudf/pull/11528)) [@bdice](https://github.com/bdice)
- Add hexadecimal value separators ([#11527](https://github.com/rapidsai/cudf/pull/11527)) [@bdice](https://github.com/bdice)
- Deprecate `skiprows` and `num_rows` in `read_orc` ([#11522](https://github.com/rapidsai/cudf/pull/11522)) [@galipremsagar](https://github.com/galipremsagar)
- Struct support for `NULL_EQUALS` binary operation ([#11520](https://github.com/rapidsai/cudf/pull/11520)) [@rwlee](https://github.com/rwlee)
- Bump hadoop-common from 3.2.3 to 3.2.4 in /java ([#11516](https://github.com/rapidsai/cudf/pull/11516)) [@dependabot[bot]](https://github.com/dependabot[bot])
- Fix Feather test warning. ([#11511](https://github.com/rapidsai/cudf/pull/11511)) [@bdice](https://github.com/bdice)
- copy_range ballot_syncs to have no execution dependency ([#11508](https://github.com/rapidsai/cudf/pull/11508)) [@robertmaynard](https://github.com/robertmaynard)
- Upgrade to `arrow-9.x` ([#11507](https://github.com/rapidsai/cudf/pull/11507)) [@galipremsagar](https://github.com/galipremsagar)
- Remove support for skip_rows / num_rows options in the parquet reader. ([#11503](https://github.com/rapidsai/cudf/pull/11503)) [@nvdbaranec](https://github.com/nvdbaranec)
- Single-pass `multibyte_split` ([#11500](https://github.com/rapidsai/cudf/pull/11500)) [@upsj](https://github.com/upsj)
- Sanitize percentile_approx() output for empty input ([#11498](https://github.com/rapidsai/cudf/pull/11498)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Unpin `dask` and `distributed` for development ([#11492](https://github.com/rapidsai/cudf/pull/11492)) [@galipremsagar](https://github.com/galipremsagar)
- Move SparkMurmurHash3_32 functor. ([#11489](https://github.com/rapidsai/cudf/pull/11489)) [@bdice](https://github.com/bdice)
- Refactor group_nunique.cu to use nullate::DYNAMIC for reduce-by-key functor ([#11482](https://github.com/rapidsai/cudf/pull/11482)) [@davidwendt](https://github.com/davidwendt)
- Drop support for `skiprows` and `num_rows` in `cudf.read_parquet` ([#11480](https://github.com/rapidsai/cudf/pull/11480)) [@galipremsagar](https://github.com/galipremsagar)
- Add reduction `distinct_count` benchmark ([#11473](https://github.com/rapidsai/cudf/pull/11473)) [@ttnghia](https://github.com/ttnghia)
- Add groupby `nunique` aggregation benchmark ([#11472](https://github.com/rapidsai/cudf/pull/11472)) [@ttnghia](https://github.com/ttnghia)
- Disable Arrow S3 support by default. ([#11470](https://github.com/rapidsai/cudf/pull/11470)) [@bdice](https://github.com/bdice)
- Add groupby `max` aggregation benchmark ([#11464](https://github.com/rapidsai/cudf/pull/11464)) [@ttnghia](https://github.com/ttnghia)
- Extract Dremel encoding code from Parquet ([#11461](https://github.com/rapidsai/cudf/pull/11461)) [@vyasr](https://github.com/vyasr)
- Add missing Thrust #includes. ([#11457](https://github.com/rapidsai/cudf/pull/11457)) [@bdice](https://github.com/bdice)
- Make CMake hooks verbose ([#11456](https://github.com/rapidsai/cudf/pull/11456)) [@vyasr](https://github.com/vyasr)
- Control Parquet page size through Python API ([#11454](https://github.com/rapidsai/cudf/pull/11454)) [@etseidl](https://github.com/etseidl)
- Add control of Parquet column index creation to python ([#11453](https://github.com/rapidsai/cudf/pull/11453)) [@etseidl](https://github.com/etseidl)
- Remove unused is_struct trait. ([#11450](https://github.com/rapidsai/cudf/pull/11450)) [@bdice](https://github.com/bdice)
- Refactor the `Buffer` class ([#11447](https://github.com/rapidsai/cudf/pull/11447)) [@madsbk](https://github.com/madsbk)
- Refactor pad_side and strip_type enums into side_type enum ([#11438](https://github.com/rapidsai/cudf/pull/11438)) [@davidwendt](https://github.com/davidwendt)
- Update to Thrust 1.17.0 ([#11437](https://github.com/rapidsai/cudf/pull/11437)) [@bdice](https://github.com/bdice)
- Add in JNI for parsing JSON data and getting the metadata back too. ([#11431](https://github.com/rapidsai/cudf/pull/11431)) [@revans2](https://github.com/revans2)
- Convert byte_array_view to use std::byte ([#11424](https://github.com/rapidsai/cudf/pull/11424)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Deprecate unflatten_nested_columns ([#11421](https://github.com/rapidsai/cudf/pull/11421)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Remove HASH_SERIAL_MURMUR3 / serial32BitMurmurHash3 ([#11383](https://github.com/rapidsai/cudf/pull/11383)) [@bdice](https://github.com/bdice)
- Add Spark list hashing Java tests ([#11379](https://github.com/rapidsai/cudf/pull/11379)) [@bdice](https://github.com/bdice)
- Move cmake to the build section. ([#11376](https://github.com/rapidsai/cudf/pull/11376)) [@vyasr](https://github.com/vyasr)
- Remove use of CUDA driver API calls from libcudf ([#11370](https://github.com/rapidsai/cudf/pull/11370)) [@shwina](https://github.com/shwina)
- Add column constructor from device_uvector&& ([#11356](https://github.com/rapidsai/cudf/pull/11356)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Remove unused custreamz thirdparty directory ([#11343](https://github.com/rapidsai/cudf/pull/11343)) [@vyasr](https://github.com/vyasr)
- Update jni version to 22.10.0-SNAPSHOT ([#11338](https://github.com/rapidsai/cudf/pull/11338)) [@pxLi](https://github.com/pxLi)
- Enable using upstream jitify2 ([#11287](https://github.com/rapidsai/cudf/pull/11287)) [@shwina](https://github.com/shwina)
- Cache cudf.Scalar ([#11246](https://github.com/rapidsai/cudf/pull/11246)) [@shwina](https://github.com/shwina)
- Remove deprecated Series.applymap. ([#11031](https://github.com/rapidsai/cudf/pull/11031)) [@bdice](https://github.com/bdice)
- Remove deprecated expand parameter from str.findall. ([#11030](https://github.com/rapidsai/cudf/pull/11030)) [@bdice](https://github.com/bdice)

# cuDF 22.08.00 (17 Aug 2022)

## 🚨 Breaking Changes

- Remove legacy join APIs ([#11274](https://github.com/rapidsai/cudf/pull/11274)) [@vyasr](https://github.com/vyasr)
- Remove `lists::drop_list_duplicates` ([#11236](https://github.com/rapidsai/cudf/pull/11236)) [@ttnghia](https://github.com/ttnghia)
- Remove Index.replace API ([#11131](https://github.com/rapidsai/cudf/pull/11131)) [@vyasr](https://github.com/vyasr)
- Remove deprecated Index methods from Frame ([#11073](https://github.com/rapidsai/cudf/pull/11073)) [@vyasr](https://github.com/vyasr)
- Remove public API of cudf.merge_sorted. ([#11032](https://github.com/rapidsai/cudf/pull/11032)) [@bdice](https://github.com/bdice)
- Drop python `3.7` in code-base ([#11029](https://github.com/rapidsai/cudf/pull/11029)) [@galipremsagar](https://github.com/galipremsagar)
- Return empty dataframe when reading a Parquet file using empty `columns` option ([#11018](https://github.com/rapidsai/cudf/pull/11018)) [@vuule](https://github.com/vuule)
- Remove Arrow CUDA IPC code ([#10995](https://github.com/rapidsai/cudf/pull/10995)) [@shwina](https://github.com/shwina)
- Buffer: make `.ptr` read-only ([#10872](https://github.com/rapidsai/cudf/pull/10872)) [@madsbk](https://github.com/madsbk)

## 🐛 Bug Fixes

- Fix `distributed` error related to `loop_in_thread` ([#11428](https://github.com/rapidsai/cudf/pull/11428)) [@galipremsagar](https://github.com/galipremsagar)
- Relax arrow pinning to just 8.x and remove cuda build dependency from cudf recipe ([#11412](https://github.com/rapidsai/cudf/pull/11412)) [@kkraus14](https://github.com/kkraus14)
- Revert "Allow CuPy 11" ([#11409](https://github.com/rapidsai/cudf/pull/11409)) [@jakirkham](https://github.com/jakirkham)
- Fix `moto` timeouts ([#11369](https://github.com/rapidsai/cudf/pull/11369)) [@galipremsagar](https://github.com/galipremsagar)
- Set `+/-infinity` as the `identity` values for floating-point numbers in device operators `min` and `max` ([#11357](https://github.com/rapidsai/cudf/pull/11357)) [@ttnghia](https://github.com/ttnghia)
- Fix memory_usage() for `ListSeries` ([#11355](https://github.com/rapidsai/cudf/pull/11355)) [@thomcom](https://github.com/thomcom)
- Fix constructing Column from column_view with expired mask ([#11354](https://github.com/rapidsai/cudf/pull/11354)) [@shwina](https://github.com/shwina)
- Handle parquet corner case: Columns with more rows than are in the row group. ([#11353](https://github.com/rapidsai/cudf/pull/11353)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix `DatetimeIndex` & `TimedeltaIndex` constructors ([#11342](https://github.com/rapidsai/cudf/pull/11342)) [@galipremsagar](https://github.com/galipremsagar)
- Fix unsigned-compare compile warning in IntPow binops ([#11339](https://github.com/rapidsai/cudf/pull/11339)) [@davidwendt](https://github.com/davidwendt)
- Fix performance issue and add a new code path to `cudf::detail::contains` ([#11330](https://github.com/rapidsai/cudf/pull/11330)) [@ttnghia](https://github.com/ttnghia)
- Pin `pytorch` to temporarily unblock from `libcupti` errors ([#11289](https://github.com/rapidsai/cudf/pull/11289)) [@galipremsagar](https://github.com/galipremsagar)
- Workaround for nvcomp zstd overwriting blocks for orc due to underestimate of sizes ([#11288](https://github.com/rapidsai/cudf/pull/11288)) [@jbrennan333](https://github.com/jbrennan333)
- Fix inconsistency when hashing two tables in `cudf::detail::contains` ([#11284](https://github.com/rapidsai/cudf/pull/11284)) [@ttnghia](https://github.com/ttnghia)
- Fix issue related to numpy array and `category` dtype ([#11282](https://github.com/rapidsai/cudf/pull/11282)) [@galipremsagar](https://github.com/galipremsagar)
- Add NotImplementedError when on is specified in DataFrame.join. ([#11275](https://github.com/rapidsai/cudf/pull/11275)) [@vyasr](https://github.com/vyasr)
- Fix invalid allocate_like() and empty_like() tests. ([#11268](https://github.com/rapidsai/cudf/pull/11268)) [@nvdbaranec](https://github.com/nvdbaranec)
- Returns DataFrame When Concatenating Along Axis 1 ([#11263](https://github.com/rapidsai/cudf/pull/11263)) [@isVoid](https://github.com/isVoid)
- Fix compile error due to missing header ([#11257](https://github.com/rapidsai/cudf/pull/11257)) [@ttnghia](https://github.com/ttnghia)
- Fix a memory aliasing/crash issue in scatter for lists. ([#11254](https://github.com/rapidsai/cudf/pull/11254)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix `tests/rolling/empty_input_test` ([#11238](https://github.com/rapidsai/cudf/pull/11238)) [@ttnghia](https://github.com/ttnghia)
- Fix const qualifier when using `host_span<bitmask_type const*>` ([#11220](https://github.com/rapidsai/cudf/pull/11220)) [@ttnghia](https://github.com/ttnghia)
- Avoid using `nvcompBatchedDeflateDecompressGetTempSizeEx` in cuIO ([#11213](https://github.com/rapidsai/cudf/pull/11213)) [@vuule](https://github.com/vuule)
- Generate benchmark data with correct run length regardless of cardinality ([#11205](https://github.com/rapidsai/cudf/pull/11205)) [@vuule](https://github.com/vuule)
- Fix cumulative count index behavior ([#11188](https://github.com/rapidsai/cudf/pull/11188)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix assertion in dask_cudf test_struct_explode ([#11170](https://github.com/rapidsai/cudf/pull/11170)) [@rjzamora](https://github.com/rjzamora)
- Provides a method for the user to remove the hook and re-register the hook in a custom shutdown hook manager ([#11161](https://github.com/rapidsai/cudf/pull/11161)) [@res-life](https://github.com/res-life)
- Fix compatibility issues with pandas 1.4.3 ([#11152](https://github.com/rapidsai/cudf/pull/11152)) [@vyasr](https://github.com/vyasr)
- Ensure cuco export set is installed in cmake build ([#11147](https://github.com/rapidsai/cudf/pull/11147)) [@jlowe](https://github.com/jlowe)
- Avoid redundant deepcopy in `cudf.from_pandas` ([#11142](https://github.com/rapidsai/cudf/pull/11142)) [@galipremsagar](https://github.com/galipremsagar)
- Fix compile error due to missing header ([#11126](https://github.com/rapidsai/cudf/pull/11126)) [@ttnghia](https://github.com/ttnghia)
- Fix `__cuda_array_interface__` failures ([#11113](https://github.com/rapidsai/cudf/pull/11113)) [@galipremsagar](https://github.com/galipremsagar)
- Support octal and hex within regex character class pattern ([#11112](https://github.com/rapidsai/cudf/pull/11112)) [@davidwendt](https://github.com/davidwendt)
- Fix split_re matching logic for word boundaries ([#11106](https://github.com/rapidsai/cudf/pull/11106)) [@davidwendt](https://github.com/davidwendt)
- Handle multiple files metadata in `read_parquet` ([#11105](https://github.com/rapidsai/cudf/pull/11105)) [@galipremsagar](https://github.com/galipremsagar)
- Fix index alignment for Series objects with repeated index ([#11103](https://github.com/rapidsai/cudf/pull/11103)) [@shwina](https://github.com/shwina)
- FindcuFile now searches in the current CUDA Toolkit location ([#11101](https://github.com/rapidsai/cudf/pull/11101)) [@robertmaynard](https://github.com/robertmaynard)
- Fix regex word boundary logic to include underline ([#11099](https://github.com/rapidsai/cudf/pull/11099)) [@davidwendt](https://github.com/davidwendt)
- Exclude CudaFatalTest when selecting all Java tests ([#11083](https://github.com/rapidsai/cudf/pull/11083)) [@jlowe](https://github.com/jlowe)
- Fix duplicate `cudatoolkit` pinning issue ([#11070](https://github.com/rapidsai/cudf/pull/11070)) [@galipremsagar](https://github.com/galipremsagar)
- Maintain the input index in the result of a groupby-transform ([#11068](https://github.com/rapidsai/cudf/pull/11068)) [@shwina](https://github.com/shwina)
- Fix bug with row count comparison for expect_columns_equivalent(). ([#11059](https://github.com/rapidsai/cudf/pull/11059)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix BPE uninitialized size value for null and empty input strings ([#11054](https://github.com/rapidsai/cudf/pull/11054)) [@davidwendt](https://github.com/davidwendt)
- Include missing header for usage of `get_current_device_resource()` ([#11047](https://github.com/rapidsai/cudf/pull/11047)) [@AtlantaPepsi](https://github.com/AtlantaPepsi)
- Fix warn_unused_result error in parquet test ([#11026](https://github.com/rapidsai/cudf/pull/11026)) [@karthikeyann](https://github.com/karthikeyann)
- Return empty dataframe when reading a Parquet file using empty `columns` option ([#11018](https://github.com/rapidsai/cudf/pull/11018)) [@vuule](https://github.com/vuule)
- Fix small error in page row count limiting ([#10991](https://github.com/rapidsai/cudf/pull/10991)) [@etseidl](https://github.com/etseidl)
- Fix a row index entry error in ORC writer issue ([#10989](https://github.com/rapidsai/cudf/pull/10989)) [@vuule](https://github.com/vuule)
- Fix grouped covariance to require both values to be convertible to double. ([#10891](https://github.com/rapidsai/cudf/pull/10891)) [@bdice](https://github.com/bdice)

## 📖 Documentation

- Fix issues with day & night modes in python docs ([#11400](https://github.com/rapidsai/cudf/pull/11400)) [@galipremsagar](https://github.com/galipremsagar)
- Update missing data handling APIs in docs ([#11345](https://github.com/rapidsai/cudf/pull/11345)) [@galipremsagar](https://github.com/galipremsagar)
- Add lists filtering APIs to doxygen group. ([#11336](https://github.com/rapidsai/cudf/pull/11336)) [@bdice](https://github.com/bdice)
- Remove unused import in README sample ([#11318](https://github.com/rapidsai/cudf/pull/11318)) [@vyasr](https://github.com/vyasr)
- Note null behavior in `where` docs ([#11276](https://github.com/rapidsai/cudf/pull/11276)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Update docstring for spans in `get_row_data_range` ([#11271](https://github.com/rapidsai/cudf/pull/11271)) [@vyasr](https://github.com/vyasr)
- Update nvCOMP integration table ([#11231](https://github.com/rapidsai/cudf/pull/11231)) [@vuule](https://github.com/vuule)
- Add dev docs for documentation writing ([#11217](https://github.com/rapidsai/cudf/pull/11217)) [@vyasr](https://github.com/vyasr)
- Documentation fix for concatenate ([#11187](https://github.com/rapidsai/cudf/pull/11187)) [@dagardner-nv](https://github.com/dagardner-nv)
- Fix unresolved links in markdown ([#11173](https://github.com/rapidsai/cudf/pull/11173)) [@karthikeyann](https://github.com/karthikeyann)
- Fix cudf version in README.md install commands ([#11164](https://github.com/rapidsai/cudf/pull/11164)) [@jvanstraten](https://github.com/jvanstraten)
- Switch `language` from `None` to `"en"` in docs build ([#11133](https://github.com/rapidsai/cudf/pull/11133)) [@galipremsagar](https://github.com/galipremsagar)
- Remove docs mentioning scalar_view since no such class exists. ([#11132](https://github.com/rapidsai/cudf/pull/11132)) [@bdice](https://github.com/bdice)
- Add docstring entry for `DataFrame.value_counts` ([#11039](https://github.com/rapidsai/cudf/pull/11039)) [@galipremsagar](https://github.com/galipremsagar)
- Add docs to rolling var, std, count. ([#11035](https://github.com/rapidsai/cudf/pull/11035)) [@bdice](https://github.com/bdice)
- Fix docs for Numba UDFs. ([#11020](https://github.com/rapidsai/cudf/pull/11020)) [@bdice](https://github.com/bdice)
- Replace column comparison utilities functions with macros ([#11007](https://github.com/rapidsai/cudf/pull/11007)) [@karthikeyann](https://github.com/karthikeyann)
- Fix Doxygen warnings in multiple headers files ([#11003](https://github.com/rapidsai/cudf/pull/11003)) [@karthikeyann](https://github.com/karthikeyann)
- Fix doxygen warnings in utilities/ headers ([#10974](https://github.com/rapidsai/cudf/pull/10974)) [@karthikeyann](https://github.com/karthikeyann)
- Fix Doxygen warnings in table header files ([#10964](https://github.com/rapidsai/cudf/pull/10964)) [@karthikeyann](https://github.com/karthikeyann)
- Fix Doxygen warnings in column header files ([#10963](https://github.com/rapidsai/cudf/pull/10963)) [@karthikeyann](https://github.com/karthikeyann)
- Fix Doxygen warnings in strings / header files ([#10937](https://github.com/rapidsai/cudf/pull/10937)) [@karthikeyann](https://github.com/karthikeyann)
- Generate Doxygen Tag File for Libcudf ([#10932](https://github.com/rapidsai/cudf/pull/10932)) [@isVoid](https://github.com/isVoid)
- Fix doxygen warnings in structs, lists headers ([#10923](https://github.com/rapidsai/cudf/pull/10923)) [@karthikeyann](https://github.com/karthikeyann)
- Fix doxygen warnings in fixed_point.hpp ([#10922](https://github.com/rapidsai/cudf/pull/10922)) [@karthikeyann](https://github.com/karthikeyann)
- Fix doxygen warnings in ast/, rolling, tdigest/, wrappers/, dictionary/ headers ([#10921](https://github.com/rapidsai/cudf/pull/10921)) [@karthikeyann](https://github.com/karthikeyann)
- fix doxygen warnings in cudf/io/types.hpp, other header files ([#10913](https://github.com/rapidsai/cudf/pull/10913)) [@karthikeyann](https://github.com/karthikeyann)
- fix doxygen warnings in cudf/io/ avro, csv, json, orc, parquet header files ([#10912](https://github.com/rapidsai/cudf/pull/10912)) [@karthikeyann](https://github.com/karthikeyann)
- Fix doxygen warnings in cudf/*.hpp ([#10896](https://github.com/rapidsai/cudf/pull/10896)) [@karthikeyann](https://github.com/karthikeyann)
- Add missing documentation in aggregation.hpp ([#10887](https://github.com/rapidsai/cudf/pull/10887)) [@karthikeyann](https://github.com/karthikeyann)
- Revise PR template. ([#10774](https://github.com/rapidsai/cudf/pull/10774)) [@bdice](https://github.com/bdice)

## 🚀 New Features

- Change cmake to allow controlling Arrow version via cmake variable ([#11429](https://github.com/rapidsai/cudf/pull/11429)) [@kkraus14](https://github.com/kkraus14)
- Adding support for list<int8> columns to be written as byte arrays in parquet ([#11328](https://github.com/rapidsai/cudf/pull/11328)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Adding byte array view structure ([#11322](https://github.com/rapidsai/cudf/pull/11322)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Adding byte_array statistics ([#11303](https://github.com/rapidsai/cudf/pull/11303)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add column indexes to Parquet writer ([#11302](https://github.com/rapidsai/cudf/pull/11302)) [@etseidl](https://github.com/etseidl)
- Provide an Option for Default Integer and Floating Bitwidth ([#11272](https://github.com/rapidsai/cudf/pull/11272)) [@isVoid](https://github.com/isVoid)
- FST benchmark ([#11243](https://github.com/rapidsai/cudf/pull/11243)) [@karthikeyann](https://github.com/karthikeyann)
- Adds the Finite-State Transducer algorithm ([#11242](https://github.com/rapidsai/cudf/pull/11242)) [@elstehle](https://github.com/elstehle)
- Refactor `collect_set` to use `cudf::distinct` and `cudf::lists::distinct` ([#11228](https://github.com/rapidsai/cudf/pull/11228)) [@ttnghia](https://github.com/ttnghia)
- Treat zstd as stable in nvcomp releases 2.3.2 and later ([#11226](https://github.com/rapidsai/cudf/pull/11226)) [@jbrennan333](https://github.com/jbrennan333)
- Add 24 bit dictionary support to Parquet writer ([#11216](https://github.com/rapidsai/cudf/pull/11216)) [@devavret](https://github.com/devavret)
- Enable positive group indices for extractAllRecord on JNI ([#11215](https://github.com/rapidsai/cudf/pull/11215)) [@anthony-chang](https://github.com/anthony-chang)
- JNI bindings for NTH_ELEMENT window aggregation ([#11201](https://github.com/rapidsai/cudf/pull/11201)) [@mythrocks](https://github.com/mythrocks)
- Add JNI bindings for extractAllRecord ([#11196](https://github.com/rapidsai/cudf/pull/11196)) [@anthony-chang](https://github.com/anthony-chang)
- Add `cudf.options` ([#11193](https://github.com/rapidsai/cudf/pull/11193)) [@isVoid](https://github.com/isVoid)
- Add thrift support for parquet column and offset indexes ([#11178](https://github.com/rapidsai/cudf/pull/11178)) [@etseidl](https://github.com/etseidl)
- Adding binary read/write as options for parquet ([#11160](https://github.com/rapidsai/cudf/pull/11160)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Support `nth_element` for window functions ([#11158](https://github.com/rapidsai/cudf/pull/11158)) [@mythrocks](https://github.com/mythrocks)
- Implement `lists::distinct` and `cudf::detail::stable_distinct` ([#11149](https://github.com/rapidsai/cudf/pull/11149)) [@ttnghia](https://github.com/ttnghia)
- Implement Groupby pct_change ([#11144](https://github.com/rapidsai/cudf/pull/11144)) [@skirui-source](https://github.com/skirui-source)
- Add JNI for set operations ([#11143](https://github.com/rapidsai/cudf/pull/11143)) [@ttnghia](https://github.com/ttnghia)
- Remove deprecated PER_THREAD_DEFAULT_STREAM ([#11134](https://github.com/rapidsai/cudf/pull/11134)) [@jbrennan333](https://github.com/jbrennan333)
- Added a Java method to check the existence of a list of keys in a map ([#11128](https://github.com/rapidsai/cudf/pull/11128)) [@razajafri](https://github.com/razajafri)
- Feature/python benchmarking ([#11125](https://github.com/rapidsai/cudf/pull/11125)) [@vyasr](https://github.com/vyasr)
- Support `nan_equality` in `cudf::distinct` ([#11118](https://github.com/rapidsai/cudf/pull/11118)) [@ttnghia](https://github.com/ttnghia)
- Added JNI for getMapValueForKeys ([#11104](https://github.com/rapidsai/cudf/pull/11104)) [@razajafri](https://github.com/razajafri)
- Refactor `semi_anti_join` ([#11100](https://github.com/rapidsai/cudf/pull/11100)) [@ttnghia](https://github.com/ttnghia)
- Replace remaining instances of rmm::cuda_stream_default with cudf::default_stream_value ([#11082](https://github.com/rapidsai/cudf/pull/11082)) [@jbrennan333](https://github.com/jbrennan333)
- Adds the Logical Stack algorithm ([#11078](https://github.com/rapidsai/cudf/pull/11078)) [@elstehle](https://github.com/elstehle)
- Add doxygen-check pre-commit hook ([#11076](https://github.com/rapidsai/cudf/pull/11076)) [@karthikeyann](https://github.com/karthikeyann)
- Use new nvCOMP API to optimize the decompression temp memory size ([#11064](https://github.com/rapidsai/cudf/pull/11064)) [@vuule](https://github.com/vuule)
- Add Doxygen CI check ([#11057](https://github.com/rapidsai/cudf/pull/11057)) [@karthikeyann](https://github.com/karthikeyann)
- Support `duplicate_keep_option` in `cudf::distinct` ([#11052](https://github.com/rapidsai/cudf/pull/11052)) [@ttnghia](https://github.com/ttnghia)
- Support set operations ([#11043](https://github.com/rapidsai/cudf/pull/11043)) [@ttnghia](https://github.com/ttnghia)
- Support for ZLIB compression in ORC writer ([#11036](https://github.com/rapidsai/cudf/pull/11036)) [@vuule](https://github.com/vuule)
- Adding feature swaplevels ([#11027](https://github.com/rapidsai/cudf/pull/11027)) [@VamsiTallam95](https://github.com/VamsiTallam95)
- Use nvCOMP for ZLIB decompression in ORC reader ([#11024](https://github.com/rapidsai/cudf/pull/11024)) [@vuule](https://github.com/vuule)
- Function for bfill, ffill #9591 ([#11022](https://github.com/rapidsai/cudf/pull/11022)) [@Sreekiran096](https://github.com/Sreekiran096)
- Generate group offsets from element labels ([#11017](https://github.com/rapidsai/cudf/pull/11017)) [@ttnghia](https://github.com/ttnghia)
- Feature axes ([#10979](https://github.com/rapidsai/cudf/pull/10979)) [@VamsiTallam95](https://github.com/VamsiTallam95)
- Generate group labels from offsets ([#10945](https://github.com/rapidsai/cudf/pull/10945)) [@ttnghia](https://github.com/ttnghia)
- Add missing cuIO benchmark coverage for duration types ([#10933](https://github.com/rapidsai/cudf/pull/10933)) [@vuule](https://github.com/vuule)
- Dask-cuDF cumulative groupby ops ([#10889](https://github.com/rapidsai/cudf/pull/10889)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Reindex Improvements ([#10815](https://github.com/rapidsai/cudf/pull/10815)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Implement value_counts for DataFrame ([#10813](https://github.com/rapidsai/cudf/pull/10813)) [@martinfalisse](https://github.com/martinfalisse)

## 🛠️ Improvements

- Pin `dask` & `distributed` for release ([#11433](https://github.com/rapidsai/cudf/pull/11433)) [@galipremsagar](https://github.com/galipremsagar)
- Use documented header template for `doxygen` ([#11430](https://github.com/rapidsai/cudf/pull/11430)) [@galipremsagar](https://github.com/galipremsagar)
- Relax arrow version in dev env ([#11418](https://github.com/rapidsai/cudf/pull/11418)) [@galipremsagar](https://github.com/galipremsagar)
- Allow CuPy 11 ([#11393](https://github.com/rapidsai/cudf/pull/11393)) [@jakirkham](https://github.com/jakirkham)
- Improve multibyte_split performance ([#11347](https://github.com/rapidsai/cudf/pull/11347)) [@cwharris](https://github.com/cwharris)
- Switch death test to use explicit trap. ([#11326](https://github.com/rapidsai/cudf/pull/11326)) [@vyasr](https://github.com/vyasr)
- Add --output-on-failure to ctest args. ([#11321](https://github.com/rapidsai/cudf/pull/11321)) [@vyasr](https://github.com/vyasr)
- Consolidate remaining DataFrame/Series APIs ([#11315](https://github.com/rapidsai/cudf/pull/11315)) [@vyasr](https://github.com/vyasr)
- Add JNI support for the join_strings API ([#11309](https://github.com/rapidsai/cudf/pull/11309)) [@revans2](https://github.com/revans2)
- Add cupy version to setup.py install_requires ([#11306](https://github.com/rapidsai/cudf/pull/11306)) [@vyasr](https://github.com/vyasr)
- removing some unused code ([#11305](https://github.com/rapidsai/cudf/pull/11305)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add test of wildcard selection ([#11300](https://github.com/rapidsai/cudf/pull/11300)) [@vyasr](https://github.com/vyasr)
- Update parquet reader to take stream parameter ([#11294](https://github.com/rapidsai/cudf/pull/11294)) [@PointKernel](https://github.com/PointKernel)
- Spark list hashing ([#11292](https://github.com/rapidsai/cudf/pull/11292)) [@bdice](https://github.com/bdice)
- Remove legacy join APIs ([#11274](https://github.com/rapidsai/cudf/pull/11274)) [@vyasr](https://github.com/vyasr)
- Fix `cudf` recipes syntax ([#11273](https://github.com/rapidsai/cudf/pull/11273)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix `cudf` recipe ([#11267](https://github.com/rapidsai/cudf/pull/11267)) [@ajschmidt8](https://github.com/ajschmidt8)
- Cleanup config files ([#11266](https://github.com/rapidsai/cudf/pull/11266)) [@vyasr](https://github.com/vyasr)
- Run mypy on all packages ([#11265](https://github.com/rapidsai/cudf/pull/11265)) [@vyasr](https://github.com/vyasr)
- Update to isort 5.10.1. ([#11262](https://github.com/rapidsai/cudf/pull/11262)) [@vyasr](https://github.com/vyasr)
- Consolidate flake8 and pydocstyle configuration ([#11260](https://github.com/rapidsai/cudf/pull/11260)) [@vyasr](https://github.com/vyasr)
- Remove redundant black config specifications. ([#11258](https://github.com/rapidsai/cudf/pull/11258)) [@vyasr](https://github.com/vyasr)
- Ensure DeprecationWarnings are not introduced via pre-commit ([#11255](https://github.com/rapidsai/cudf/pull/11255)) [@wence-](https://github.com/wence-)
- Optimization to gpu::PreprocessColumnData in parquet reader. ([#11252](https://github.com/rapidsai/cudf/pull/11252)) [@nvdbaranec](https://github.com/nvdbaranec)
- Move rolling impl details to detail/ directory. ([#11250](https://github.com/rapidsai/cudf/pull/11250)) [@mythrocks](https://github.com/mythrocks)
- Remove `lists::drop_list_duplicates` ([#11236](https://github.com/rapidsai/cudf/pull/11236)) [@ttnghia](https://github.com/ttnghia)
- Use `cudf::lists::distinct` in Python binding ([#11234](https://github.com/rapidsai/cudf/pull/11234)) [@ttnghia](https://github.com/ttnghia)
- Use `cudf::lists::distinct` in Java binding ([#11233](https://github.com/rapidsai/cudf/pull/11233)) [@ttnghia](https://github.com/ttnghia)
- Use `cudf::distinct` in Java binding ([#11232](https://github.com/rapidsai/cudf/pull/11232)) [@ttnghia](https://github.com/ttnghia)
- Pin `dask-cuda` in dev environment ([#11229](https://github.com/rapidsai/cudf/pull/11229)) [@galipremsagar](https://github.com/galipremsagar)
- Remove cruft in map_lookup ([#11221](https://github.com/rapidsai/cudf/pull/11221)) [@mythrocks](https://github.com/mythrocks)
- Deprecate `skiprows` & `num_rows` in parquet reader ([#11218](https://github.com/rapidsai/cudf/pull/11218)) [@galipremsagar](https://github.com/galipremsagar)
- Remove Frame._index ([#11210](https://github.com/rapidsai/cudf/pull/11210)) [@vyasr](https://github.com/vyasr)
- Improve performance for `cudf::contains` when searching for a scalar ([#11202](https://github.com/rapidsai/cudf/pull/11202)) [@ttnghia](https://github.com/ttnghia)
- Document why the Development component is needed for CMake. ([#11200](https://github.com/rapidsai/cudf/pull/11200)) [@vyasr](https://github.com/vyasr)
- cleanup unused code in rolling_test.hpp ([#11195](https://github.com/rapidsai/cudf/pull/11195)) [@karthikeyann](https://github.com/karthikeyann)
- Standardize join internals around DataFrame ([#11184](https://github.com/rapidsai/cudf/pull/11184)) [@vyasr](https://github.com/vyasr)
- Move character case table declarations from src to detail ([#11183](https://github.com/rapidsai/cudf/pull/11183)) [@davidwendt](https://github.com/davidwendt)
- Remove usage of Frame in StringMethods ([#11181](https://github.com/rapidsai/cudf/pull/11181)) [@vyasr](https://github.com/vyasr)
- Expose get_json_object_options to Python ([#11180](https://github.com/rapidsai/cudf/pull/11180)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Fix decimal128 stats in parquet writer ([#11179](https://github.com/rapidsai/cudf/pull/11179)) [@etseidl](https://github.com/etseidl)
- Modify CheckPageRows in parquet_test to use datasources ([#11177](https://github.com/rapidsai/cudf/pull/11177)) [@etseidl](https://github.com/etseidl)
- Pin max version of `cuda-python` to `11.7.0` ([#11174](https://github.com/rapidsai/cudf/pull/11174)) [@Ethyling](https://github.com/Ethyling)
- Refactor and optimize Frame.where ([#11168](https://github.com/rapidsai/cudf/pull/11168)) [@vyasr](https://github.com/vyasr)
- Add npos const static member to cudf::string_view ([#11166](https://github.com/rapidsai/cudf/pull/11166)) [@davidwendt](https://github.com/davidwendt)
- Move _drop_rows_by_label from Frame to IndexedFrame ([#11157](https://github.com/rapidsai/cudf/pull/11157)) [@vyasr](https://github.com/vyasr)
- Clean up _copy_type_metadata ([#11156](https://github.com/rapidsai/cudf/pull/11156)) [@vyasr](https://github.com/vyasr)
- Add `nvcc` conda package in dev environment ([#11154](https://github.com/rapidsai/cudf/pull/11154)) [@galipremsagar](https://github.com/galipremsagar)
- Struct binary comparison op functionality for spark rapids ([#11153](https://github.com/rapidsai/cudf/pull/11153)) [@rwlee](https://github.com/rwlee)
- Refactor inline conditionals. ([#11151](https://github.com/rapidsai/cudf/pull/11151)) [@bdice](https://github.com/bdice)
- Refactor Spark hashing tests ([#11145](https://github.com/rapidsai/cudf/pull/11145)) [@bdice](https://github.com/bdice)
- Add new `_from_data_like_self` factory ([#11140](https://github.com/rapidsai/cudf/pull/11140)) [@vyasr](https://github.com/vyasr)
- Update get_cucollections to use rapids-cmake ([#11139](https://github.com/rapidsai/cudf/pull/11139)) [@vyasr](https://github.com/vyasr)
- Remove unnecessary extra function for libcudacxx detection ([#11138](https://github.com/rapidsai/cudf/pull/11138)) [@vyasr](https://github.com/vyasr)
- Allow initial value for cudf::reduce and cudf::segmented_reduce. ([#11137](https://github.com/rapidsai/cudf/pull/11137)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Remove Index.replace API ([#11131](https://github.com/rapidsai/cudf/pull/11131)) [@vyasr](https://github.com/vyasr)
- Move char-type table function declarations from src to detail ([#11127](https://github.com/rapidsai/cudf/pull/11127)) [@davidwendt](https://github.com/davidwendt)
- Clean up repo root ([#11124](https://github.com/rapidsai/cudf/pull/11124)) [@bdice](https://github.com/bdice)
- Improve print formatting of strings containing newline characters. ([#11108](https://github.com/rapidsai/cudf/pull/11108)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix cudf::string_view::find() to return pos for empty string argument ([#11107](https://github.com/rapidsai/cudf/pull/11107)) [@davidwendt](https://github.com/davidwendt)
- Forward-merge branch-22.06 to branch-22.08 ([#11086](https://github.com/rapidsai/cudf/pull/11086)) [@bdice](https://github.com/bdice)
- Take iterators by value in clamp.cu. ([#11084](https://github.com/rapidsai/cudf/pull/11084)) [@bdice](https://github.com/bdice)
- Performance improvements for row to column conversions ([#11075](https://github.com/rapidsai/cudf/pull/11075)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Remove deprecated Index methods from Frame ([#11073](https://github.com/rapidsai/cudf/pull/11073)) [@vyasr](https://github.com/vyasr)
- Use per-page max compressed size estimate for compression ([#11066](https://github.com/rapidsai/cudf/pull/11066)) [@devavret](https://github.com/devavret)
- column to row refactor for performance ([#11063](https://github.com/rapidsai/cudf/pull/11063)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Include `skbuild` directory into `build.sh` `clean` operation ([#11060](https://github.com/rapidsai/cudf/pull/11060)) [@galipremsagar](https://github.com/galipremsagar)
- Unpin `dask` & `distributed` for development ([#11058](https://github.com/rapidsai/cudf/pull/11058)) [@galipremsagar](https://github.com/galipremsagar)
- Add support for `Series.between` ([#11051](https://github.com/rapidsai/cudf/pull/11051)) [@galipremsagar](https://github.com/galipremsagar)
- Fix groupby include ([#11046](https://github.com/rapidsai/cudf/pull/11046)) [@bwyogatama](https://github.com/bwyogatama)
- Regex cleanup internal reclass and reclass_device classes ([#11045](https://github.com/rapidsai/cudf/pull/11045)) [@davidwendt](https://github.com/davidwendt)
- Remove public API of cudf.merge_sorted. ([#11032](https://github.com/rapidsai/cudf/pull/11032)) [@bdice](https://github.com/bdice)
- Drop python `3.7` in code-base ([#11029](https://github.com/rapidsai/cudf/pull/11029)) [@galipremsagar](https://github.com/galipremsagar)
- Addition & integration of the integer power operator ([#11025](https://github.com/rapidsai/cudf/pull/11025)) [@AtlantaPepsi](https://github.com/AtlantaPepsi)
- Refactor `lists::contains` ([#11019](https://github.com/rapidsai/cudf/pull/11019)) [@ttnghia](https://github.com/ttnghia)
- Change build.sh to find C++ library by default and avoid shadowing CMAKE_ARGS ([#11013](https://github.com/rapidsai/cudf/pull/11013)) [@vyasr](https://github.com/vyasr)
- Clean up parquet unit test ([#11005](https://github.com/rapidsai/cudf/pull/11005)) [@PointKernel](https://github.com/PointKernel)
- Add missing #pragma once to header files ([#11004](https://github.com/rapidsai/cudf/pull/11004)) [@karthikeyann](https://github.com/karthikeyann)
- Cleanup `iterator.cuh` and add fixed point support for `scalar_optional_accessor` ([#10999](https://github.com/rapidsai/cudf/pull/10999)) [@ttnghia](https://github.com/ttnghia)
- Refactor `cudf::contains` ([#10997](https://github.com/rapidsai/cudf/pull/10997)) [@ttnghia](https://github.com/ttnghia)
- Remove Arrow CUDA IPC code ([#10995](https://github.com/rapidsai/cudf/pull/10995)) [@shwina](https://github.com/shwina)
- Change file extension for groupby benchmark ([#10985](https://github.com/rapidsai/cudf/pull/10985)) [@ttnghia](https://github.com/ttnghia)
- Sort recipe include checks. ([#10984](https://github.com/rapidsai/cudf/pull/10984)) [@bdice](https://github.com/bdice)
- Update cuCollections for thrust upgrade ([#10983](https://github.com/rapidsai/cudf/pull/10983)) [@PointKernel](https://github.com/PointKernel)
- Expose row-group size options in cudf ParquetWriter ([#10980](https://github.com/rapidsai/cudf/pull/10980)) [@rjzamora](https://github.com/rjzamora)
- Cleanup cudf::strings::detail::regex_parser class source ([#10975](https://github.com/rapidsai/cudf/pull/10975)) [@davidwendt](https://github.com/davidwendt)
- Handle missing fields as nulls in get_json_object() ([#10970](https://github.com/rapidsai/cudf/pull/10970)) [@SrikarVanavasam](https://github.com/SrikarVanavasam)
- Fix license families to match all-caps expected by conda-verify. ([#10931](https://github.com/rapidsai/cudf/pull/10931)) [@bdice](https://github.com/bdice)
- Include <optional> for GCC 11 compatibility. ([#10927](https://github.com/rapidsai/cudf/pull/10927)) [@bdice](https://github.com/bdice)
- Enable builds with scikit-build ([#10919](https://github.com/rapidsai/cudf/pull/10919)) [@vyasr](https://github.com/vyasr)
- Improve `distinct` by using `cuco::static_map::retrieve_all` ([#10916](https://github.com/rapidsai/cudf/pull/10916)) [@PointKernel](https://github.com/PointKernel)
- update cudfjni to 22.08.0-SNAPSHOT ([#10910](https://github.com/rapidsai/cudf/pull/10910)) [@pxLi](https://github.com/pxLi)
- Improve the capture of fatal cuda error ([#10884](https://github.com/rapidsai/cudf/pull/10884)) [@sperlingxx](https://github.com/sperlingxx)
- Cleanup regex compiler operators and operands source ([#10879](https://github.com/rapidsai/cudf/pull/10879)) [@davidwendt](https://github.com/davidwendt)
- Buffer: make `.ptr` read-only ([#10872](https://github.com/rapidsai/cudf/pull/10872)) [@madsbk](https://github.com/madsbk)
- Configurable NaN handling in device_row_comparators ([#10870](https://github.com/rapidsai/cudf/pull/10870)) [@rwlee](https://github.com/rwlee)
- Register `cudf.core.groupby.Grouper` objects to dask `grouper_dispatch` ([#10838](https://github.com/rapidsai/cudf/pull/10838)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Upgrade to `arrow-8` ([#10816](https://github.com/rapidsai/cudf/pull/10816)) [@galipremsagar](https://github.com/galipremsagar)
- Remove _getattr_ method in RangeIndex class ([#10538](https://github.com/rapidsai/cudf/pull/10538)) [@skirui-source](https://github.com/skirui-source)
- Adding bins to value counts ([#8247](https://github.com/rapidsai/cudf/pull/8247)) [@marlenezw](https://github.com/marlenezw)

# cuDF 22.06.00 (7 Jun 2022)

## 🚨 Breaking Changes

- Enable Zstandard decompression only when all nvcomp integrations are enabled ([#10944](https://github.com/rapidsai/cudf/pull/10944)) [@vuule](https://github.com/vuule)
- Rename `sliced_child` to `get_sliced_child`. ([#10885](https://github.com/rapidsai/cudf/pull/10885)) [@bdice](https://github.com/bdice)
- Add parameters to control page size in Parquet writer ([#10882](https://github.com/rapidsai/cudf/pull/10882)) [@etseidl](https://github.com/etseidl)
- Make cudf::test::expect_columns_equal() fail when comparing unsanitary lists. ([#10880](https://github.com/rapidsai/cudf/pull/10880)) [@nvdbaranec](https://github.com/nvdbaranec)
- Cleanup regex compiler fixed quantifiers source ([#10843](https://github.com/rapidsai/cudf/pull/10843)) [@davidwendt](https://github.com/davidwendt)
- Refactor `cudf::contains`, renaming and switching parameters role ([#10802](https://github.com/rapidsai/cudf/pull/10802)) [@ttnghia](https://github.com/ttnghia)
- Generic serialization of all column types ([#10784](https://github.com/rapidsai/cudf/pull/10784)) [@wence-](https://github.com/wence-)
- Return per-file metadata from readers ([#10782](https://github.com/rapidsai/cudf/pull/10782)) [@vuule](https://github.com/vuule)
- HostColumnVectorCore#isNull should return true for out-of-range rows ([#10779](https://github.com/rapidsai/cudf/pull/10779)) [@gerashegalov](https://github.com/gerashegalov)
- Update `groupby::hash` to use new row operators for keys ([#10770](https://github.com/rapidsai/cudf/pull/10770)) [@PointKernel](https://github.com/PointKernel)
- update mangle_dupe_cols behavior in csv reader to match pandas 1.4.0 behavior ([#10749](https://github.com/rapidsai/cudf/pull/10749)) [@karthikeyann](https://github.com/karthikeyann)
- Rename CUDA_TRY macro to CUDF_CUDA_TRY, rename CHECK_CUDA macro to CUDF_CHECK_CUDA. ([#10589](https://github.com/rapidsai/cudf/pull/10589)) [@bdice](https://github.com/bdice)
- Upgrade `cudf` to support `pandas` 1.4.x versions ([#10584](https://github.com/rapidsai/cudf/pull/10584)) [@galipremsagar](https://github.com/galipremsagar)
- Move binop methods from Frame to IndexedFrame and standardize the docstring ([#10576](https://github.com/rapidsai/cudf/pull/10576)) [@vyasr](https://github.com/vyasr)
- Add default= kwarg to .list.get() accessor method ([#10547](https://github.com/rapidsai/cudf/pull/10547)) [@shwina](https://github.com/shwina)
- Remove deprecated `decimal_cols_as_float` in the ORC reader ([#10515](https://github.com/rapidsai/cudf/pull/10515)) [@vuule](https://github.com/vuule)
- Support nvComp 2.3 if local, otherwise use nvcomp 2.2 ([#10513](https://github.com/rapidsai/cudf/pull/10513)) [@robertmaynard](https://github.com/robertmaynard)
- Fix findall_record to return empty list for no matches ([#10491](https://github.com/rapidsai/cudf/pull/10491)) [@davidwendt](https://github.com/davidwendt)
- Namespace/Docstring Fixes for Reduction ([#10471](https://github.com/rapidsai/cudf/pull/10471)) [@isVoid](https://github.com/isVoid)
- Additional refactoring of hash functions ([#10462](https://github.com/rapidsai/cudf/pull/10462)) [@bdice](https://github.com/bdice)
- Fix default value of str.split expand parameter. ([#10457](https://github.com/rapidsai/cudf/pull/10457)) [@bdice](https://github.com/bdice)
- Remove deprecated code. ([#10450](https://github.com/rapidsai/cudf/pull/10450)) [@vyasr](https://github.com/vyasr)

## 🐛 Bug Fixes

- Fix single column `MultiIndex` issue in `sort_index` ([#10957](https://github.com/rapidsai/cudf/pull/10957)) [@galipremsagar](https://github.com/galipremsagar)
- Make SerializedTableHeader(numRows) public ([#10949](https://github.com/rapidsai/cudf/pull/10949)) [@gerashegalov](https://github.com/gerashegalov)
- Fix `gcc_linux` version pinning in dev environment ([#10943](https://github.com/rapidsai/cudf/pull/10943)) [@galipremsagar](https://github.com/galipremsagar)
- Fix an issue with reading raw string in `cudf.read_json` ([#10924](https://github.com/rapidsai/cudf/pull/10924)) [@galipremsagar](https://github.com/galipremsagar)
- Make cudf::test::expect_columns_equal() fail when comparing unsanitary lists. ([#10880](https://github.com/rapidsai/cudf/pull/10880)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix segmented_reduce on empty column with non-empty offsets ([#10876](https://github.com/rapidsai/cudf/pull/10876)) [@davidwendt](https://github.com/davidwendt)
- Fix dask-cudf groupby handling when grouping by all columns ([#10866](https://github.com/rapidsai/cudf/pull/10866)) [@charlesbluca](https://github.com/charlesbluca)
- Fix a bug in `distinct`: using nested nulls logic ([#10848](https://github.com/rapidsai/cudf/pull/10848)) [@PointKernel](https://github.com/PointKernel)
- Fix constness / references in weak ordering operator() signatures. ([#10846](https://github.com/rapidsai/cudf/pull/10846)) [@bdice](https://github.com/bdice)
- Suppress sizeof-array-div warnings in thrust found by gcc-11 ([#10840](https://github.com/rapidsai/cudf/pull/10840)) [@robertmaynard](https://github.com/robertmaynard)
- Add handling for string by-columns in dask-cudf groupby ([#10830](https://github.com/rapidsai/cudf/pull/10830)) [@charlesbluca](https://github.com/charlesbluca)
- Fix compile warning in search.cu ([#10827](https://github.com/rapidsai/cudf/pull/10827)) [@davidwendt](https://github.com/davidwendt)
- Fix element access const correctness in `hostdevice_vector` ([#10804](https://github.com/rapidsai/cudf/pull/10804)) [@vuule](https://github.com/vuule)
- Update `cuco` git tag ([#10788](https://github.com/rapidsai/cudf/pull/10788)) [@PointKernel](https://github.com/PointKernel)
- HostColumnVectorCore#isNull should return true for out-of-range rows ([#10779](https://github.com/rapidsai/cudf/pull/10779)) [@gerashegalov](https://github.com/gerashegalov)
- Fixing deprecation warnings in test_orc.py ([#10772](https://github.com/rapidsai/cudf/pull/10772)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Enable writing to `s3` storage in chunked parquet writer ([#10769](https://github.com/rapidsai/cudf/pull/10769)) [@galipremsagar](https://github.com/galipremsagar)
- Fix construction of nested structs with EMPTY child ([#10761](https://github.com/rapidsai/cudf/pull/10761)) [@shwina](https://github.com/shwina)
- Fix replace error when regex has only zero match quantifiers ([#10760](https://github.com/rapidsai/cudf/pull/10760)) [@davidwendt](https://github.com/davidwendt)
- Fix an issue with one_level_list schemas in parquet reader. ([#10750](https://github.com/rapidsai/cudf/pull/10750)) [@nvdbaranec](https://github.com/nvdbaranec)
- update mangle_dupe_cols behavior in csv reader to match pandas 1.4.0 behavior ([#10749](https://github.com/rapidsai/cudf/pull/10749)) [@karthikeyann](https://github.com/karthikeyann)
- Fix `cupy` function in notebook ([#10737](https://github.com/rapidsai/cudf/pull/10737)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix `fillna` to retain `columns` when it is `MultiIndex` ([#10729](https://github.com/rapidsai/cudf/pull/10729)) [@galipremsagar](https://github.com/galipremsagar)
- Fix scatter for all-empty-string column case ([#10724](https://github.com/rapidsai/cudf/pull/10724)) [@davidwendt](https://github.com/davidwendt)
- Retain series name in `Series.apply` ([#10716](https://github.com/rapidsai/cudf/pull/10716)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Correct build dir `cudf-config` dependency issues for static builds ([#10704](https://github.com/rapidsai/cudf/pull/10704)) [@robertmaynard](https://github.com/robertmaynard)
- Fix list of testing requirements in setup.py. ([#10678](https://github.com/rapidsai/cudf/pull/10678)) [@bdice](https://github.com/bdice)
- Fix rounding to zero error in stod on very small float numbers ([#10672](https://github.com/rapidsai/cudf/pull/10672)) [@davidwendt](https://github.com/davidwendt)
- cuco isn't a cudf dependency when we are built shared ([#10662](https://github.com/rapidsai/cudf/pull/10662)) [@robertmaynard](https://github.com/robertmaynard)
- Fix to_timestamps to support Z for %z format specifier ([#10617](https://github.com/rapidsai/cudf/pull/10617)) [@davidwendt](https://github.com/davidwendt)
- Verify compression type in Parquet reader ([#10610](https://github.com/rapidsai/cudf/pull/10610)) [@vuule](https://github.com/vuule)
- Fix struct row comparator's exception on empty structs ([#10604](https://github.com/rapidsai/cudf/pull/10604)) [@sperlingxx](https://github.com/sperlingxx)
- Fix strings strip() to accept only str Scalar for to_strip parameter ([#10597](https://github.com/rapidsai/cudf/pull/10597)) [@davidwendt](https://github.com/davidwendt)
- Fix has_atomic_support check in can_use_hash_groupby() ([#10588](https://github.com/rapidsai/cudf/pull/10588)) [@jbrennan333](https://github.com/jbrennan333)
- Revert Thrust 1.16 to Thrust 1.15 ([#10586](https://github.com/rapidsai/cudf/pull/10586)) [@bdice](https://github.com/bdice)
- Fix missing RMM_STATIC_CUDART define when compiling JNI with static CUDA runtime ([#10585](https://github.com/rapidsai/cudf/pull/10585)) [@jlowe](https://github.com/jlowe)
- pin more cmake versions ([#10570](https://github.com/rapidsai/cudf/pull/10570)) [@robertmaynard](https://github.com/robertmaynard)
- Re-enable Build Metrics Report ([#10562](https://github.com/rapidsai/cudf/pull/10562)) [@davidwendt](https://github.com/davidwendt)
- Remove statically linked CUDA runtime check in Java build ([#10532](https://github.com/rapidsai/cudf/pull/10532)) [@jlowe](https://github.com/jlowe)
- Fix temp data cleanup in `test_text.py` ([#10524](https://github.com/rapidsai/cudf/pull/10524)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Update pre-commit to run black 22.3.0 ([#10523](https://github.com/rapidsai/cudf/pull/10523)) [@vyasr](https://github.com/vyasr)
- Remove deprecated `decimal_cols_as_float` in the ORC reader ([#10515](https://github.com/rapidsai/cudf/pull/10515)) [@vuule](https://github.com/vuule)
- Fix findall_record to return empty list for no matches ([#10491](https://github.com/rapidsai/cudf/pull/10491)) [@davidwendt](https://github.com/davidwendt)
- Allow users to specify data types for a subset of columns in `read_csv` ([#10484](https://github.com/rapidsai/cudf/pull/10484)) [@vuule](https://github.com/vuule)
- Fix default value of str.split expand parameter. ([#10457](https://github.com/rapidsai/cudf/pull/10457)) [@bdice](https://github.com/bdice)
- Improve coverage of dask-cudf's groupby aggregation, add tests for `dropna` support ([#10449](https://github.com/rapidsai/cudf/pull/10449)) [@charlesbluca](https://github.com/charlesbluca)
- Allow string aggs for `dask_cudf.CudfDataFrameGroupBy.aggregate` ([#10222](https://github.com/rapidsai/cudf/pull/10222)) [@charlesbluca](https://github.com/charlesbluca)
- In-place updates with loc or iloc don't work correctly when the LHS has more than one column ([#9918](https://github.com/rapidsai/cudf/pull/9918)) [@skirui-source](https://github.com/skirui-source)

## 📖 Documentation

- Clarify append deprecation notice. ([#10930](https://github.com/rapidsai/cudf/pull/10930)) [@bdice](https://github.com/bdice)
- Use full name of GPUDirect Storage SDK in docs ([#10904](https://github.com/rapidsai/cudf/pull/10904)) [@vuule](https://github.com/vuule)
- Update Dask + Pandas to Dask + cuDF path ([#10897](https://github.com/rapidsai/cudf/pull/10897)) [@miguelusque](https://github.com/miguelusque)
- Add missing documentation in cudf/types.hpp ([#10895](https://github.com/rapidsai/cudf/pull/10895)) [@karthikeyann](https://github.com/karthikeyann)
- Add strong index iterator docs. ([#10888](https://github.com/rapidsai/cudf/pull/10888)) [@bdice](https://github.com/bdice)
- spell check fixes ([#10865](https://github.com/rapidsai/cudf/pull/10865)) [@karthikeyann](https://github.com/karthikeyann)
- Add missing documentation in scalar/ headers ([#10861](https://github.com/rapidsai/cudf/pull/10861)) [@karthikeyann](https://github.com/karthikeyann)
- Remove typo in ngram documentation ([#10859](https://github.com/rapidsai/cudf/pull/10859)) [@miguelusque](https://github.com/miguelusque)
- fix doxygen warnings ([#10842](https://github.com/rapidsai/cudf/pull/10842)) [@karthikeyann](https://github.com/karthikeyann)
- Add a library_design.md file documenting the core Python data structures and their relationship ([#10817](https://github.com/rapidsai/cudf/pull/10817)) [@vyasr](https://github.com/vyasr)
- Add NumPy to intersphinx references. ([#10809](https://github.com/rapidsai/cudf/pull/10809)) [@bdice](https://github.com/bdice)
- Add a section to the docs that compares cuDF with Pandas ([#10796](https://github.com/rapidsai/cudf/pull/10796)) [@shwina](https://github.com/shwina)
- Mention 2 cpp-reviewer requirement in pull request template ([#10768](https://github.com/rapidsai/cudf/pull/10768)) [@davidwendt](https://github.com/davidwendt)
- Enable pydocstyle for all packages. ([#10759](https://github.com/rapidsai/cudf/pull/10759)) [@bdice](https://github.com/bdice)
- Enable pydocstyle rules involving quotes ([#10748](https://github.com/rapidsai/cudf/pull/10748)) [@vyasr](https://github.com/vyasr)
- Revise 10 minutes notebook. ([#10738](https://github.com/rapidsai/cudf/pull/10738)) [@bdice](https://github.com/bdice)
- Reorganize cuDF Python docs ([#10691](https://github.com/rapidsai/cudf/pull/10691)) [@shwina](https://github.com/shwina)
- Fix sphinx/jupyter heading issue in UDF notebook ([#10690](https://github.com/rapidsai/cudf/pull/10690)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Migrated user guide notebooks to MyST-NB and added sphinx extension ([#10685](https://github.com/rapidsai/cudf/pull/10685)) [@mmccarty](https://github.com/mmccarty)
- add data generation to benchmark documentation ([#10677](https://github.com/rapidsai/cudf/pull/10677)) [@karthikeyann](https://github.com/karthikeyann)
- Fix some docs build warnings ([#10674](https://github.com/rapidsai/cudf/pull/10674)) [@galipremsagar](https://github.com/galipremsagar)
- Update UDF notebook in User Guide. ([#10668](https://github.com/rapidsai/cudf/pull/10668)) [@bdice](https://github.com/bdice)
- Improve User Guide docs ([#10663](https://github.com/rapidsai/cudf/pull/10663)) [@bdice](https://github.com/bdice)
- Fix some docstrings formatting ([#10660](https://github.com/rapidsai/cudf/pull/10660)) [@galipremsagar](https://github.com/galipremsagar)
- Remove implementation details from `apply` docstrings ([#10651](https://github.com/rapidsai/cudf/pull/10651)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Revise CONTRIBUTING.md ([#10644](https://github.com/rapidsai/cudf/pull/10644)) [@bdice](https://github.com/bdice)
- Add missing APIs to documentation. ([#10643](https://github.com/rapidsai/cudf/pull/10643)) [@bdice](https://github.com/bdice)
- Use cudf.read_json as documented API name. ([#10640](https://github.com/rapidsai/cudf/pull/10640)) [@bdice](https://github.com/bdice)
- Fix docstring section headings. ([#10639](https://github.com/rapidsai/cudf/pull/10639)) [@bdice](https://github.com/bdice)
- Document cudf.read_text and cudf.read_avro. ([#10638](https://github.com/rapidsai/cudf/pull/10638)) [@bdice](https://github.com/bdice)
- Fix typo in docstring for json_reader_options ([#10627](https://github.com/rapidsai/cudf/pull/10627)) [@dagardner-nv](https://github.com/dagardner-nv)
- Update guide to UDFs with notes about `Series.applymap` deprecation and related changes ([#10607](https://github.com/rapidsai/cudf/pull/10607)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix doxygen Modules page for cudf::lists::sequences ([#10561](https://github.com/rapidsai/cudf/pull/10561)) [@davidwendt](https://github.com/davidwendt)
- Add Replace Backreferences section to Regex Features page ([#10560](https://github.com/rapidsai/cudf/pull/10560)) [@davidwendt](https://github.com/davidwendt)
- Introduce deprecation policy to developer guide. ([#10252](https://github.com/rapidsai/cudf/pull/10252)) [@vyasr](https://github.com/vyasr)

## π New Features

- Enable Zstandard decompression only when all nvcomp integrations are enabled ([#10944](https://github.com/rapidsai/cudf/pull/10944)) [@vuule](https://github.com/vuule)
- Handle nested types in cudf::concatenate_rows() ([#10890](https://github.com/rapidsai/cudf/pull/10890)) [@nvdbaranec](https://github.com/nvdbaranec)
- Strong index types for equality comparator ([#10883](https://github.com/rapidsai/cudf/pull/10883)) [@ttnghia](https://github.com/ttnghia)
- Add parameters to control page size in Parquet writer ([#10882](https://github.com/rapidsai/cudf/pull/10882)) [@etseidl](https://github.com/etseidl)
- Support for Zstandard decompression in ORC reader ([#10873](https://github.com/rapidsai/cudf/pull/10873)) [@vuule](https://github.com/vuule)
- Use pre-built nvcomp 2.3 binaries by default ([#10851](https://github.com/rapidsai/cudf/pull/10851)) [@robertmaynard](https://github.com/robertmaynard)
- Support for Zstandard decompression in Parquet reader ([#10847](https://github.com/rapidsai/cudf/pull/10847)) [@vuule](https://github.com/vuule)
- Add JNI support for apply_boolean_mask ([#10812](https://github.com/rapidsai/cudf/pull/10812)) [@res-life](https://github.com/res-life)
- Segmented Min/Max for Fixed Point Types ([#10794](https://github.com/rapidsai/cudf/pull/10794)) [@isVoid](https://github.com/isVoid)
- Return per-file metadata from readers ([#10782](https://github.com/rapidsai/cudf/pull/10782)) [@vuule](https://github.com/vuule)
- Segmented `apply_boolean_mask` for `LIST` columns ([#10773](https://github.com/rapidsai/cudf/pull/10773)) [@mythrocks](https://github.com/mythrocks)
- Update `groupby::hash` to use new row operators for keys ([#10770](https://github.com/rapidsai/cudf/pull/10770)) [@PointKernel](https://github.com/PointKernel)
- Support purging non-empty null elements from LIST/STRING columns ([#10701](https://github.com/rapidsai/cudf/pull/10701)) [@mythrocks](https://github.com/mythrocks)
- Add `detail::hash_join` ([#10695](https://github.com/rapidsai/cudf/pull/10695)) [@PointKernel](https://github.com/PointKernel)
- Persist string statistics data across multiple calls to orc chunked write ([#10694](https://github.com/rapidsai/cudf/pull/10694)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add `.list.astype()` to cast list leaves to specified dtype ([#10693](https://github.com/rapidsai/cudf/pull/10693)) [@shwina](https://github.com/shwina)
- JNI: Add generateListOffsets API ([#10683](https://github.com/rapidsai/cudf/pull/10683)) [@sperlingxx](https://github.com/sperlingxx)
- Support `args` in groupby apply ([#10682](https://github.com/rapidsai/cudf/pull/10682)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Enable segmented_gather in Java package ([#10669](https://github.com/rapidsai/cudf/pull/10669)) [@sperlingxx](https://github.com/sperlingxx)
- Add row hasher with nested column support ([#10641](https://github.com/rapidsai/cudf/pull/10641)) [@devavret](https://github.com/devavret)
- Add support for numeric_only in DataFrame._reduce ([#10629](https://github.com/rapidsai/cudf/pull/10629)) [@martinfalisse](https://github.com/martinfalisse)
- First step toward statistics in ORC files with chunked writes ([#10567](https://github.com/rapidsai/cudf/pull/10567)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add support for struct columns to the random table generator ([#10566](https://github.com/rapidsai/cudf/pull/10566)) [@vuule](https://github.com/vuule)
- Enable passing a sequence for the `index` argument to `.list.get()` ([#10564](https://github.com/rapidsai/cudf/pull/10564)) [@shwina](https://github.com/shwina)
- Add python bindings for cudf::list::index_of ([#10549](https://github.com/rapidsai/cudf/pull/10549)) [@ChrisJar](https://github.com/ChrisJar)
- Add default= kwarg to .list.get() accessor method ([#10547](https://github.com/rapidsai/cudf/pull/10547)) [@shwina](https://github.com/shwina)
- Add `cudf.DataFrame.applymap` ([#10542](https://github.com/rapidsai/cudf/pull/10542)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Support nvcomp 2.3 if available locally, otherwise use nvcomp 2.2 ([#10513](https://github.com/rapidsai/cudf/pull/10513)) [@robertmaynard](https://github.com/robertmaynard)
- Add column field ID control in parquet writer ([#10504](https://github.com/rapidsai/cudf/pull/10504)) [@PointKernel](https://github.com/PointKernel)
- Deprecate `Series.applymap` ([#10497](https://github.com/rapidsai/cudf/pull/10497)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add option to drop cache in cuIO benchmarks ([#10488](https://github.com/rapidsai/cudf/pull/10488)) [@vuule](https://github.com/vuule)
- Move benchmark input generation to device in reduction nvbench ([#10486](https://github.com/rapidsai/cudf/pull/10486)) [@karthikeyann](https://github.com/karthikeyann)
- Support Segmented Min/Max Reduction on String Type ([#10447](https://github.com/rapidsai/cudf/pull/10447)) [@isVoid](https://github.com/isVoid)
- List element Equality comparator ([#10289](https://github.com/rapidsai/cudf/pull/10289)) [@devavret](https://github.com/devavret)
- Implement all methods of groupby rank aggregation in libcudf, python ([#9569](https://github.com/rapidsai/cudf/pull/9569)) [@karthikeyann](https://github.com/karthikeyann)
- Implement DataFrame.eval using libcudf ASTs ([#8022](https://github.com/rapidsai/cudf/pull/8022)) [@vyasr](https://github.com/vyasr)

## π οΈ Improvements

- Use `conda` compilers in env file ([#10915](https://github.com/rapidsai/cudf/pull/10915)) [@galipremsagar](https://github.com/galipremsagar)
- Remove C style artifacts in cuIO ([#10886](https://github.com/rapidsai/cudf/pull/10886)) [@vuule](https://github.com/vuule)
- Rename `sliced_child` to `get_sliced_child`. ([#10885](https://github.com/rapidsai/cudf/pull/10885)) [@bdice](https://github.com/bdice)
- Replace defaulted stream value for libcudf APIs that use NVCOMP ([#10877](https://github.com/rapidsai/cudf/pull/10877)) [@jbrennan333](https://github.com/jbrennan333)
- Add more unit tests for `cudf::distinct` for nested types with sliced input ([#10860](https://github.com/rapidsai/cudf/pull/10860)) [@ttnghia](https://github.com/ttnghia)
- Changing `list_view.cuh` to `list_view.hpp` ([#10854](https://github.com/rapidsai/cudf/pull/10854)) [@ttnghia](https://github.com/ttnghia)
- More error checking in `from_dlpack` ([#10850](https://github.com/rapidsai/cudf/pull/10850)) [@wence-](https://github.com/wence-)
- Cleanup regex compiler fixed quantifiers source ([#10843](https://github.com/rapidsai/cudf/pull/10843)) [@davidwendt](https://github.com/davidwendt)
- Adds the JNI call for Cuda.deviceSynchronize ([#10839](https://github.com/rapidsai/cudf/pull/10839)) [@abellina](https://github.com/abellina)
- Add missing cuda-python dependency to cudf ([#10833](https://github.com/rapidsai/cudf/pull/10833)) [@bdice](https://github.com/bdice)
- Change std::string parameters in cudf::strings APIs to std::string_view ([#10832](https://github.com/rapidsai/cudf/pull/10832)) [@davidwendt](https://github.com/davidwendt)
- Split up search.cu to improve compile time ([#10831](https://github.com/rapidsai/cudf/pull/10831)) [@davidwendt](https://github.com/davidwendt)
- Add tests for null scalar binaryops ([#10828](https://github.com/rapidsai/cudf/pull/10828)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Cleanup regex compile optimize functions ([#10825](https://github.com/rapidsai/cudf/pull/10825)) [@davidwendt](https://github.com/davidwendt)
- Use `ThreadedMotoServer` instead of `subprocess` in spinning up `s3` server ([#10822](https://github.com/rapidsai/cudf/pull/10822)) [@galipremsagar](https://github.com/galipremsagar)
- Import `NA` from `missing` rather than using `cudf.NA` everywhere ([#10821](https://github.com/rapidsai/cudf/pull/10821)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Refactor regex builtin character-class identifiers ([#10814](https://github.com/rapidsai/cudf/pull/10814)) [@davidwendt](https://github.com/davidwendt)
- Change pattern parameter for regex APIs from std::string to std::string_view ([#10810](https://github.com/rapidsai/cudf/pull/10810)) [@davidwendt](https://github.com/davidwendt)
- Make the JNI API to get list offsets as a view public. ([#10807](https://github.com/rapidsai/cudf/pull/10807)) [@revans2](https://github.com/revans2)
- Add cudf JNI docker build github action ([#10806](https://github.com/rapidsai/cudf/pull/10806)) [@pxLi](https://github.com/pxLi)
- Removed `mr` parameter from inplace bitmask operations ([#10805](https://github.com/rapidsai/cudf/pull/10805)) [@AtlantaPepsi](https://github.com/AtlantaPepsi)
- Refactor `cudf::contains`, renaming and switching parameters role ([#10802](https://github.com/rapidsai/cudf/pull/10802)) [@ttnghia](https://github.com/ttnghia)
- Handle closed property in IntervalDtype.from_pandas ([#10798](https://github.com/rapidsai/cudf/pull/10798)) [@wence-](https://github.com/wence-)
- Return weak orderings from `device_row_comparator`. ([#10793](https://github.com/rapidsai/cudf/pull/10793)) [@rwlee](https://github.com/rwlee)
- Rework `Scalar` imports ([#10791](https://github.com/rapidsai/cudf/pull/10791)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Enable ccache for cudfjni build in Docker ([#10790](https://github.com/rapidsai/cudf/pull/10790)) [@gerashegalov](https://github.com/gerashegalov)
- Generic serialization of all column types ([#10784](https://github.com/rapidsai/cudf/pull/10784)) [@wence-](https://github.com/wence-)
- Simplify skiprows test in test_orc.py ([#10783](https://github.com/rapidsai/cudf/pull/10783)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Use column_views instead of column_device_views in binary operations. ([#10780](https://github.com/rapidsai/cudf/pull/10780)) [@bdice](https://github.com/bdice)
- Add struct utility functions. ([#10776](https://github.com/rapidsai/cudf/pull/10776)) [@bdice](https://github.com/bdice)
- Add multiple rows to subword tokenizer benchmark ([#10767](https://github.com/rapidsai/cudf/pull/10767)) [@davidwendt](https://github.com/davidwendt)
- Refactor host decompression in ORC reader ([#10764](https://github.com/rapidsai/cudf/pull/10764)) [@vuule](https://github.com/vuule)
- Flush output streams before creating a process to drop caches ([#10762](https://github.com/rapidsai/cudf/pull/10762)) [@vuule](https://github.com/vuule)
- Refactor binaryop/compiled/util.cpp ([#10756](https://github.com/rapidsai/cudf/pull/10756)) [@bdice](https://github.com/bdice)
- Use warp per string for long strings in cudf::strings::contains() ([#10739](https://github.com/rapidsai/cudf/pull/10739)) [@davidwendt](https://github.com/davidwendt)
- Use generator expressions in any/all functions. ([#10736](https://github.com/rapidsai/cudf/pull/10736)) [@bdice](https://github.com/bdice)
- Use canonical "magic methods" (replace `x.__repr__()` with `repr(x)`). ([#10735](https://github.com/rapidsai/cudf/pull/10735)) [@bdice](https://github.com/bdice)
- Improve use of isinstance. ([#10734](https://github.com/rapidsai/cudf/pull/10734)) [@bdice](https://github.com/bdice)
- Rename tests from multiIndex to multiindex. ([#10732](https://github.com/rapidsai/cudf/pull/10732)) [@bdice](https://github.com/bdice)
- Two-table comparators with strong index types ([#10730](https://github.com/rapidsai/cudf/pull/10730)) [@bdice](https://github.com/bdice)
- Replace std::make_pair with std::pair (C++17 CTAD) ([#10727](https://github.com/rapidsai/cudf/pull/10727)) [@karthikeyann](https://github.com/karthikeyann)
- Use structured bindings instead of std::tie ([#10726](https://github.com/rapidsai/cudf/pull/10726)) [@karthikeyann](https://github.com/karthikeyann)
- Fix missing `f` prefix on f-strings ([#10721](https://github.com/rapidsai/cudf/pull/10721)) [@code-review-doctor](https://github.com/code-review-doctor)
- Add `max_file_size` parameter to chunked parquet dataset writer ([#10718](https://github.com/rapidsai/cudf/pull/10718)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `merge_sorted`, change dask cudf usage to internal method ([#10713](https://github.com/rapidsai/cudf/pull/10713)) [@isVoid](https://github.com/isVoid)
- Prepare dask_cudf test_parquet.py for upcoming API changes ([#10709](https://github.com/rapidsai/cudf/pull/10709)) [@rjzamora](https://github.com/rjzamora)
- Remove or simplify various utility functions ([#10705](https://github.com/rapidsai/cudf/pull/10705)) [@vyasr](https://github.com/vyasr)
- Allow building arrow with parquet and not python ([#10702](https://github.com/rapidsai/cudf/pull/10702)) [@revans2](https://github.com/revans2)
- Partial cuIO GPU decompression refactor ([#10699](https://github.com/rapidsai/cudf/pull/10699)) [@vuule](https://github.com/vuule)
- Cython API refactor: `merge.pyx` ([#10698](https://github.com/rapidsai/cudf/pull/10698)) [@isVoid](https://github.com/isVoid)
- Fix random string data length to become variable ([#10697](https://github.com/rapidsai/cudf/pull/10697)) [@galipremsagar](https://github.com/galipremsagar)
- Add bindings for index_of with column search key ([#10696](https://github.com/rapidsai/cudf/pull/10696)) [@ChrisJar](https://github.com/ChrisJar)
- Deprecate index merging ([#10689](https://github.com/rapidsai/cudf/pull/10689)) [@vyasr](https://github.com/vyasr)
- Remove cudf::strings::string namespace ([#10684](https://github.com/rapidsai/cudf/pull/10684)) [@davidwendt](https://github.com/davidwendt)
- Standardize imports. ([#10680](https://github.com/rapidsai/cudf/pull/10680)) [@bdice](https://github.com/bdice)
- Standardize usage of collections.abc. ([#10679](https://github.com/rapidsai/cudf/pull/10679)) [@bdice](https://github.com/bdice)
- Cython API Refactor: `transpose.pyx`, `sort.pyx` ([#10675](https://github.com/rapidsai/cudf/pull/10675)) [@isVoid](https://github.com/isVoid)
- Add device_memory_resource parameter to create_string_vector_from_column ([#10673](https://github.com/rapidsai/cudf/pull/10673)) [@davidwendt](https://github.com/davidwendt)
- Split up mixed-join kernels source files ([#10671](https://github.com/rapidsai/cudf/pull/10671)) [@davidwendt](https://github.com/davidwendt)
- Use `std::filesystem` for temporary directory location and deletion ([#10664](https://github.com/rapidsai/cudf/pull/10664)) [@vuule](https://github.com/vuule)
- Clean up benchmark includes ([#10661](https://github.com/rapidsai/cudf/pull/10661)) [@karthikeyann](https://github.com/karthikeyann)
- Use upstream clang-format pre-commit hook. ([#10659](https://github.com/rapidsai/cudf/pull/10659)) [@bdice](https://github.com/bdice)
- Clean up C++ includes to use <> instead of "". ([#10658](https://github.com/rapidsai/cudf/pull/10658)) [@bdice](https://github.com/bdice)
- Handle RuntimeError thrown by CUDA Python in `validate_setup` ([#10653](https://github.com/rapidsai/cudf/pull/10653)) [@shwina](https://github.com/shwina)
- Rework JNI CMake to leverage rapids_find_package ([#10649](https://github.com/rapidsai/cudf/pull/10649)) [@jlowe](https://github.com/jlowe)
- Use conda to build python packages during GPU tests ([#10648](https://github.com/rapidsai/cudf/pull/10648)) [@Ethyling](https://github.com/Ethyling)
- Deprecate various functions that don't need to be defined for Index. ([#10647](https://github.com/rapidsai/cudf/pull/10647)) [@vyasr](https://github.com/vyasr)
- Update pinning to allow newer CMake versions. ([#10646](https://github.com/rapidsai/cudf/pull/10646)) [@vyasr](https://github.com/vyasr)
- Bump hadoop-common from 3.1.4 to 3.2.3 in /java ([#10645](https://github.com/rapidsai/cudf/pull/10645)) [@dependabot[bot]](https://github.com/dependabot[bot])
- Remove `concurrent_unordered_multimap`. ([#10642](https://github.com/rapidsai/cudf/pull/10642)) [@bdice](https://github.com/bdice)
- Improve parquet dictionary encoding ([#10635](https://github.com/rapidsai/cudf/pull/10635)) [@PointKernel](https://github.com/PointKernel)
- Improve cudf::cuda_error ([#10630](https://github.com/rapidsai/cudf/pull/10630)) [@sperlingxx](https://github.com/sperlingxx)
- Add support for null and non-numeric types in Series.diff and DataFrame.diff ([#10625](https://github.com/rapidsai/cudf/pull/10625)) [@Matt711](https://github.com/Matt711)
- Branch 22.06 merge 22.04 ([#10624](https://github.com/rapidsai/cudf/pull/10624)) [@vyasr](https://github.com/vyasr)
- Unpin `dask` & `distributed` for development ([#10623](https://github.com/rapidsai/cudf/pull/10623)) [@galipremsagar](https://github.com/galipremsagar)
- Slightly improve accuracy of stod in to_floats ([#10622](https://github.com/rapidsai/cudf/pull/10622)) [@davidwendt](https://github.com/davidwendt)
- Allow libcudfjni to be built as a static library ([#10619](https://github.com/rapidsai/cudf/pull/10619)) [@jlowe](https://github.com/jlowe)
- Change stack-based regex state data to use global memory ([#10600](https://github.com/rapidsai/cudf/pull/10600)) [@davidwendt](https://github.com/davidwendt)
- Resolve Forward merging of `branch-22.04` into `branch-22.06` ([#10598](https://github.com/rapidsai/cudf/pull/10598)) [@galipremsagar](https://github.com/galipremsagar)
- KvikIO as an alternative GDS backend ([#10593](https://github.com/rapidsai/cudf/pull/10593)) [@madsbk](https://github.com/madsbk)
- Rename CUDA_TRY macro to CUDF_CUDA_TRY, rename CHECK_CUDA macro to CUDF_CHECK_CUDA. ([#10589](https://github.com/rapidsai/cudf/pull/10589)) [@bdice](https://github.com/bdice)
- Upgrade `cudf` to support `pandas` 1.4.x versions ([#10584](https://github.com/rapidsai/cudf/pull/10584)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor binary ops for timedelta and datetime columns ([#10581](https://github.com/rapidsai/cudf/pull/10581)) [@vyasr](https://github.com/vyasr)
- Refactor cudf::strings::count_re API to use count_matches utility ([#10580](https://github.com/rapidsai/cudf/pull/10580)) [@davidwendt](https://github.com/davidwendt)
- Update `Programming Language :: Python` Versions to 3.8 & 3.9 ([#10579](https://github.com/rapidsai/cudf/pull/10579)) [@madsbk](https://github.com/madsbk)
- Automate Java cudf jar build with statically linked dependencies ([#10578](https://github.com/rapidsai/cudf/pull/10578)) [@gerashegalov](https://github.com/gerashegalov)
- Add patch for thrust-cub 1.16 to fix sort compile times ([#10577](https://github.com/rapidsai/cudf/pull/10577)) [@davidwendt](https://github.com/davidwendt)
- Move binop methods from Frame to IndexedFrame and standardize the docstring ([#10576](https://github.com/rapidsai/cudf/pull/10576)) [@vyasr](https://github.com/vyasr)
- Cleanup libcudf strings regex classes ([#10573](https://github.com/rapidsai/cudf/pull/10573)) [@davidwendt](https://github.com/davidwendt)
- Simplify preprocessing of arguments for DataFrame binops ([#10563](https://github.com/rapidsai/cudf/pull/10563)) [@vyasr](https://github.com/vyasr)
- Reduce kernel calls to build strings findall results ([#10559](https://github.com/rapidsai/cudf/pull/10559)) [@davidwendt](https://github.com/davidwendt)
- Forward-merge branch-22.04 to branch-22.06 ([#10557](https://github.com/rapidsai/cudf/pull/10557)) [@bdice](https://github.com/bdice)
- Update strings contains benchmark to measure varying match rates ([#10555](https://github.com/rapidsai/cudf/pull/10555)) [@davidwendt](https://github.com/davidwendt)
- JNI: throw CUDA errors more specifically ([#10551](https://github.com/rapidsai/cudf/pull/10551)) [@sperlingxx](https://github.com/sperlingxx)
- Enable building static libs ([#10545](https://github.com/rapidsai/cudf/pull/10545)) [@trxcllnt](https://github.com/trxcllnt)
- Remove pip requirements files. ([#10543](https://github.com/rapidsai/cudf/pull/10543)) [@bdice](https://github.com/bdice)
- Remove Click pinnings that are unnecessary after upgrading black. ([#10541](https://github.com/rapidsai/cudf/pull/10541)) [@vyasr](https://github.com/vyasr)
- Refactor `memory_usage` to improve performance ([#10537](https://github.com/rapidsai/cudf/pull/10537)) [@galipremsagar](https://github.com/galipremsagar)
- Adjust the valid range of group index for replace_with_backrefs ([#10530](https://github.com/rapidsai/cudf/pull/10530)) [@sperlingxx](https://github.com/sperlingxx)
- Add accidentally removed comment. ([#10526](https://github.com/rapidsai/cudf/pull/10526)) [@vyasr](https://github.com/vyasr)
- Update conda environment. ([#10525](https://github.com/rapidsai/cudf/pull/10525)) [@vyasr](https://github.com/vyasr)
- Remove ColumnBase.__getitem__ ([#10516](https://github.com/rapidsai/cudf/pull/10516)) [@vyasr](https://github.com/vyasr)
- Optimize `left_semi_join` by materializing the gather mask ([#10511](https://github.com/rapidsai/cudf/pull/10511)) [@cheinger](https://github.com/cheinger)
- Define proper binary operation APIs for columns ([#10509](https://github.com/rapidsai/cudf/pull/10509)) [@vyasr](https://github.com/vyasr)
- Upgrade `arrow-cpp` & `pyarrow` to `7.0.0` ([#10503](https://github.com/rapidsai/cudf/pull/10503)) [@galipremsagar](https://github.com/galipremsagar)
- Update to Thrust 1.16 ([#10489](https://github.com/rapidsai/cudf/pull/10489)) [@bdice](https://github.com/bdice)
- Namespace/Docstring Fixes for Reduction ([#10471](https://github.com/rapidsai/cudf/pull/10471)) [@isVoid](https://github.com/isVoid)
- Update cudfjni 22.06.0-SNAPSHOT ([#10467](https://github.com/rapidsai/cudf/pull/10467)) [@pxLi](https://github.com/pxLi)
- Use Lists of Columns for Various Files ([#10463](https://github.com/rapidsai/cudf/pull/10463)) [@isVoid](https://github.com/isVoid)
- Additional refactoring of hash functions ([#10462](https://github.com/rapidsai/cudf/pull/10462)) [@bdice](https://github.com/bdice)
- Fix Series.str.findall behavior for expand=False. ([#10459](https://github.com/rapidsai/cudf/pull/10459)) [@bdice](https://github.com/bdice)
- Remove deprecated code. ([#10450](https://github.com/rapidsai/cudf/pull/10450)) [@vyasr](https://github.com/vyasr)
- Update cmake-format version. ([#10440](https://github.com/rapidsai/cudf/pull/10440)) [@vyasr](https://github.com/vyasr)
- Consolidate C++ `conda` recipes and add `libcudf-tests` package ([#10326](https://github.com/rapidsai/cudf/pull/10326)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use conda compilers ([#10275](https://github.com/rapidsai/cudf/pull/10275)) [@Ethyling](https://github.com/Ethyling)
- Add row bitmask as a `detail::hash_join` member ([#10248](https://github.com/rapidsai/cudf/pull/10248)) [@PointKernel](https://github.com/PointKernel)

# cuDF 22.04.00 (6 Apr 2022)

## π¨ Breaking Changes

- Drop unsupported method argument from nunique and distinct_count. ([#10411](https://github.com/rapidsai/cudf/pull/10411)) [@bdice](https://github.com/bdice)
- Refactor stream compaction APIs ([#10370](https://github.com/rapidsai/cudf/pull/10370)) [@PointKernel](https://github.com/PointKernel)
- Add scan_aggregation and reduce_aggregation derived types. ([#10357](https://github.com/rapidsai/cudf/pull/10357)) [@nvdbaranec](https://github.com/nvdbaranec)
- Avoid `decimal` type narrowing for decimal binops ([#10299](https://github.com/rapidsai/cudf/pull/10299)) [@galipremsagar](https://github.com/galipremsagar)
- Rewrites `sample` API ([#10262](https://github.com/rapidsai/cudf/pull/10262)) [@isVoid](https://github.com/isVoid)
- Remove probe-time null equality parameters in `cudf::hash_join` ([#10260](https://github.com/rapidsai/cudf/pull/10260)) [@PointKernel](https://github.com/PointKernel)
- Enable proper `Index` round-tripping in `orc` reader and writer ([#10170](https://github.com/rapidsai/cudf/pull/10170)) [@galipremsagar](https://github.com/galipremsagar)
- Add JNI for `strings::split_re` and `strings::split_record_re` ([#10139](https://github.com/rapidsai/cudf/pull/10139)) [@ttnghia](https://github.com/ttnghia)
- Change cudf::strings::find_multiple to return a lists column ([#10134](https://github.com/rapidsai/cudf/pull/10134)) [@davidwendt](https://github.com/davidwendt)
- Remove the option to completely disable decimal128 columns in the ORC reader ([#10127](https://github.com/rapidsai/cudf/pull/10127)) [@vuule](https://github.com/vuule)
- Remove deprecated code ([#10124](https://github.com/rapidsai/cudf/pull/10124)) [@vyasr](https://github.com/vyasr)
- Update gpu_utils.py to reflect current CUDA support. ([#10113](https://github.com/rapidsai/cudf/pull/10113)) [@bdice](https://github.com/bdice)
- Optimize compaction operations ([#10030](https://github.com/rapidsai/cudf/pull/10030)) [@PointKernel](https://github.com/PointKernel)
- Remove deprecated method Series.set_index. ([#9945](https://github.com/rapidsai/cudf/pull/9945)) [@bdice](https://github.com/bdice)
- Add cudf::strings::findall_record API ([#9911](https://github.com/rapidsai/cudf/pull/9911)) [@davidwendt](https://github.com/davidwendt)
- Upgrade `arrow` & `pyarrow` to `6.0.1` ([#9686](https://github.com/rapidsai/cudf/pull/9686)) [@galipremsagar](https://github.com/galipremsagar)

## π Bug Fixes

- Fix an issue with tdigest merge aggregations. ([#10506](https://github.com/rapidsai/cudf/pull/10506)) [@nvdbaranec](https://github.com/nvdbaranec)
- Batch of fixes for index overflows in grid stride loops. ([#10448](https://github.com/rapidsai/cudf/pull/10448)) [@nvdbaranec](https://github.com/nvdbaranec)
- Update dask_cudf imports to be compatible with latest dask ([#10442](https://github.com/rapidsai/cudf/pull/10442)) [@rlratzel](https://github.com/rlratzel)
- Fix for integer overflow in contiguous-split ([#10437](https://github.com/rapidsai/cudf/pull/10437)) [@jbrennan333](https://github.com/jbrennan333)
- Fix has_null predicate for drop_list_duplicates on nested structs ([#10436](https://github.com/rapidsai/cudf/pull/10436)) [@sperlingxx](https://github.com/sperlingxx)
- Fix empty reduce with List output and non-List input ([#10435](https://github.com/rapidsai/cudf/pull/10435)) [@sperlingxx](https://github.com/sperlingxx)
- Fix `list` and `struct` meta generation issue in `dask-cudf` ([#10434](https://github.com/rapidsai/cudf/pull/10434)) [@galipremsagar](https://github.com/galipremsagar)
- Fix error in `cudf.to_numeric` when a `bool` input is passed ([#10431](https://github.com/rapidsai/cudf/pull/10431)) [@galipremsagar](https://github.com/galipremsagar)
- Support cupy array in `quantile` input ([#10429](https://github.com/rapidsai/cudf/pull/10429)) [@galipremsagar](https://github.com/galipremsagar)
- Fix benchmarks to work with new aggregation types ([#10428](https://github.com/rapidsai/cudf/pull/10428)) [@davidwendt](https://github.com/davidwendt)
- Fix cudf::shift to handle offset greater than column size ([#10414](https://github.com/rapidsai/cudf/pull/10414)) [@davidwendt](https://github.com/davidwendt)
- Fix lifespan of the temporary directory that holds cuFile configuration file ([#10403](https://github.com/rapidsai/cudf/pull/10403)) [@vuule](https://github.com/vuule)
- Fix error thrown in compiled-binaryop benchmark ([#10398](https://github.com/rapidsai/cudf/pull/10398)) [@davidwendt](https://github.com/davidwendt)
- Limit async allocator to use alignment of 512 ([#10395](https://github.com/rapidsai/cudf/pull/10395)) [@rongou](https://github.com/rongou)
- Include <optional> in multibyte split. ([#10385](https://github.com/rapidsai/cudf/pull/10385)) [@bdice](https://github.com/bdice)
- Fix issue with column and scalar re-assignment ([#10377](https://github.com/rapidsai/cudf/pull/10377)) [@galipremsagar](https://github.com/galipremsagar)
- Fix floating point data generation in benchmarks ([#10372](https://github.com/rapidsai/cudf/pull/10372)) [@vuule](https://github.com/vuule)
- Avoid overflow in fused_concatenate_kernel output_index ([#10344](https://github.com/rapidsai/cudf/pull/10344)) [@abellina](https://github.com/abellina)
- Remove is_relationally_comparable for table device views ([#10342](https://github.com/rapidsai/cudf/pull/10342)) [@davidwendt](https://github.com/davidwendt)
- Fix debug compile error in device_span to column_view conversion ([#10331](https://github.com/rapidsai/cudf/pull/10331)) [@davidwendt](https://github.com/davidwendt)
- Add Pascal support to JCUDF transcode (row_conversion) ([#10329](https://github.com/rapidsai/cudf/pull/10329)) [@mythrocks](https://github.com/mythrocks)
- Fix `std::bad_alloc` exception due to JIT reserving a huge buffer ([#10317](https://github.com/rapidsai/cudf/pull/10317)) [@ttnghia](https://github.com/ttnghia)
- Fix overflowed fixed-point round on nullable columns ([#10316](https://github.com/rapidsai/cudf/pull/10316)) [@sperlingxx](https://github.com/sperlingxx)
- Fix DataFrame slicing issues for empty cases ([#10310](https://github.com/rapidsai/cudf/pull/10310)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix documentation issues ([#10307](https://github.com/rapidsai/cudf/pull/10307)) [@ajschmidt8](https://github.com/ajschmidt8)
- Allow Java bindings to use default decimal precisions when writing columns ([#10276](https://github.com/rapidsai/cudf/pull/10276)) [@sperlingxx](https://github.com/sperlingxx)
- Fix incorrect slicing of GDS read/write calls ([#10274](https://github.com/rapidsai/cudf/pull/10274)) [@vuule](https://github.com/vuule)
- Fix out-of-memory error in compiled-binaryop benchmark ([#10269](https://github.com/rapidsai/cudf/pull/10269)) [@davidwendt](https://github.com/davidwendt)
- Add tests of reflected ufuncs and fix behavior of logical reflected ufuncs ([#10261](https://github.com/rapidsai/cudf/pull/10261)) [@vyasr](https://github.com/vyasr)
- Remove probe-time null equality parameters in `cudf::hash_join` ([#10260](https://github.com/rapidsai/cudf/pull/10260)) [@PointKernel](https://github.com/PointKernel)
- Fix out-of-memory error in UrlDecode benchmark ([#10258](https://github.com/rapidsai/cudf/pull/10258)) [@davidwendt](https://github.com/davidwendt)
- Fix groupby reductions that perform operations on source type instead of target type ([#10250](https://github.com/rapidsai/cudf/pull/10250)) [@ttnghia](https://github.com/ttnghia)
- Fix small leak in explode ([#10245](https://github.com/rapidsai/cudf/pull/10245)) [@revans2](https://github.com/revans2)
- Yet another small JNI memory leak ([#10238](https://github.com/rapidsai/cudf/pull/10238)) [@revans2](https://github.com/revans2)
- Fix regex octal parsing to limit to 3 characters ([#10233](https://github.com/rapidsai/cudf/pull/10233)) [@davidwendt](https://github.com/davidwendt)
- Fix string to decimal128 conversion handling large exponents ([#10231](https://github.com/rapidsai/cudf/pull/10231)) [@davidwendt](https://github.com/davidwendt)
- Fix JNI leak on copy to device ([#10229](https://github.com/rapidsai/cudf/pull/10229)) [@revans2](https://github.com/revans2)
- Fix the data generator element size for decimal types ([#10225](https://github.com/rapidsai/cudf/pull/10225)) [@vuule](https://github.com/vuule)
- Fix `decimal` metadata in parquet writer ([#10224](https://github.com/rapidsai/cudf/pull/10224)) [@galipremsagar](https://github.com/galipremsagar)
- Fix strings handling of hex in regex pattern ([#10220](https://github.com/rapidsai/cudf/pull/10220)) [@davidwendt](https://github.com/davidwendt)
- Fix docs builds ([#10216](https://github.com/rapidsai/cudf/pull/10216)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix a leftover _has_nulls change from Nullate ([#10211](https://github.com/rapidsai/cudf/pull/10211)) [@devavret](https://github.com/devavret)
- Fix bitmask of the output for JNI of `lists::drop_list_duplicates` ([#10210](https://github.com/rapidsai/cudf/pull/10210)) [@ttnghia](https://github.com/ttnghia)
- Fix compile error in `binaryop/compiled/util.cpp` ([#10209](https://github.com/rapidsai/cudf/pull/10209)) [@ttnghia](https://github.com/ttnghia)
- Skip ORC and Parquet readers' benchmark cases that are not currently supported ([#10194](https://github.com/rapidsai/cudf/pull/10194)) [@vuule](https://github.com/vuule)
- Fix JNI leak of a cudf::column_view native class. ([#10171](https://github.com/rapidsai/cudf/pull/10171)) [@revans2](https://github.com/revans2)
- Enable proper `Index` round-tripping in `orc` reader and writer ([#10170](https://github.com/rapidsai/cudf/pull/10170)) [@galipremsagar](https://github.com/galipremsagar)
- Convert Column Name to String Before Using Struct Column Factory ([#10156](https://github.com/rapidsai/cudf/pull/10156)) [@isVoid](https://github.com/isVoid)
- Preserve the correct `ListDtype` while creating an identical empty column ([#10151](https://github.com/rapidsai/cudf/pull/10151)) [@galipremsagar](https://github.com/galipremsagar)
- Fix static object pointer in benchmark fixture ([#10145](https://github.com/rapidsai/cudf/pull/10145)) [@karthikeyann](https://github.com/karthikeyann)
- Fix UDF Caching ([#10133](https://github.com/rapidsai/cudf/pull/10133)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Raise duplicate column error in `DataFrame.rename` ([#10120](https://github.com/rapidsai/cudf/pull/10120)) [@galipremsagar](https://github.com/galipremsagar)
- Fix flaky memory usage test by guaranteeing array size. ([#10114](https://github.com/rapidsai/cudf/pull/10114)) [@vyasr](https://github.com/vyasr)
- Encode values from python callback for C++ ([#10103](https://github.com/rapidsai/cudf/pull/10103)) [@jdye64](https://github.com/jdye64)
- Add check for regex instructions causing an infinite-loop ([#10095](https://github.com/rapidsai/cudf/pull/10095)) [@davidwendt](https://github.com/davidwendt)
- Remove metadata singleton from nvtext normalizer ([#10090](https://github.com/rapidsai/cudf/pull/10090)) [@davidwendt](https://github.com/davidwendt)
- Column equality testing fixes ([#10011](https://github.com/rapidsai/cudf/pull/10011)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Pin libcudf runtime dependency for cudf / libcudf-kafka nightlies ([#9847](https://github.com/rapidsai/cudf/pull/9847)) [@charlesbluca](https://github.com/charlesbluca)

## π Documentation

- Fix documentation for DataFrame.corr and Series.corr. ([#10493](https://github.com/rapidsai/cudf/pull/10493)) [@bdice](https://github.com/bdice)
- Add `cut` to API docs ([#10479](https://github.com/rapidsai/cudf/pull/10479)) [@shwina](https://github.com/shwina)
- Remove documentation for methods removed in #10124. ([#10366](https://github.com/rapidsai/cudf/pull/10366)) [@bdice](https://github.com/bdice)
- Fix documentation issues ([#10306](https://github.com/rapidsai/cudf/pull/10306)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix `fixed_point` binary operation documentation ([#10198](https://github.com/rapidsai/cudf/pull/10198)) [@codereport](https://github.com/codereport)
- Remove cleaned up methods from docs ([#10189](https://github.com/rapidsai/cudf/pull/10189)) [@galipremsagar](https://github.com/galipremsagar)
- Update developer guide to recommend no default stream parameter. ([#10136](https://github.com/rapidsai/cudf/pull/10136)) [@bdice](https://github.com/bdice)
- Update benchmarking guide to use NVBench. ([#10093](https://github.com/rapidsai/cudf/pull/10093)) [@bdice](https://github.com/bdice)

## π New Features

- Add StringIO support to read_text ([#10465](https://github.com/rapidsai/cudf/pull/10465)) [@cwharris](https://github.com/cwharris)
- Add support for tdigest and merge_tdigest aggregations through cudf::reduce ([#10433](https://github.com/rapidsai/cudf/pull/10433)) [@nvdbaranec](https://github.com/nvdbaranec)
- JNI support for Collect Ops in Reduction ([#10427](https://github.com/rapidsai/cudf/pull/10427)) [@sperlingxx](https://github.com/sperlingxx)
- Enable read_text with dask_cudf using byte_range ([#10407](https://github.com/rapidsai/cudf/pull/10407)) [@ChrisJar](https://github.com/ChrisJar)
- Add `cudf::stable_sort_by_key` ([#10387](https://github.com/rapidsai/cudf/pull/10387)) [@PointKernel](https://github.com/PointKernel)
- Implement `maps_column_view` abstraction over `LIST<STRUCT<K,V>>` ([#10380](https://github.com/rapidsai/cudf/pull/10380)) [@mythrocks](https://github.com/mythrocks)
- Support Java bindings for Avro reader ([#10373](https://github.com/rapidsai/cudf/pull/10373)) [@HaoYang670](https://github.com/HaoYang670)
- Refactor stream compaction APIs ([#10370](https://github.com/rapidsai/cudf/pull/10370)) [@PointKernel](https://github.com/PointKernel)
- Support collect aggregations in reduction ([#10353](https://github.com/rapidsai/cudf/pull/10353)) [@sperlingxx](https://github.com/sperlingxx)
- Refactor array_ufunc for Index and unify across all classes ([#10346](https://github.com/rapidsai/cudf/pull/10346)) [@vyasr](https://github.com/vyasr)
- Add JNI for extract_list_element with index column ([#10341](https://github.com/rapidsai/cudf/pull/10341)) [@firestarman](https://github.com/firestarman)
- Support `min` and `max` operations for structs in rolling window ([#10332](https://github.com/rapidsai/cudf/pull/10332)) [@ttnghia](https://github.com/ttnghia)
- Add device create_sequence_table for benchmarks ([#10300](https://github.com/rapidsai/cudf/pull/10300)) [@karthikeyann](https://github.com/karthikeyann)
- Enable numpy ufuncs for DataFrame ([#10287](https://github.com/rapidsai/cudf/pull/10287)) [@vyasr](https://github.com/vyasr)
- Move input generation for JSON benchmark to device ([#10281](https://github.com/rapidsai/cudf/pull/10281)) [@karthikeyann](https://github.com/karthikeyann)
- Move input generation for type dispatcher benchmark to device ([#10280](https://github.com/rapidsai/cudf/pull/10280)) [@karthikeyann](https://github.com/karthikeyann)
- Move input generation for copy benchmark to device ([#10279](https://github.com/rapidsai/cudf/pull/10279)) [@karthikeyann](https://github.com/karthikeyann)
- Generate URL decode benchmark input on device ([#10278](https://github.com/rapidsai/cudf/pull/10278)) [@karthikeyann](https://github.com/karthikeyann)
- Generate join benchmark input on device ([#10277](https://github.com/rapidsai/cudf/pull/10277)) [@karthikeyann](https://github.com/karthikeyann)
- Add nvtext::byte_pair_encoding API ([#10270](https://github.com/rapidsai/cudf/pull/10270)) [@davidwendt](https://github.com/davidwendt)
- Prevent internal usage of expensive APIs ([#10263](https://github.com/rapidsai/cudf/pull/10263)) [@vyasr](https://github.com/vyasr)
- Column to JCUDF row for tables with strings ([#10235](https://github.com/rapidsai/cudf/pull/10235)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Support `percent_rank()` aggregation ([#10227](https://github.com/rapidsai/cudf/pull/10227)) [@mythrocks](https://github.com/mythrocks)
- Refactor Series.__array_ufunc__ ([#10217](https://github.com/rapidsai/cudf/pull/10217)) [@vyasr](https://github.com/vyasr)
- Reduce pytest runtime ([#10203](https://github.com/rapidsai/cudf/pull/10203)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add regex flags parameter to python cudf strings split ([#10185](https://github.com/rapidsai/cudf/pull/10185)) [@davidwendt](https://github.com/davidwendt)
- Support for `MOD`, `PMOD` and `PYMOD` for `decimal32/64/128` ([#10179](https://github.com/rapidsai/cudf/pull/10179)) [@codereport](https://github.com/codereport)
- Adding string row size iterator for row to column and column to row conversion ([#10157](https://github.com/rapidsai/cudf/pull/10157)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add file size counter to cuIO benchmarks ([#10154](https://github.com/rapidsai/cudf/pull/10154)) [@vuule](https://github.com/vuule)
- byte_range support for multibyte_split/read_text ([#10150](https://github.com/rapidsai/cudf/pull/10150)) [@cwharris](https://github.com/cwharris)
- Add JNI for `strings::split_re` and `strings::split_record_re` ([#10139](https://github.com/rapidsai/cudf/pull/10139)) [@ttnghia](https://github.com/ttnghia)
- Add `maxSplit` parameter to Java binding for `strings:split` ([#10137](https://github.com/rapidsai/cudf/pull/10137)) [@ttnghia](https://github.com/ttnghia)
- Add libcudf strings split API that accepts regex pattern ([#10128](https://github.com/rapidsai/cudf/pull/10128)) [@davidwendt](https://github.com/davidwendt)
- Generate benchmark input on device ([#10109](https://github.com/rapidsai/cudf/pull/10109)) [@karthikeyann](https://github.com/karthikeyann)
- Avoid `nan_as_null` op if `nan_count` is 0 ([#10082](https://github.com/rapidsai/cudf/pull/10082)) [@galipremsagar](https://github.com/galipremsagar)
- Add Dataframe and Index nunique ([#10077](https://github.com/rapidsai/cudf/pull/10077)) [@martinfalisse](https://github.com/martinfalisse)
- Support nanosecond timestamps in parquet ([#10063](https://github.com/rapidsai/cudf/pull/10063)) [@PointKernel](https://github.com/PointKernel)
- Java bindings for mixed semi and anti joins ([#10040](https://github.com/rapidsai/cudf/pull/10040)) [@jlowe](https://github.com/jlowe)
- Implement mixed equality/conditional semi/anti joins ([#10037](https://github.com/rapidsai/cudf/pull/10037)) [@vyasr](https://github.com/vyasr)
- Optimize compaction operations ([#10030](https://github.com/rapidsai/cudf/pull/10030)) [@PointKernel](https://github.com/PointKernel)
- Support `args=` in `Series.apply` ([#9982](https://github.com/rapidsai/cudf/pull/9982)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add cudf::strings::findall_record API ([#9911](https://github.com/rapidsai/cudf/pull/9911)) [@davidwendt](https://github.com/davidwendt)
- Add covariance for sort groupby (python) ([#9889](https://github.com/rapidsai/cudf/pull/9889)) [@mayankanand007](https://github.com/mayankanand007)
- Implement DataFrame diff() ([#9817](https://github.com/rapidsai/cudf/pull/9817)) [@skirui-source](https://github.com/skirui-source)
- Implement DataFrame pct_change ([#9805](https://github.com/rapidsai/cudf/pull/9805)) [@skirui-source](https://github.com/skirui-source)
- Support segmented reductions and null mask reductions ([#9621](https://github.com/rapidsai/cudf/pull/9621)) [@isVoid](https://github.com/isVoid)
- Add 'spearman' correlation method for `DataFrame.corr` and `Series.corr` ([#7141](https://github.com/rapidsai/cudf/pull/7141)) [@dominicshanshan](https://github.com/dominicshanshan)

## π οΈ Improvements

- Add `scipy` skip for a test ([#10502](https://github.com/rapidsai/cudf/pull/10502)) [@galipremsagar](https://github.com/galipremsagar)
- Temporarily disable new `ops-bot` functionality ([#10496](https://github.com/rapidsai/cudf/pull/10496)) [@ajschmidt8](https://github.com/ajschmidt8)
- Include <cstddef> to fix compilation of parquet reader on GCC 11. ([#10483](https://github.com/rapidsai/cudf/pull/10483)) [@bdice](https://github.com/bdice)
- Pin `dask` and `distributed` ([#10481](https://github.com/rapidsai/cudf/pull/10481)) [@galipremsagar](https://github.com/galipremsagar)
- MD5 refactoring. ([#10445](https://github.com/rapidsai/cudf/pull/10445)) [@bdice](https://github.com/bdice)
- Remove or split up Frame methods that use the index ([#10439](https://github.com/rapidsai/cudf/pull/10439)) [@vyasr](https://github.com/vyasr)
- Centralization of tdigest aggregation code. ([#10422](https://github.com/rapidsai/cudf/pull/10422)) [@nvdbaranec](https://github.com/nvdbaranec)
- Simplify column binary operations ([#10421](https://github.com/rapidsai/cudf/pull/10421)) [@vyasr](https://github.com/vyasr)
- Add `.github/ops-bot.yaml` config file ([#10420](https://github.com/rapidsai/cudf/pull/10420)) [@ajschmidt8](https://github.com/ajschmidt8)
- Use list of columns for methods in `Groupby.pyx` ([#10419](https://github.com/rapidsai/cudf/pull/10419)) [@isVoid](https://github.com/isVoid)
- Remove warnings in `test_timedelta.py` ([#10418](https://github.com/rapidsai/cudf/pull/10418)) [@galipremsagar](https://github.com/galipremsagar)
- Fix some warnings in `test_parquet.py` ([#10416](https://github.com/rapidsai/cudf/pull/10416)) [@galipremsagar](https://github.com/galipremsagar)
- JNI support for segmented reduce ([#10413](https://github.com/rapidsai/cudf/pull/10413)) [@revans2](https://github.com/revans2)
- Clean up null mask after purging null entries ([#10412](https://github.com/rapidsai/cudf/pull/10412)) [@sperlingxx](https://github.com/sperlingxx)
- Drop unsupported method argument from nunique and distinct_count. ([#10411](https://github.com/rapidsai/cudf/pull/10411)) [@bdice](https://github.com/bdice)
- Use str instead of builtins.str. ([#10410](https://github.com/rapidsai/cudf/pull/10410)) [@bdice](https://github.com/bdice)
- Fix warnings in `test_rolling` ([#10405](https://github.com/rapidsai/cudf/pull/10405)) [@bdice](https://github.com/bdice)
- Enable `codecov` github-check in CI ([#10404](https://github.com/rapidsai/cudf/pull/10404)) [@galipremsagar](https://github.com/galipremsagar)
- Fix warnings in test_cuda_apply, test_numerical, test_pickling, test_unaops. ([#10402](https://github.com/rapidsai/cudf/pull/10402)) [@bdice](https://github.com/bdice)
- Set column names in `_from_columns_like_self` factory ([#10400](https://github.com/rapidsai/cudf/pull/10400)) [@isVoid](https://github.com/isVoid)
- Refactor `nvtx` annotations in `cudf` & `dask-cudf` ([#10396](https://github.com/rapidsai/cudf/pull/10396)) [@galipremsagar](https://github.com/galipremsagar)
- Consolidate .cov and .corr for sort groupby ([#10386](https://github.com/rapidsai/cudf/pull/10386)) [@skirui-source](https://github.com/skirui-source)
- Consolidate some Frame APIs ([#10381](https://github.com/rapidsai/cudf/pull/10381)) [@vyasr](https://github.com/vyasr)
- Refactor hash functions and `hash_combine` ([#10379](https://github.com/rapidsai/cudf/pull/10379)) [@bdice](https://github.com/bdice)
- Add `nvtx` annotations for `Series` and `Index` ([#10374](https://github.com/rapidsai/cudf/pull/10374)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor `filling.repeat` API ([#10371](https://github.com/rapidsai/cudf/pull/10371)) [@isVoid](https://github.com/isVoid)
- Move standalone UTF8 functions from string_view.hpp to utf8.hpp ([#10369](https://github.com/rapidsai/cudf/pull/10369)) [@davidwendt](https://github.com/davidwendt)
- Remove doc for deprecated function `one_hot_encoding` ([#10367](https://github.com/rapidsai/cudf/pull/10367)) [@isVoid](https://github.com/isVoid)
- Refactor array function ([#10364](https://github.com/rapidsai/cudf/pull/10364)) [@vyasr](https://github.com/vyasr)
- Fix warnings in test_csv.py. ([#10362](https://github.com/rapidsai/cudf/pull/10362)) [@bdice](https://github.com/bdice)
- Implement a mixin for binops ([#10360](https://github.com/rapidsai/cudf/pull/10360)) [@vyasr](https://github.com/vyasr)
- Refactor cython interface: `copying.pyx` ([#10359](https://github.com/rapidsai/cudf/pull/10359)) [@isVoid](https://github.com/isVoid)
- Implement a mixin for scans ([#10358](https://github.com/rapidsai/cudf/pull/10358)) [@vyasr](https://github.com/vyasr)
- Add scan_aggregation and reduce_aggregation derived types. ([#10357](https://github.com/rapidsai/cudf/pull/10357)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add cleanup of python artifacts ([#10355](https://github.com/rapidsai/cudf/pull/10355)) [@galipremsagar](https://github.com/galipremsagar)
- Fix warnings in test_categorical.py. ([#10354](https://github.com/rapidsai/cudf/pull/10354)) [@bdice](https://github.com/bdice)
- Create a dispatcher for invoking regex kernel functions ([#10349](https://github.com/rapidsai/cudf/pull/10349)) [@davidwendt](https://github.com/davidwendt)
- Fix `codecov` in CI ([#10347](https://github.com/rapidsai/cudf/pull/10347)) [@galipremsagar](https://github.com/galipremsagar)
- Enable caching for `memory_usage` calculation in `Column` ([#10345](https://github.com/rapidsai/cudf/pull/10345)) [@galipremsagar](https://github.com/galipremsagar)
- C++17 cleanup: traits replace std::enable_if<>::type with std::enable_if_t ([#10343](https://github.com/rapidsai/cudf/pull/10343)) [@karthikeyann](https://github.com/karthikeyann)
- JNI: Support appending DECIMAL128 into ColumnBuilder in terms of byte array ([#10338](https://github.com/rapidsai/cudf/pull/10338)) [@sperlingxx](https://github.com/sperlingxx)
- multibyte_split test improvements ([#10328](https://github.com/rapidsai/cudf/pull/10328)) [@vuule](https://github.com/vuule)
- Fix warnings in test_binops.py. ([#10327](https://github.com/rapidsai/cudf/pull/10327)) [@bdice](https://github.com/bdice)
- Fix warnings from pandas in test_array_ufunc.py. ([#10324](https://github.com/rapidsai/cudf/pull/10324)) [@bdice](https://github.com/bdice)
- Update upload script ([#10321](https://github.com/rapidsai/cudf/pull/10321)) [@ajschmidt8](https://github.com/ajschmidt8)
- Move hash type declarations to hashing.hpp ([#10320](https://github.com/rapidsai/cudf/pull/10320)) [@davidwendt](https://github.com/davidwendt)
- C++17 cleanup: traits replace `::value` with `_v` ([#10319](https://github.com/rapidsai/cudf/pull/10319)) [@karthikeyann](https://github.com/karthikeyann)
- Remove internal columns usage ([#10315](https://github.com/rapidsai/cudf/pull/10315)) [@vyasr](https://github.com/vyasr)
- Remove extraneous `build.sh` parameter ([#10313](https://github.com/rapidsai/cudf/pull/10313)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add const qualifier to MurmurHash3_32::hash_combine ([#10311](https://github.com/rapidsai/cudf/pull/10311)) [@davidwendt](https://github.com/davidwendt)
- Remove `TODO` in `libcudf_kafka` recipe ([#10309](https://github.com/rapidsai/cudf/pull/10309)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add conversions between column_view and device_span<T const>. ([#10302](https://github.com/rapidsai/cudf/pull/10302)) [@bdice](https://github.com/bdice)
- Avoid `decimal` type narrowing for decimal binops ([#10299](https://github.com/rapidsai/cudf/pull/10299)) [@galipremsagar](https://github.com/galipremsagar)
- Deprecate `DataFrame.iteritems` and introduce `.items` ([#10298](https://github.com/rapidsai/cudf/pull/10298)) [@galipremsagar](https://github.com/galipremsagar)
- Explicitly request CMake use `gnu++17` over `c++17` ([#10297](https://github.com/rapidsai/cudf/pull/10297)) [@robertmaynard](https://github.com/robertmaynard)
- Add copyright check as pre-commit hook. ([#10290](https://github.com/rapidsai/cudf/pull/10290)) [@vyasr](https://github.com/vyasr)
- DataFrame `insert` and creation optimizations ([#10285](https://github.com/rapidsai/cudf/pull/10285)) [@galipremsagar](https://github.com/galipremsagar)
- Improve hash join detail functions ([#10273](https://github.com/rapidsai/cudf/pull/10273)) [@PointKernel](https://github.com/PointKernel)
- Replace custom `cached_property` implementation with functools ([#10272](https://github.com/rapidsai/cudf/pull/10272)) [@shwina](https://github.com/shwina)
- Rewrites `sample` API ([#10262](https://github.com/rapidsai/cudf/pull/10262)) [@isVoid](https://github.com/isVoid)
- Bump hadoop-common from 3.1.0 to 3.1.4 in /java ([#10259](https://github.com/rapidsai/cudf/pull/10259)) [@dependabot[bot]](https://github.com/dependabot[bot])
- Remove making redundant `copy` across code-base ([#10257](https://github.com/rapidsai/cudf/pull/10257)) [@galipremsagar](https://github.com/galipremsagar)
- Add more `nvtx` annotations ([#10256](https://github.com/rapidsai/cudf/pull/10256)) [@galipremsagar](https://github.com/galipremsagar)
- Add `copyright` check in `cudf` ([#10253](https://github.com/rapidsai/cudf/pull/10253)) [@galipremsagar](https://github.com/galipremsagar)
- Remove redundant copies in `fillna` to improve performance ([#10241](https://github.com/rapidsai/cudf/pull/10241)) [@galipremsagar](https://github.com/galipremsagar)
- Remove `std::numeric_limit` specializations for timestamp & durations ([#10239](https://github.com/rapidsai/cudf/pull/10239)) [@codereport](https://github.com/codereport)
- Optimize `DataFrame` creation across code-base ([#10236](https://github.com/rapidsai/cudf/pull/10236)) [@galipremsagar](https://github.com/galipremsagar)
- Change pytest distribution algorithm and increase parallelism in CI ([#10232](https://github.com/rapidsai/cudf/pull/10232)) [@galipremsagar](https://github.com/galipremsagar)
- Add environment variables for I/O thread pool and slice sizes ([#10218](https://github.com/rapidsai/cudf/pull/10218)) [@vuule](https://github.com/vuule)
- Add regex flags to strings findall functions ([#10208](https://github.com/rapidsai/cudf/pull/10208)) [@davidwendt](https://github.com/davidwendt)
- Update dask-cudf parquet tests to reflect upstream bugfixes to `_metadata` ([#10206](https://github.com/rapidsai/cudf/pull/10206)) [@charlesbluca](https://github.com/charlesbluca)
- Remove unnecessary nunique function in Series. ([#10205](https://github.com/rapidsai/cudf/pull/10205)) [@martinfalisse](https://github.com/martinfalisse)
- Refactor DataFrame tests. ([#10204](https://github.com/rapidsai/cudf/pull/10204)) [@bdice](https://github.com/bdice)
- Rewrites `column.__setitem__`, Use `boolean_mask_scatter` ([#10202](https://github.com/rapidsai/cudf/pull/10202)) [@isVoid](https://github.com/isVoid)
- Java utilities to aid in accelerating aggregations on 128-bit types ([#10201](https://github.com/rapidsai/cudf/pull/10201)) [@jlowe](https://github.com/jlowe)
- Fix docstrings alignment in `Frame` methods ([#10199](https://github.com/rapidsai/cudf/pull/10199)) [@galipremsagar](https://github.com/galipremsagar)
- Fix cuco pair issue in hash join ([#10195](https://github.com/rapidsai/cudf/pull/10195)) [@PointKernel](https://github.com/PointKernel)
- Replace `dask` groupby `.index` usages with `.by` ([#10193](https://github.com/rapidsai/cudf/pull/10193)) [@galipremsagar](https://github.com/galipremsagar)
- Add regex flags to strings extract function ([#10192](https://github.com/rapidsai/cudf/pull/10192)) [@davidwendt](https://github.com/davidwendt)
- Forward-merge branch-22.02 to branch-22.04 ([#10191](https://github.com/rapidsai/cudf/pull/10191)) [@bdice](https://github.com/bdice)
- Add CMake `install` rule for tests ([#10190](https://github.com/rapidsai/cudf/pull/10190)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unpin `dask` & `distributed` ([#10182](https://github.com/rapidsai/cudf/pull/10182)) [@galipremsagar](https://github.com/galipremsagar)
- Add comments to explain test validation ([#10176](https://github.com/rapidsai/cudf/pull/10176)) [@galipremsagar](https://github.com/galipremsagar)
- Reduce warnings in pytest output ([#10168](https://github.com/rapidsai/cudf/pull/10168)) [@bdice](https://github.com/bdice)
- Some consolidation of indexed frame methods ([#10167](https://github.com/rapidsai/cudf/pull/10167)) [@vyasr](https://github.com/vyasr)
- Refactor isin implementations ([#10165](https://github.com/rapidsai/cudf/pull/10165)) [@vyasr](https://github.com/vyasr)
- Faster struct row comparator ([#10164](https://github.com/rapidsai/cudf/pull/10164)) [@devavret](https://github.com/devavret)
- Refactor groupby::get_groups. ([#10161](https://github.com/rapidsai/cudf/pull/10161)) [@bdice](https://github.com/bdice)
- Deprecate `decimal_cols_as_float` in ORC reader (C++ layer) ([#10152](https://github.com/rapidsai/cudf/pull/10152)) [@vuule](https://github.com/vuule)
- Replace `ccache` with `sccache` ([#10146](https://github.com/rapidsai/cudf/pull/10146)) [@ajschmidt8](https://github.com/ajschmidt8)
- Murmur3 hash kernel cleanup ([#10143](https://github.com/rapidsai/cudf/pull/10143)) [@rwlee](https://github.com/rwlee)
- Deprecate `decimal_cols_as_float` in ORC reader ([#10142](https://github.com/rapidsai/cudf/pull/10142)) [@galipremsagar](https://github.com/galipremsagar)
- Run pyupgrade 2.31.0. ([#10141](https://github.com/rapidsai/cudf/pull/10141)) [@bdice](https://github.com/bdice)
- Remove `drop_nan` from internal `IndexedFrame._drop_na_rows`. ([#10140](https://github.com/rapidsai/cudf/pull/10140)) [@bdice](https://github.com/bdice)
- Change cudf::strings::find_multiple to return a lists column ([#10134](https://github.com/rapidsai/cudf/pull/10134)) [@davidwendt](https://github.com/davidwendt)
- Update cmake-format script for branch 22.04. ([#10132](https://github.com/rapidsai/cudf/pull/10132)) [@bdice](https://github.com/bdice)
- Accept r-value references in convert_table_for_return(): ([#10131](https://github.com/rapidsai/cudf/pull/10131)) [@mythrocks](https://github.com/mythrocks)
- Remove the option to completely disable decimal128 columns in the ORC reader ([#10127](https://github.com/rapidsai/cudf/pull/10127)) [@vuule](https://github.com/vuule)
- Remove deprecated code ([#10124](https://github.com/rapidsai/cudf/pull/10124)) [@vyasr](https://github.com/vyasr)
- Update gpu_utils.py to reflect current CUDA support. ([#10113](https://github.com/rapidsai/cudf/pull/10113)) [@bdice](https://github.com/bdice)
- Remove benchmarks suffix ([#10112](https://github.com/rapidsai/cudf/pull/10112)) [@bdice](https://github.com/bdice)
- Update cudf java binding version to 22.04.0-SNAPSHOT ([#10084](https://github.com/rapidsai/cudf/pull/10084)) [@pxLi](https://github.com/pxLi)
- Remove unnecessary docker files. ([#10069](https://github.com/rapidsai/cudf/pull/10069)) [@vyasr](https://github.com/vyasr)
- Limit benchmark iterations using environment variable ([#10060](https://github.com/rapidsai/cudf/pull/10060)) [@karthikeyann](https://github.com/karthikeyann)
- Add timing chart for libcudf build metrics report page ([#10038](https://github.com/rapidsai/cudf/pull/10038)) [@davidwendt](https://github.com/davidwendt)
- JNI: Rewrite growBuffersAndRows to accelerate the HostColumnBuilder ([#10025](https://github.com/rapidsai/cudf/pull/10025)) [@sperlingxx](https://github.com/sperlingxx)
- Reduce redundant code in CUDF JNI ([#10019](https://github.com/rapidsai/cudf/pull/10019)) [@mythrocks](https://github.com/mythrocks)
- Make snappy decompress check more efficient ([#9995](https://github.com/rapidsai/cudf/pull/9995)) [@cheinger](https://github.com/cheinger)
- Remove deprecated method Series.set_index. ([#9945](https://github.com/rapidsai/cudf/pull/9945)) [@bdice](https://github.com/bdice)
- Implement a mixin for reductions ([#9925](https://github.com/rapidsai/cudf/pull/9925)) [@vyasr](https://github.com/vyasr)
- JNI: Push back decimal utils from spark-rapids ([#9907](https://github.com/rapidsai/cudf/pull/9907)) [@sperlingxx](https://github.com/sperlingxx)
- Add `assert_column_memory_*` ([#9882](https://github.com/rapidsai/cudf/pull/9882)) [@isVoid](https://github.com/isVoid)
- Add CUDF_UNREACHABLE macro. ([#9727](https://github.com/rapidsai/cudf/pull/9727)) [@bdice](https://github.com/bdice)
- Upgrade `arrow` & `pyarrow` to `6.0.1` ([#9686](https://github.com/rapidsai/cudf/pull/9686)) [@galipremsagar](https://github.com/galipremsagar)

# cuDF 22.02.00 (2 Feb 2022)

## 🚨 Breaking Changes

- ORC writer API changes for granular statistics ([#10058](https://github.com/rapidsai/cudf/pull/10058)) [@mythrocks](https://github.com/mythrocks)
- `decimal128` Support for `to/from_arrow` ([#9986](https://github.com/rapidsai/cudf/pull/9986)) [@codereport](https://github.com/codereport)
- Remove deprecated method `one_hot_encoding` ([#9977](https://github.com/rapidsai/cudf/pull/9977)) [@isVoid](https://github.com/isVoid)
- Remove str.subword_tokenize ([#9968](https://github.com/rapidsai/cudf/pull/9968)) [@VibhuJawa](https://github.com/VibhuJawa)
- Remove deprecated `method` parameter from `merge` and `join`. ([#9944](https://github.com/rapidsai/cudf/pull/9944)) [@bdice](https://github.com/bdice)
- Remove deprecated method DataFrame.hash_columns. ([#9943](https://github.com/rapidsai/cudf/pull/9943)) [@bdice](https://github.com/bdice)
- Remove deprecated method Series.hash_encode. ([#9942](https://github.com/rapidsai/cudf/pull/9942)) [@bdice](https://github.com/bdice)
- Refactoring ceil/round/floor code for datetime64 types ([#9926](https://github.com/rapidsai/cudf/pull/9926)) [@mayankanand007](https://github.com/mayankanand007)
- Introduce `nan_as_null` parameter for `cudf.Index` ([#9893](https://github.com/rapidsai/cudf/pull/9893)) [@galipremsagar](https://github.com/galipremsagar)
- Add regex_flags parameter to strings replace_re functions ([#9878](https://github.com/rapidsai/cudf/pull/9878)) [@davidwendt](https://github.com/davidwendt)
- Break tie for `top` categorical columns in `Series.describe` ([#9867](https://github.com/rapidsai/cudf/pull/9867)) [@isVoid](https://github.com/isVoid)
- Add partitioning support in parquet writer ([#9810](https://github.com/rapidsai/cudf/pull/9810)) [@devavret](https://github.com/devavret)
- Move `drop_duplicates`, `drop_na`, `_gather`, `take` to IndexFrame and create their `_base_index` counterparts ([#9807](https://github.com/rapidsai/cudf/pull/9807)) [@isVoid](https://github.com/isVoid)
- Raise temporary error for `decimal128` types in parquet reader ([#9804](https://github.com/rapidsai/cudf/pull/9804)) [@galipremsagar](https://github.com/galipremsagar)
- Change default `dtype` of all nulls column from `float` to `object` ([#9803](https://github.com/rapidsai/cudf/pull/9803)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unused masked udf cython/c++ code ([#9792](https://github.com/rapidsai/cudf/pull/9792)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Pick smallest decimal type with required precision in ORC reader ([#9775](https://github.com/rapidsai/cudf/pull/9775)) [@vuule](https://github.com/vuule)
- Add decimal128 support to Parquet reader and writer ([#9765](https://github.com/rapidsai/cudf/pull/9765)) [@vuule](https://github.com/vuule)
- Refactor TableTest assertion methods to a separate utility class ([#9762](https://github.com/rapidsai/cudf/pull/9762)) [@jlowe](https://github.com/jlowe)
- Use cuFile direct device reads/writes by default in cuIO ([#9722](https://github.com/rapidsai/cudf/pull/9722)) [@vuule](https://github.com/vuule)
- Match pandas scalar result types in reductions ([#9717](https://github.com/rapidsai/cudf/pull/9717)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add parameters to control row group size in Parquet writer ([#9677](https://github.com/rapidsai/cudf/pull/9677)) [@vuule](https://github.com/vuule)
- Refactor bit counting APIs, introduce valid/null count functions, and split host/device side code for segmented counts. ([#9588](https://github.com/rapidsai/cudf/pull/9588)) [@bdice](https://github.com/bdice)
- Add support for `decimal128` in cudf python ([#9533](https://github.com/rapidsai/cudf/pull/9533)) [@galipremsagar](https://github.com/galipremsagar)
- Implement `lists::index_of()` to find positions in list rows ([#9510](https://github.com/rapidsai/cudf/pull/9510)) [@mythrocks](https://github.com/mythrocks)
- Rewriting row/column conversions for Spark <-> cudf data conversions ([#8444](https://github.com/rapidsai/cudf/pull/8444)) [@hyperbolic2346](https://github.com/hyperbolic2346)

## 🐛 Bug Fixes

- Add check for negative stripe index in ORC reader ([#10074](https://github.com/rapidsai/cudf/pull/10074)) [@vuule](https://github.com/vuule)
- Update Java tests to expect DECIMAL128 from Arrow ([#10073](https://github.com/rapidsai/cudf/pull/10073)) [@jlowe](https://github.com/jlowe)
- Avoid index materialization when `DataFrame` is created with un-named `Series` objects ([#10071](https://github.com/rapidsai/cudf/pull/10071)) [@galipremsagar](https://github.com/galipremsagar)
- fix gcc 11 compilation errors ([#10067](https://github.com/rapidsai/cudf/pull/10067)) [@rongou](https://github.com/rongou)
- Fix `columns` ordering issue in parquet reader ([#10066](https://github.com/rapidsai/cudf/pull/10066)) [@galipremsagar](https://github.com/galipremsagar)
- Fix dataframe setitem with `ndarray` types ([#10056](https://github.com/rapidsai/cudf/pull/10056)) [@galipremsagar](https://github.com/galipremsagar)
- Remove implicit copy due to conversion from cudf::size_type and size_t ([#10045](https://github.com/rapidsai/cudf/pull/10045)) [@robertmaynard](https://github.com/robertmaynard)
- Include <optional> in headers that use std::optional ([#10044](https://github.com/rapidsai/cudf/pull/10044)) [@robertmaynard](https://github.com/robertmaynard)
- Fix repr and concat of `StructColumn` ([#10042](https://github.com/rapidsai/cudf/pull/10042)) [@galipremsagar](https://github.com/galipremsagar)
- Include row group level stats when writing ORC files ([#10041](https://github.com/rapidsai/cudf/pull/10041)) [@vuule](https://github.com/vuule)
- build.sh respects the `--build_metrics` and `--incl_cache_stats` flags ([#10035](https://github.com/rapidsai/cudf/pull/10035)) [@robertmaynard](https://github.com/robertmaynard)
- Fix memory leaks in JNI native code. ([#10029](https://github.com/rapidsai/cudf/pull/10029)) [@mythrocks](https://github.com/mythrocks)
- Update JNI to use new arena mr constructor ([#10027](https://github.com/rapidsai/cudf/pull/10027)) [@rongou](https://github.com/rongou)
- Fix null check when comparing structs in `arg_min` operation of reduction/groupby ([#10026](https://github.com/rapidsai/cudf/pull/10026)) [@ttnghia](https://github.com/ttnghia)
- Wrap CI script shell variables in quotes to fix local testing. ([#10018](https://github.com/rapidsai/cudf/pull/10018)) [@bdice](https://github.com/bdice)
- cudftestutil no longer propagates compiler flags to external users ([#10017](https://github.com/rapidsai/cudf/pull/10017)) [@robertmaynard](https://github.com/robertmaynard)
- Remove `CUDA_DEVICE_CALLABLE` macro usage ([#10015](https://github.com/rapidsai/cudf/pull/10015)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add missing list filling header in meta.yaml ([#10007](https://github.com/rapidsai/cudf/pull/10007)) [@devavret](https://github.com/devavret)
- Fix `conda` recipes for `custreamz` & `cudf_kafka` ([#10003](https://github.com/rapidsai/cudf/pull/10003)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix matching regex word-boundary (\b) in strings replace ([#9997](https://github.com/rapidsai/cudf/pull/9997)) [@davidwendt](https://github.com/davidwendt)
- Fix null check when comparing structs in `min` and `max` reduction/groupby operations ([#9994](https://github.com/rapidsai/cudf/pull/9994)) [@ttnghia](https://github.com/ttnghia)
- Fix octal pattern matching in regex string ([#9993](https://github.com/rapidsai/cudf/pull/9993)) [@davidwendt](https://github.com/davidwendt)
- `decimal128` Support for `to/from_arrow` ([#9986](https://github.com/rapidsai/cudf/pull/9986)) [@codereport](https://github.com/codereport)
- Fix groupby shift/diff/fill after selecting from a `GroupBy` ([#9984](https://github.com/rapidsai/cudf/pull/9984)) [@shwina](https://github.com/shwina)
- Fix the overflow problem of decimal rescale ([#9966](https://github.com/rapidsai/cudf/pull/9966)) [@sperlingxx](https://github.com/sperlingxx)
- Use default value for decimal precision in parquet writer when not specified ([#9963](https://github.com/rapidsai/cudf/pull/9963)) [@devavret](https://github.com/devavret)
- Fix cudf java build error. ([#9958](https://github.com/rapidsai/cudf/pull/9958)) [@firestarman](https://github.com/firestarman)
- Use gpuci_mamba_retry to install local artifacts. ([#9951](https://github.com/rapidsai/cudf/pull/9951)) [@bdice](https://github.com/bdice)
- Fix regression HostColumnVectorCore requiring native libs ([#9948](https://github.com/rapidsai/cudf/pull/9948)) [@jlowe](https://github.com/jlowe)
- Rename aggregate_metadata in writer to fix name collision ([#9938](https://github.com/rapidsai/cudf/pull/9938)) [@devavret](https://github.com/devavret)
- Fixed issue with percentile_approx where output tdigests could have uninitialized data at the end. ([#9931](https://github.com/rapidsai/cudf/pull/9931)) [@nvdbaranec](https://github.com/nvdbaranec)
- Resolve racecheck errors in ORC kernels ([#9916](https://github.com/rapidsai/cudf/pull/9916)) [@vuule](https://github.com/vuule)
- Fix the java build after parquet partitioning support ([#9908](https://github.com/rapidsai/cudf/pull/9908)) [@revans2](https://github.com/revans2)
- Fix compilation of benchmark for parquet writer. ([#9905](https://github.com/rapidsai/cudf/pull/9905)) [@bdice](https://github.com/bdice)
- Fix a memcheck error in ORC writer ([#9896](https://github.com/rapidsai/cudf/pull/9896)) [@vuule](https://github.com/vuule)
- Introduce `nan_as_null` parameter for `cudf.Index` ([#9893](https://github.com/rapidsai/cudf/pull/9893)) [@galipremsagar](https://github.com/galipremsagar)
- Fix fallback to sort aggregation for grouping only hash aggregate ([#9891](https://github.com/rapidsai/cudf/pull/9891)) [@abellina](https://github.com/abellina)
- Add zlib to cudfjni link when using static libcudf library dependency ([#9890](https://github.com/rapidsai/cudf/pull/9890)) [@jlowe](https://github.com/jlowe)
- TimedeltaIndex constructor raises an AttributeError. ([#9884](https://github.com/rapidsai/cudf/pull/9884)) [@skirui-source](https://github.com/skirui-source)
- Fix cudf.Scalar string datetime construction ([#9875](https://github.com/rapidsai/cudf/pull/9875)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Load libcufile.so with RTLD_NODELETE flag ([#9872](https://github.com/rapidsai/cudf/pull/9872)) [@vuule](https://github.com/vuule)
- Break tie for `top` categorical columns in `Series.describe` ([#9867](https://github.com/rapidsai/cudf/pull/9867)) [@isVoid](https://github.com/isVoid)
- Fix null handling for structs `min` and `arg_min` in groupby, groupby scan, reduction, and inclusive_scan ([#9864](https://github.com/rapidsai/cudf/pull/9864)) [@ttnghia](https://github.com/ttnghia)
- Add one-level list encoding support in parquet reader ([#9848](https://github.com/rapidsai/cudf/pull/9848)) [@PointKernel](https://github.com/PointKernel)
- Fix an out-of-bounds read in validity copying in contiguous_split. ([#9842](https://github.com/rapidsai/cudf/pull/9842)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix join of MultiIndex to Index with one column and overlapping name. ([#9830](https://github.com/rapidsai/cudf/pull/9830)) [@vyasr](https://github.com/vyasr)
- Fix caching in `Series.applymap` ([#9821](https://github.com/rapidsai/cudf/pull/9821)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Enforce boolean `ascending` for dask-cudf `sort_values` ([#9814](https://github.com/rapidsai/cudf/pull/9814)) [@charlesbluca](https://github.com/charlesbluca)
- Fix ORC writer crash with empty input columns ([#9808](https://github.com/rapidsai/cudf/pull/9808)) [@vuule](https://github.com/vuule)
- Change default `dtype` of all nulls column from `float` to `object` ([#9803](https://github.com/rapidsai/cudf/pull/9803)) [@galipremsagar](https://github.com/galipremsagar)
- Load native dependencies when Java ColumnView is loaded ([#9800](https://github.com/rapidsai/cudf/pull/9800)) [@jlowe](https://github.com/jlowe)
- Fix dtype-argument bug in dask_cudf read_csv ([#9796](https://github.com/rapidsai/cudf/pull/9796)) [@rjzamora](https://github.com/rjzamora)
- Fix overflow for min calculation in strings::from_timestamps ([#9793](https://github.com/rapidsai/cudf/pull/9793)) [@revans2](https://github.com/revans2)
- Fix memory error due to lambda return type deduction limitation ([#9778](https://github.com/rapidsai/cudf/pull/9778)) [@karthikeyann](https://github.com/karthikeyann)
- Revert regex $/EOL end-of-string new-line special case handling ([#9774](https://github.com/rapidsai/cudf/pull/9774)) [@davidwendt](https://github.com/davidwendt)
- Fix missing streams ([#9767](https://github.com/rapidsai/cudf/pull/9767)) [@karthikeyann](https://github.com/karthikeyann)
- Fix make_empty_scalar_like on list_type ([#9759](https://github.com/rapidsai/cudf/pull/9759)) [@sperlingxx](https://github.com/sperlingxx)
- Update cmake and conda to 22.02 ([#9746](https://github.com/rapidsai/cudf/pull/9746)) [@devavret](https://github.com/devavret)
- Fix out-of-bounds memory write in decimal128-to-string conversion ([#9740](https://github.com/rapidsai/cudf/pull/9740)) [@davidwendt](https://github.com/davidwendt)
- Match pandas scalar result types in reductions ([#9717](https://github.com/rapidsai/cudf/pull/9717)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix regex non-multiline EOL/$ matching strings ending with a new-line ([#9715](https://github.com/rapidsai/cudf/pull/9715)) [@davidwendt](https://github.com/davidwendt)
- Fixed build by adding more checks for int8, int16 ([#9707](https://github.com/rapidsai/cudf/pull/9707)) [@razajafri](https://github.com/razajafri)
- Fix `null` handling when `boolean` dtype is passed ([#9691](https://github.com/rapidsai/cudf/pull/9691)) [@galipremsagar](https://github.com/galipremsagar)
- Fix stream usage in `segmented_gather()` ([#9679](https://github.com/rapidsai/cudf/pull/9679)) [@mythrocks](https://github.com/mythrocks)

## 📖 Documentation

- Update `decimal` dtypes related docs entries ([#10072](https://github.com/rapidsai/cudf/pull/10072)) [@galipremsagar](https://github.com/galipremsagar)
- Fix regex doc describing hexadecimal escape characters ([#10009](https://github.com/rapidsai/cudf/pull/10009)) [@davidwendt](https://github.com/davidwendt)
- Fix cudf compilation instructions. ([#9956](https://github.com/rapidsai/cudf/pull/9956)) [@esoha-nvidia](https://github.com/esoha-nvidia)
- Fix see also links for IO APIs ([#9895](https://github.com/rapidsai/cudf/pull/9895)) [@galipremsagar](https://github.com/galipremsagar)
- Fix build instructions for libcudf doxygen ([#9837](https://github.com/rapidsai/cudf/pull/9837)) [@davidwendt](https://github.com/davidwendt)
- Fix some doxygen warnings and add missing documentation ([#9770](https://github.com/rapidsai/cudf/pull/9770)) [@karthikeyann](https://github.com/karthikeyann)
- update cuda version in local build ([#9736](https://github.com/rapidsai/cudf/pull/9736)) [@karthikeyann](https://github.com/karthikeyann)
- Fix doxygen for enum types in libcudf ([#9724](https://github.com/rapidsai/cudf/pull/9724)) [@davidwendt](https://github.com/davidwendt)
- Spell check fixes ([#9682](https://github.com/rapidsai/cudf/pull/9682)) [@karthikeyann](https://github.com/karthikeyann)
- Fix links in C++ Developer Guide. ([#9675](https://github.com/rapidsai/cudf/pull/9675)) [@bdice](https://github.com/bdice)

## 🚀 New Features

- Remove libcudacxx patch needed for nvcc 11.4 ([#10057](https://github.com/rapidsai/cudf/pull/10057)) [@robertmaynard](https://github.com/robertmaynard)
- Allow CuPy 10 ([#10048](https://github.com/rapidsai/cudf/pull/10048)) [@jakirkham](https://github.com/jakirkham)
- Add in support for NULL_LOGICAL_AND and NULL_LOGICAL_OR binops ([#10016](https://github.com/rapidsai/cudf/pull/10016)) [@revans2](https://github.com/revans2)
- Add `groupby.transform` (only support for aggregations) ([#10005](https://github.com/rapidsai/cudf/pull/10005)) [@shwina](https://github.com/shwina)
- Add partitioning support to Parquet chunked writer ([#10000](https://github.com/rapidsai/cudf/pull/10000)) [@devavret](https://github.com/devavret)
- Add jni for sequences ([#9972](https://github.com/rapidsai/cudf/pull/9972)) [@wbo4958](https://github.com/wbo4958)
- Java bindings for mixed left, inner, and full joins ([#9941](https://github.com/rapidsai/cudf/pull/9941)) [@jlowe](https://github.com/jlowe)
- Java bindings for JSON reader support ([#9940](https://github.com/rapidsai/cudf/pull/9940)) [@wbo4958](https://github.com/wbo4958)
- Enable transpose for string columns in cudf python ([#9937](https://github.com/rapidsai/cudf/pull/9937)) [@galipremsagar](https://github.com/galipremsagar)
- Support structs for `cudf::contains` with column/scalar input ([#9929](https://github.com/rapidsai/cudf/pull/9929)) [@ttnghia](https://github.com/ttnghia)
- Implement mixed equality/conditional joins ([#9917](https://github.com/rapidsai/cudf/pull/9917)) [@vyasr](https://github.com/vyasr)
- Add cudf::strings::extract_all API ([#9909](https://github.com/rapidsai/cudf/pull/9909)) [@davidwendt](https://github.com/davidwendt)
- Implement JNI for `cudf::scatter` APIs ([#9903](https://github.com/rapidsai/cudf/pull/9903)) [@ttnghia](https://github.com/ttnghia)
- JNI: Function to copy and set validity from bool column. ([#9901](https://github.com/rapidsai/cudf/pull/9901)) [@mythrocks](https://github.com/mythrocks)
- Add dictionary support to cudf::copy_if_else ([#9887](https://github.com/rapidsai/cudf/pull/9887)) [@davidwendt](https://github.com/davidwendt)
- add run_benchmarks target for running benchmarks with json output ([#9879](https://github.com/rapidsai/cudf/pull/9879)) [@karthikeyann](https://github.com/karthikeyann)
- Add regex_flags parameter to strings replace_re functions ([#9878](https://github.com/rapidsai/cudf/pull/9878)) [@davidwendt](https://github.com/davidwendt)
- Add_suffix and add_prefix for DataFrames and Series ([#9846](https://github.com/rapidsai/cudf/pull/9846)) [@mayankanand007](https://github.com/mayankanand007)
- Add JNI for `cudf::drop_duplicates` ([#9841](https://github.com/rapidsai/cudf/pull/9841)) [@ttnghia](https://github.com/ttnghia)
- Implement per-list sequence ([#9839](https://github.com/rapidsai/cudf/pull/9839)) [@ttnghia](https://github.com/ttnghia)
- adding `series.transpose` ([#9835](https://github.com/rapidsai/cudf/pull/9835)) [@mayankanand007](https://github.com/mayankanand007)
- Adding support for `Series.autocorr` ([#9833](https://github.com/rapidsai/cudf/pull/9833)) [@mayankanand007](https://github.com/mayankanand007)
- Support round operation on datetime64 datatypes ([#9820](https://github.com/rapidsai/cudf/pull/9820)) [@mayankanand007](https://github.com/mayankanand007)
- Add partitioning support in parquet writer ([#9810](https://github.com/rapidsai/cudf/pull/9810)) [@devavret](https://github.com/devavret)
- Raise temporary error for `decimal128` types in parquet reader ([#9804](https://github.com/rapidsai/cudf/pull/9804)) [@galipremsagar](https://github.com/galipremsagar)
- Add decimal128 support to Parquet reader and writer ([#9765](https://github.com/rapidsai/cudf/pull/9765)) [@vuule](https://github.com/vuule)
- Optimize `groupby::scan` ([#9754](https://github.com/rapidsai/cudf/pull/9754)) [@PointKernel](https://github.com/PointKernel)
- Add sample JNI API ([#9728](https://github.com/rapidsai/cudf/pull/9728)) [@res-life](https://github.com/res-life)
- Support `min` and `max` in inclusive scan for structs ([#9725](https://github.com/rapidsai/cudf/pull/9725)) [@ttnghia](https://github.com/ttnghia)
- Add `first` and `last` method to `IndexedFrame` ([#9710](https://github.com/rapidsai/cudf/pull/9710)) [@isVoid](https://github.com/isVoid)
- Support `min` and `max` reduction for structs ([#9697](https://github.com/rapidsai/cudf/pull/9697)) [@ttnghia](https://github.com/ttnghia)
- Add parameters to control row group size in Parquet writer ([#9677](https://github.com/rapidsai/cudf/pull/9677)) [@vuule](https://github.com/vuule)
- Run compute-sanitizer in nightly build ([#9641](https://github.com/rapidsai/cudf/pull/9641)) [@karthikeyann](https://github.com/karthikeyann)
- Implement Series.datetime.floor ([#9571](https://github.com/rapidsai/cudf/pull/9571)) [@skirui-source](https://github.com/skirui-source)
- ceil/floor for `DatetimeIndex` ([#9554](https://github.com/rapidsai/cudf/pull/9554)) [@mayankanand007](https://github.com/mayankanand007)
- Add support for `decimal128` in cudf python ([#9533](https://github.com/rapidsai/cudf/pull/9533)) [@galipremsagar](https://github.com/galipremsagar)
- Implement `lists::index_of()` to find positions in list rows ([#9510](https://github.com/rapidsai/cudf/pull/9510)) [@mythrocks](https://github.com/mythrocks)
- custreamz oauth callback for kafka (librdkafka) ([#9486](https://github.com/rapidsai/cudf/pull/9486)) [@jdye64](https://github.com/jdye64)
- Add Pearson correlation for sort groupby (python) ([#9166](https://github.com/rapidsai/cudf/pull/9166)) [@skirui-source](https://github.com/skirui-source)
- Interchange dataframe protocol ([#9071](https://github.com/rapidsai/cudf/pull/9071)) [@iskode](https://github.com/iskode)
- Rewriting row/column conversions for Spark <-> cudf data conversions ([#8444](https://github.com/rapidsai/cudf/pull/8444)) [@hyperbolic2346](https://github.com/hyperbolic2346)

## 🛠️ Improvements

- Prepare upload scripts for Python 3.7 removal ([#10092](https://github.com/rapidsai/cudf/pull/10092)) [@Ethyling](https://github.com/Ethyling)
- Simplify custreamz and cudf_kafka recipes files ([#10065](https://github.com/rapidsai/cudf/pull/10065)) [@Ethyling](https://github.com/Ethyling)
- ORC writer API changes for granular statistics ([#10058](https://github.com/rapidsai/cudf/pull/10058)) [@mythrocks](https://github.com/mythrocks)
- Remove python constraints in custreamz and cudf_kafka recipes ([#10052](https://github.com/rapidsai/cudf/pull/10052)) [@Ethyling](https://github.com/Ethyling)
- Unpin `dask` and `distributed` in CI ([#10028](https://github.com/rapidsai/cudf/pull/10028)) [@galipremsagar](https://github.com/galipremsagar)
- Add `_from_column_like_self` factory ([#10022](https://github.com/rapidsai/cudf/pull/10022)) [@isVoid](https://github.com/isVoid)
- Replace custom CUDA bindings previously provided by RMM with official CUDA Python bindings ([#10008](https://github.com/rapidsai/cudf/pull/10008)) [@shwina](https://github.com/shwina)
- Use `cuda::std::is_arithmetic` in `cudf::is_numeric` trait. ([#9996](https://github.com/rapidsai/cudf/pull/9996)) [@bdice](https://github.com/bdice)
- Clean up CUDA stream use in cuIO ([#9991](https://github.com/rapidsai/cudf/pull/9991)) [@vuule](https://github.com/vuule)
- Use addressed-ordered first fit for the pinned memory pool ([#9989](https://github.com/rapidsai/cudf/pull/9989)) [@rongou](https://github.com/rongou)
- Add strings tests to transpose_test.cpp ([#9985](https://github.com/rapidsai/cudf/pull/9985)) [@davidwendt](https://github.com/davidwendt)
- Use gpuci_mamba_retry on Java CI. ([#9983](https://github.com/rapidsai/cudf/pull/9983)) [@bdice](https://github.com/bdice)
- Remove deprecated method `one_hot_encoding` ([#9977](https://github.com/rapidsai/cudf/pull/9977)) [@isVoid](https://github.com/isVoid)
- Minor cleanup of unused Python functions ([#9974](https://github.com/rapidsai/cudf/pull/9974)) [@vyasr](https://github.com/vyasr)
- Use new efficient partitioned parquet writing in cuDF ([#9971](https://github.com/rapidsai/cudf/pull/9971)) [@devavret](https://github.com/devavret)
- Remove str.subword_tokenize ([#9968](https://github.com/rapidsai/cudf/pull/9968)) [@VibhuJawa](https://github.com/VibhuJawa)
- Forward-merge branch-21.12 to branch-22.02 ([#9947](https://github.com/rapidsai/cudf/pull/9947)) [@bdice](https://github.com/bdice)
- Remove deprecated `method` parameter from `merge` and `join`. ([#9944](https://github.com/rapidsai/cudf/pull/9944)) [@bdice](https://github.com/bdice)
- Remove deprecated method DataFrame.hash_columns. ([#9943](https://github.com/rapidsai/cudf/pull/9943)) [@bdice](https://github.com/bdice)
- Remove deprecated method Series.hash_encode. ([#9942](https://github.com/rapidsai/cudf/pull/9942)) [@bdice](https://github.com/bdice)
- use ninja in java ci build ([#9933](https://github.com/rapidsai/cudf/pull/9933)) [@rongou](https://github.com/rongou)
- Add build-time publish step to cpu build script ([#9927](https://github.com/rapidsai/cudf/pull/9927)) [@davidwendt](https://github.com/davidwendt)
- Refactoring ceil/round/floor code for datetime64 types ([#9926](https://github.com/rapidsai/cudf/pull/9926)) [@mayankanand007](https://github.com/mayankanand007)
- Remove various unused functions ([#9922](https://github.com/rapidsai/cudf/pull/9922)) [@vyasr](https://github.com/vyasr)
- Raise in `query` if dtype is not supported ([#9921](https://github.com/rapidsai/cudf/pull/9921)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add missing imports tests ([#9920](https://github.com/rapidsai/cudf/pull/9920)) [@Ethyling](https://github.com/Ethyling)
- Spark Decimal128 hashing ([#9919](https://github.com/rapidsai/cudf/pull/9919)) [@rwlee](https://github.com/rwlee)
- Replace `thrust/std::get` with structured bindings ([#9915](https://github.com/rapidsai/cudf/pull/9915)) [@codereport](https://github.com/codereport)
- Upgrade thrust version to 1.15 ([#9912](https://github.com/rapidsai/cudf/pull/9912)) [@robertmaynard](https://github.com/robertmaynard)
- Remove conda envs for CUDA 11.0 and 11.2. ([#9910](https://github.com/rapidsai/cudf/pull/9910)) [@bdice](https://github.com/bdice)
- Return count of set bits from inplace_bitmask_and. ([#9904](https://github.com/rapidsai/cudf/pull/9904)) [@bdice](https://github.com/bdice)
- Use dynamic nullate for join hasher and equality comparator ([#9902](https://github.com/rapidsai/cudf/pull/9902)) [@davidwendt](https://github.com/davidwendt)
- Update ucx-py version on release using rvc ([#9897](https://github.com/rapidsai/cudf/pull/9897)) [@Ethyling](https://github.com/Ethyling)
- Remove `IncludeCategories` from `.clang-format` ([#9876](https://github.com/rapidsai/cudf/pull/9876)) [@codereport](https://github.com/codereport)
- Support statically linking CUDA runtime for Java bindings ([#9873](https://github.com/rapidsai/cudf/pull/9873)) [@jlowe](https://github.com/jlowe)
- Add `clang-tidy` to libcudf ([#9860](https://github.com/rapidsai/cudf/pull/9860)) [@codereport](https://github.com/codereport)
- Remove deprecated methods from Java Table class ([#9853](https://github.com/rapidsai/cudf/pull/9853)) [@jlowe](https://github.com/jlowe)
- Add test for map column metadata handling in ORC writer ([#9852](https://github.com/rapidsai/cudf/pull/9852)) [@vuule](https://github.com/vuule)
- Use pandas `to_offset` to parse frequency string in `date_range` ([#9843](https://github.com/rapidsai/cudf/pull/9843)) [@isVoid](https://github.com/isVoid)
- add templated benchmark with fixture ([#9838](https://github.com/rapidsai/cudf/pull/9838)) [@karthikeyann](https://github.com/karthikeyann)
- Use list of column inputs for `apply_boolean_mask` ([#9832](https://github.com/rapidsai/cudf/pull/9832)) [@isVoid](https://github.com/isVoid)
- Added a few more tests for Decimal to String cast ([#9818](https://github.com/rapidsai/cudf/pull/9818)) [@razajafri](https://github.com/razajafri)
- Run doctests. ([#9815](https://github.com/rapidsai/cudf/pull/9815)) [@bdice](https://github.com/bdice)
- Avoid overflow for fixed_point round ([#9809](https://github.com/rapidsai/cudf/pull/9809)) [@sperlingxx](https://github.com/sperlingxx)
- Move `drop_duplicates`, `drop_na`, `_gather`, `take` to IndexFrame and create their `_base_index` counterparts ([#9807](https://github.com/rapidsai/cudf/pull/9807)) [@isVoid](https://github.com/isVoid)
- Use vector factories for host-device copies. ([#9806](https://github.com/rapidsai/cudf/pull/9806)) [@bdice](https://github.com/bdice)
- Refactor host device macros ([#9797](https://github.com/rapidsai/cudf/pull/9797)) [@vyasr](https://github.com/vyasr)
- Remove unused masked udf cython/c++ code ([#9792](https://github.com/rapidsai/cudf/pull/9792)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Allow custom sort functions for dask-cudf `sort_values` ([#9789](https://github.com/rapidsai/cudf/pull/9789)) [@charlesbluca](https://github.com/charlesbluca)
- Improve build time of libcudf iterator tests ([#9788](https://github.com/rapidsai/cudf/pull/9788)) [@davidwendt](https://github.com/davidwendt)
- Copy Java native dependencies directly into classpath ([#9787](https://github.com/rapidsai/cudf/pull/9787)) [@jlowe](https://github.com/jlowe)
- Add decimal types to cuIO benchmarks ([#9776](https://github.com/rapidsai/cudf/pull/9776)) [@vuule](https://github.com/vuule)
- Pick smallest decimal type with required precision in ORC reader ([#9775](https://github.com/rapidsai/cudf/pull/9775)) [@vuule](https://github.com/vuule)
- Avoid overflow for `fixed_point` `cudf::cast` and performance optimization ([#9772](https://github.com/rapidsai/cudf/pull/9772)) [@codereport](https://github.com/codereport)
- Use CTAD with Thrust function objects ([#9768](https://github.com/rapidsai/cudf/pull/9768)) [@codereport](https://github.com/codereport)
- Refactor TableTest assertion methods to a separate utility class ([#9762](https://github.com/rapidsai/cudf/pull/9762)) [@jlowe](https://github.com/jlowe)
- Use Java classloader to find test resources ([#9760](https://github.com/rapidsai/cudf/pull/9760)) [@jlowe](https://github.com/jlowe)
- Allow cast decimal128 to string and add tests ([#9756](https://github.com/rapidsai/cudf/pull/9756)) [@razajafri](https://github.com/razajafri)
- Load balance optimization for contiguous_split ([#9755](https://github.com/rapidsai/cudf/pull/9755)) [@nvdbaranec](https://github.com/nvdbaranec)
- Consolidate and improve `reset_index` ([#9750](https://github.com/rapidsai/cudf/pull/9750)) [@isVoid](https://github.com/isVoid)
- Update to UCX-Py 0.24 ([#9748](https://github.com/rapidsai/cudf/pull/9748)) [@pentschev](https://github.com/pentschev)
- Skip cufile tests in JNI build script ([#9744](https://github.com/rapidsai/cudf/pull/9744)) [@pxLi](https://github.com/pxLi)
- Enable string to decimal 128 cast ([#9742](https://github.com/rapidsai/cudf/pull/9742)) [@razajafri](https://github.com/razajafri)
- Use stop instead of stop_. ([#9735](https://github.com/rapidsai/cudf/pull/9735)) [@bdice](https://github.com/bdice)
- Forward-merge branch-21.12 to branch-22.02 ([#9730](https://github.com/rapidsai/cudf/pull/9730)) [@bdice](https://github.com/bdice)
- Improve cmake format script ([#9723](https://github.com/rapidsai/cudf/pull/9723)) [@vyasr](https://github.com/vyasr)
- Use cuFile direct device reads/writes by default in cuIO ([#9722](https://github.com/rapidsai/cudf/pull/9722)) [@vuule](https://github.com/vuule)
- Add directory-partitioned data support to cudf.read_parquet ([#9720](https://github.com/rapidsai/cudf/pull/9720)) [@rjzamora](https://github.com/rjzamora)
- Use stream allocator adaptor for hash join table ([#9704](https://github.com/rapidsai/cudf/pull/9704)) [@PointKernel](https://github.com/PointKernel)
- Update check for inf/nan strings in libcudf float conversion to ignore case ([#9694](https://github.com/rapidsai/cudf/pull/9694)) [@davidwendt](https://github.com/davidwendt)
- Update cudf JNI to 22.02.0-SNAPSHOT ([#9681](https://github.com/rapidsai/cudf/pull/9681)) [@pxLi](https://github.com/pxLi)
- Replace cudf's concurrent_ordered_map with cuco::static_map in semi/anti joins ([#9666](https://github.com/rapidsai/cudf/pull/9666)) [@vyasr](https://github.com/vyasr)
- Some improvements to `parse_decimal` function and bindings for `is_fixed_point` ([#9658](https://github.com/rapidsai/cudf/pull/9658)) [@razajafri](https://github.com/razajafri)
- Add utility to format ninja-log build times ([#9631](https://github.com/rapidsai/cudf/pull/9631)) [@davidwendt](https://github.com/davidwendt)
- Allow runtime has_nulls parameter for row operators ([#9623](https://github.com/rapidsai/cudf/pull/9623)) [@davidwendt](https://github.com/davidwendt)
- Use fsspec.parquet for improved read_parquet performance from remote storage ([#9589](https://github.com/rapidsai/cudf/pull/9589)) [@rjzamora](https://github.com/rjzamora)
- Refactor bit counting APIs, introduce valid/null count functions, and split host/device side code for segmented counts. ([#9588](https://github.com/rapidsai/cudf/pull/9588)) [@bdice](https://github.com/bdice)
- Use List of Columns as Input for `drop_nulls`, `gather` and `drop_duplicates` ([#9558](https://github.com/rapidsai/cudf/pull/9558)) [@isVoid](https://github.com/isVoid)
- Simplify merge internals and reduce overhead ([#9516](https://github.com/rapidsai/cudf/pull/9516)) [@vyasr](https://github.com/vyasr)
- Add `struct` generation support in datagenerator & fuzz tests ([#9180](https://github.com/rapidsai/cudf/pull/9180)) [@galipremsagar](https://github.com/galipremsagar)
- Simplify write_csv by removing unnecessary writer/impl classes ([#9089](https://github.com/rapidsai/cudf/pull/9089)) [@cwharris](https://github.com/cwharris)

# cuDF 21.12.00 (9 Dec 2021)

## 🚨 Breaking Changes

- Update `bitmask_and` and `bitmask_or` to return a pair of resulting mask and count of unset bits ([#9616](https://github.com/rapidsai/cudf/pull/9616)) [@PointKernel](https://github.com/PointKernel)
- Remove sizeof and standardize on memory_usage ([#9544](https://github.com/rapidsai/cudf/pull/9544)) [@vyasr](https://github.com/vyasr)
- Add support for single-line regex anchors ^/$ in contains_re ([#9482](https://github.com/rapidsai/cudf/pull/9482)) [@davidwendt](https://github.com/davidwendt)
- Refactor sorting APIs ([#9464](https://github.com/rapidsai/cudf/pull/9464)) [@vyasr](https://github.com/vyasr)
- Update Java nvcomp JNI bindings to nvcomp 2.x API ([#9384](https://github.com/rapidsai/cudf/pull/9384)) [@jbrennan333](https://github.com/jbrennan333)
- Support Python UDFs written in terms of rows ([#9343](https://github.com/rapidsai/cudf/pull/9343)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- JNI: Support nested types in ORC writer ([#9334](https://github.com/rapidsai/cudf/pull/9334)) [@firestarman](https://github.com/firestarman)
- Optionally nullify out-of-bounds indices in segmented_gather(). ([#9318](https://github.com/rapidsai/cudf/pull/9318)) [@mythrocks](https://github.com/mythrocks)
- Refactor cuIO timestamp processing with `cuda::std::chrono` ([#9278](https://github.com/rapidsai/cudf/pull/9278)) [@PointKernel](https://github.com/PointKernel)
- Various internal MultiIndex improvements ([#9243](https://github.com/rapidsai/cudf/pull/9243)) [@vyasr](https://github.com/vyasr)

## 🐛 Bug Fixes

- Fix read_parquet bug for bytes input ([#9669](https://github.com/rapidsai/cudf/pull/9669)) [@rjzamora](https://github.com/rjzamora)
- Use `_gather` internal for `sort_*` ([#9668](https://github.com/rapidsai/cudf/pull/9668)) [@isVoid](https://github.com/isVoid)
- Fix behavior of equals for non-DataFrame Frames and add tests. ([#9653](https://github.com/rapidsai/cudf/pull/9653)) [@vyasr](https://github.com/vyasr)
- Don't recompute output size if it is already available ([#9649](https://github.com/rapidsai/cudf/pull/9649)) [@abellina](https://github.com/abellina)
- Fix read_parquet bug for extended dtypes from remote storage ([#9638](https://github.com/rapidsai/cudf/pull/9638)) [@rjzamora](https://github.com/rjzamora)
- add const when getting data from a JNI data wrapper ([#9637](https://github.com/rapidsai/cudf/pull/9637)) [@wjxiz1992](https://github.com/wjxiz1992)
- Fix debrotli issue on CUDA 11.5 ([#9632](https://github.com/rapidsai/cudf/pull/9632)) [@vuule](https://github.com/vuule)
- Use std::size_t when computing join output size ([#9626](https://github.com/rapidsai/cudf/pull/9626)) [@jlowe](https://github.com/jlowe)
- Fix `usecols` parameter handling in `dask_cudf.read_csv` ([#9618](https://github.com/rapidsai/cudf/pull/9618)) [@galipremsagar](https://github.com/galipremsagar)
- Add support for string `'nan', 'inf' & '-inf'` values while type-casting to `float` ([#9613](https://github.com/rapidsai/cudf/pull/9613)) [@galipremsagar](https://github.com/galipremsagar)
- Avoid passing NativeFileDatasource to pyarrow in read_parquet ([#9608](https://github.com/rapidsai/cudf/pull/9608)) [@rjzamora](https://github.com/rjzamora)
- Fix test failure with cuda 11.5 in row_bit_count tests. ([#9581](https://github.com/rapidsai/cudf/pull/9581)) [@nvdbaranec](https://github.com/nvdbaranec)
- Correct _LIBCUDACXX_CUDACC_VER value computation ([#9579](https://github.com/rapidsai/cudf/pull/9579)) [@robertmaynard](https://github.com/robertmaynard)
- Increase max RLE stream size estimate to avoid potential overflows ([#9568](https://github.com/rapidsai/cudf/pull/9568)) [@vuule](https://github.com/vuule)
- Fix edge case in tdigest scalar generation for groups containing all nulls. ([#9551](https://github.com/rapidsai/cudf/pull/9551)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix pytests failing in `cuda-11.5` environment ([#9547](https://github.com/rapidsai/cudf/pull/9547)) [@galipremsagar](https://github.com/galipremsagar)
- compile libnvcomp with PTDS if requested ([#9540](https://github.com/rapidsai/cudf/pull/9540)) [@jbrennan333](https://github.com/jbrennan333)
- Fix `segmented_gather()` for null LIST rows ([#9537](https://github.com/rapidsai/cudf/pull/9537)) [@mythrocks](https://github.com/mythrocks)
- Deprecate DataFrame.label_encoding, use private _label_encoding method internally. ([#9535](https://github.com/rapidsai/cudf/pull/9535)) [@bdice](https://github.com/bdice)
- Fix several test and benchmark issues related to bitmask allocations. ([#9521](https://github.com/rapidsai/cudf/pull/9521)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix for inserting duplicates in groupby result cache ([#9508](https://github.com/rapidsai/cudf/pull/9508)) [@karthikeyann](https://github.com/karthikeyann)
- Fix mismatched types error in clip() when using non int64 numeric types ([#9498](https://github.com/rapidsai/cudf/pull/9498)) [@davidwendt](https://github.com/davidwendt)
- Match conda pinnings for style checks (revert part of #9412, #9433). ([#9490](https://github.com/rapidsai/cudf/pull/9490)) [@bdice](https://github.com/bdice)
- Make sure all dask-cudf supported aggs are handled in `_tree_node_agg` ([#9487](https://github.com/rapidsai/cudf/pull/9487)) [@charlesbluca](https://github.com/charlesbluca)
- Resolve `hash_columns` `FutureWarning` in `dask_cudf` ([#9481](https://github.com/rapidsai/cudf/pull/9481)) [@pentschev](https://github.com/pentschev)
- Add fixed point to AllTypes in libcudf unit tests ([#9472](https://github.com/rapidsai/cudf/pull/9472)) [@karthikeyann](https://github.com/karthikeyann)
- Fix regex handling of embedded null characters ([#9470](https://github.com/rapidsai/cudf/pull/9470)) [@davidwendt](https://github.com/davidwendt)
- Fix memcheck error in copy-if-else ([#9467](https://github.com/rapidsai/cudf/pull/9467)) [@davidwendt](https://github.com/davidwendt)
- Fix bug in dask_cudf.read_parquet for index=False ([#9453](https://github.com/rapidsai/cudf/pull/9453)) [@rjzamora](https://github.com/rjzamora)
- Preserve the decimal scale when creating a default scalar ([#9449](https://github.com/rapidsai/cudf/pull/9449)) [@revans2](https://github.com/revans2)
- Push down parent nulls when flattening nested columns. ([#9443](https://github.com/rapidsai/cudf/pull/9443)) [@mythrocks](https://github.com/mythrocks)
- Fix memcheck error in gtest SegmentedGatherTest/GatherSliced ([#9442](https://github.com/rapidsai/cudf/pull/9442)) [@davidwendt](https://github.com/davidwendt)
- Revert "Fix quantile division / partition handling for dask-cudf sort⦠([#9438](https://github.com/rapidsai/cudf/pull/9438)) [@charlesbluca](https://github.com/charlesbluca)
- Allow int-like objects for the `decimals` argument in `round` ([#9428](https://github.com/rapidsai/cudf/pull/9428)) [@shwina](https://github.com/shwina)
- Fix stream compaction's `drop_duplicates` API to use stable sort ([#9417](https://github.com/rapidsai/cudf/pull/9417)) [@ttnghia](https://github.com/ttnghia)
- Skip Comparing Uniform Window Results in Var/std Tests ([#9416](https://github.com/rapidsai/cudf/pull/9416)) [@isVoid](https://github.com/isVoid)
- Fix `StructColumn.to_pandas` type handling issues ([#9388](https://github.com/rapidsai/cudf/pull/9388)) [@galipremsagar](https://github.com/galipremsagar)
- Correct issues in the build dir cudf-config.cmake ([#9386](https://github.com/rapidsai/cudf/pull/9386)) [@robertmaynard](https://github.com/robertmaynard)
- Fix Java table partition test to account for non-deterministic ordering ([#9385](https://github.com/rapidsai/cudf/pull/9385)) [@jlowe](https://github.com/jlowe)
- Fix timestamp truncation/overflow bugs in orc/parquet ([#9382](https://github.com/rapidsai/cudf/pull/9382)) [@PointKernel](https://github.com/PointKernel)
- Fix the crash in stats code ([#9368](https://github.com/rapidsai/cudf/pull/9368)) [@devavret](https://github.com/devavret)
- Make Series.hash_encode results reproducible. ([#9366](https://github.com/rapidsai/cudf/pull/9366)) [@bdice](https://github.com/bdice)
- Fix libcudf compile warnings on debug 11.4 build ([#9360](https://github.com/rapidsai/cudf/pull/9360)) [@davidwendt](https://github.com/davidwendt)
- Fail gracefully when compiling python UDFs that attempt to access columns with unsupported dtypes ([#9359](https://github.com/rapidsai/cudf/pull/9359)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Set pass_filenames: false in mypy pre-commit configuration. ([#9349](https://github.com/rapidsai/cudf/pull/9349)) [@bdice](https://github.com/bdice)
- Fix cudf_assert in cudf::io::orc::gpu::gpuDecodeOrcColumnData ([#9348](https://github.com/rapidsai/cudf/pull/9348)) [@davidwendt](https://github.com/davidwendt)
- Fix memcheck error in groupby-tdigest get_scalar_minmax ([#9339](https://github.com/rapidsai/cudf/pull/9339)) [@davidwendt](https://github.com/davidwendt)
- Optimizations for `cudf.concat` when `axis=1` ([#9333](https://github.com/rapidsai/cudf/pull/9333)) [@galipremsagar](https://github.com/galipremsagar)
- Use f-string in join helper warning message. ([#9325](https://github.com/rapidsai/cudf/pull/9325)) [@bdice](https://github.com/bdice)
- Avoid casting to list or struct dtypes in dask_cudf.read_parquet ([#9314](https://github.com/rapidsai/cudf/pull/9314)) [@rjzamora](https://github.com/rjzamora)
- Fix null count in statistics for parquet ([#9303](https://github.com/rapidsai/cudf/pull/9303)) [@devavret](https://github.com/devavret)
- Potential overflow of `decimal32` when casting to `int64_t` ([#9287](https://github.com/rapidsai/cudf/pull/9287)) [@codereport](https://github.com/codereport)
- Fix quantile division / partition handling for dask-cudf sort on null dataframes ([#9259](https://github.com/rapidsai/cudf/pull/9259)) [@charlesbluca](https://github.com/charlesbluca)
- Updating cudf version also updates rapids cmake branch ([#9249](https://github.com/rapidsai/cudf/pull/9249)) [@robertmaynard](https://github.com/robertmaynard)
- Implement `one_hot_encoding` in libcudf and bind to python ([#9229](https://github.com/rapidsai/cudf/pull/9229)) [@isVoid](https://github.com/isVoid)
- BUG FIX: CSV Writer ignores the header parameter when no metadata is provided ([#8740](https://github.com/rapidsai/cudf/pull/8740)) [@skirui-source](https://github.com/skirui-source)

## 📖 Documentation

- Update Documentation to use `TYPED_TEST_SUITE` ([#9654](https://github.com/rapidsai/cudf/pull/9654)) [@codereport](https://github.com/codereport)
- Add dedicated page for `StringHandling` in python docs ([#9624](https://github.com/rapidsai/cudf/pull/9624)) [@galipremsagar](https://github.com/galipremsagar)
- Update docstring of `DataFrame.merge` ([#9572](https://github.com/rapidsai/cudf/pull/9572)) [@galipremsagar](https://github.com/galipremsagar)
- Use raw strings to avoid SyntaxErrors in parsed docstrings. ([#9526](https://github.com/rapidsai/cudf/pull/9526)) [@bdice](https://github.com/bdice)
- Add example to docstrings in `rolling.apply` ([#9522](https://github.com/rapidsai/cudf/pull/9522)) [@isVoid](https://github.com/isVoid)
- Update help message to escape quotes in ./build.sh --cmake-args. ([#9494](https://github.com/rapidsai/cudf/pull/9494)) [@bdice](https://github.com/bdice)
- Improve Python docstring formatting. ([#9493](https://github.com/rapidsai/cudf/pull/9493)) [@bdice](https://github.com/bdice)
- Update table of I/O supported types ([#9476](https://github.com/rapidsai/cudf/pull/9476)) [@vuule](https://github.com/vuule)
- Document invalid regex patterns as undefined behavior ([#9473](https://github.com/rapidsai/cudf/pull/9473)) [@davidwendt](https://github.com/davidwendt)
- Miscellaneous documentation fixes to `cudf` ([#9471](https://github.com/rapidsai/cudf/pull/9471)) [@galipremsagar](https://github.com/galipremsagar)
- Fix many documentation errors in libcudf. ([#9355](https://github.com/rapidsai/cudf/pull/9355)) [@karthikeyann](https://github.com/karthikeyann)
- Fixing SubwordTokenizer docs issue ([#9354](https://github.com/rapidsai/cudf/pull/9354)) [@mayankanand007](https://github.com/mayankanand007)
- Improved deprecation warnings. ([#9347](https://github.com/rapidsai/cudf/pull/9347)) [@bdice](https://github.com/bdice)
- Reorder `mr, stream` parameters to `stream, mr` in docs ([#9308](https://github.com/rapidsai/cudf/pull/9308)) [@karthikeyann](https://github.com/karthikeyann)
- Deprecate method parameters to DataFrame.join, DataFrame.merge. ([#9291](https://github.com/rapidsai/cudf/pull/9291)) [@bdice](https://github.com/bdice)
- Added deprecation warning for `.label_encoding()` ([#9289](https://github.com/rapidsai/cudf/pull/9289)) [@mayankanand007](https://github.com/mayankanand007)

## 🚀 New Features

- Enable Series.divide and DataFrame.divide ([#9630](https://github.com/rapidsai/cudf/pull/9630)) [@vyasr](https://github.com/vyasr)
- Update `bitmask_and` and `bitmask_or` to return a pair of resulting mask and count of unset bits ([#9616](https://github.com/rapidsai/cudf/pull/9616)) [@PointKernel](https://github.com/PointKernel)
- Add handling of mixed numeric types in `to_dlpack` ([#9585](https://github.com/rapidsai/cudf/pull/9585)) [@galipremsagar](https://github.com/galipremsagar)
- Support re.Pattern object for pat arg in str.replace ([#9573](https://github.com/rapidsai/cudf/pull/9573)) [@davidwendt](https://github.com/davidwendt)
- Add JNI for `lists::drop_list_duplicates` with keys-values input column ([#9553](https://github.com/rapidsai/cudf/pull/9553)) [@ttnghia](https://github.com/ttnghia)
- Support structs column in `min`, `max`, `argmin` and `argmax` groupby aggregate() and scan() ([#9545](https://github.com/rapidsai/cudf/pull/9545)) [@ttnghia](https://github.com/ttnghia)
- Move libcudacxx to use `rapids_cpm` and use newer versions ([#9539](https://github.com/rapidsai/cudf/pull/9539)) [@robertmaynard](https://github.com/robertmaynard)
- Add scan min/max support for chrono types to libcudf reduction-scan (not groupby scan) ([#9518](https://github.com/rapidsai/cudf/pull/9518)) [@davidwendt](https://github.com/davidwendt)
- Support `args=` in `apply` ([#9514](https://github.com/rapidsai/cudf/pull/9514)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add groupby scan min/max support for strings values ([#9502](https://github.com/rapidsai/cudf/pull/9502)) [@davidwendt](https://github.com/davidwendt)
- Add list output option to character_ngrams() function ([#9499](https://github.com/rapidsai/cudf/pull/9499)) [@davidwendt](https://github.com/davidwendt)
- More granular column selection in ORC reader ([#9496](https://github.com/rapidsai/cudf/pull/9496)) [@vuule](https://github.com/vuule)
- add min_periods, ddof to groupby covariance, & correlation aggregation ([#9492](https://github.com/rapidsai/cudf/pull/9492)) [@karthikeyann](https://github.com/karthikeyann)
- Implement Series.datetime.floor ([#9488](https://github.com/rapidsai/cudf/pull/9488)) [@skirui-source](https://github.com/skirui-source)
- Enable linting of CMake files using pre-commit ([#9484](https://github.com/rapidsai/cudf/pull/9484)) [@vyasr](https://github.com/vyasr)
- Add support for single-line regex anchors ^/$ in contains_re ([#9482](https://github.com/rapidsai/cudf/pull/9482)) [@davidwendt](https://github.com/davidwendt)
- Augment `order_by` to Accept a List of `null_precedence` ([#9455](https://github.com/rapidsai/cudf/pull/9455)) [@isVoid](https://github.com/isVoid)
- Add format API for list column of strings ([#9454](https://github.com/rapidsai/cudf/pull/9454)) [@davidwendt](https://github.com/davidwendt)
- Enable Datetime/Timedelta dtypes in Masked UDFs ([#9451](https://github.com/rapidsai/cudf/pull/9451)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add cudf python groupby.diff ([#9446](https://github.com/rapidsai/cudf/pull/9446)) [@karthikeyann](https://github.com/karthikeyann)
- Implement `lists::stable_sort_lists` for stable sorting of elements within each row of lists column ([#9425](https://github.com/rapidsai/cudf/pull/9425)) [@ttnghia](https://github.com/ttnghia)
- add ctest memcheck using cuda-sanitizer ([#9414](https://github.com/rapidsai/cudf/pull/9414)) [@karthikeyann](https://github.com/karthikeyann)
- Support Unary Operations in Masked UDF ([#9409](https://github.com/rapidsai/cudf/pull/9409)) [@isVoid](https://github.com/isVoid)
- Move Several Series Function to Frame ([#9394](https://github.com/rapidsai/cudf/pull/9394)) [@isVoid](https://github.com/isVoid)
- MD5 Python hash API ([#9390](https://github.com/rapidsai/cudf/pull/9390)) [@bdice](https://github.com/bdice)
- Add cudf strings is_title API ([#9380](https://github.com/rapidsai/cudf/pull/9380)) [@davidwendt](https://github.com/davidwendt)
- Enable casting to int64, uint64, and double in AST code. ([#9379](https://github.com/rapidsai/cudf/pull/9379)) [@vyasr](https://github.com/vyasr)
- Add support for writing ORC with map columns ([#9369](https://github.com/rapidsai/cudf/pull/9369)) [@vuule](https://github.com/vuule)
- extract_list_elements() with column_view indices ([#9367](https://github.com/rapidsai/cudf/pull/9367)) [@mythrocks](https://github.com/mythrocks)
- Reimplement `lists::drop_list_duplicates` for keys-values lists columns ([#9345](https://github.com/rapidsai/cudf/pull/9345)) [@ttnghia](https://github.com/ttnghia)
- Support Python UDFs written in terms of rows ([#9343](https://github.com/rapidsai/cudf/pull/9343)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- JNI: Support nested types in ORC writer ([#9334](https://github.com/rapidsai/cudf/pull/9334)) [@firestarman](https://github.com/firestarman)
- Optionally nullify out-of-bounds indices in segmented_gather(). ([#9318](https://github.com/rapidsai/cudf/pull/9318)) [@mythrocks](https://github.com/mythrocks)
- Add shallow hash function and shallow equality comparison for column_view ([#9312](https://github.com/rapidsai/cudf/pull/9312)) [@karthikeyann](https://github.com/karthikeyann)
- Add CudaMemoryBuffer for cudaMalloc memory using RMM cuda_memory_resource ([#9311](https://github.com/rapidsai/cudf/pull/9311)) [@rongou](https://github.com/rongou)
- Add parameters to control row index stride and stripe size in ORC writer ([#9310](https://github.com/rapidsai/cudf/pull/9310)) [@vuule](https://github.com/vuule)
- Add `na_position` param to dask-cudf `sort_values` ([#9264](https://github.com/rapidsai/cudf/pull/9264)) [@charlesbluca](https://github.com/charlesbluca)
- Add `ascending` parameter for dask-cudf `sort_values` ([#9250](https://github.com/rapidsai/cudf/pull/9250)) [@charlesbluca](https://github.com/charlesbluca)
- New array conversion methods ([#9236](https://github.com/rapidsai/cudf/pull/9236)) [@vyasr](https://github.com/vyasr)
- Series `apply` method backed by masked UDFs ([#9217](https://github.com/rapidsai/cudf/pull/9217)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Grouping by frequency and resampling ([#9178](https://github.com/rapidsai/cudf/pull/9178)) [@shwina](https://github.com/shwina)
- Pure-python masked UDFs ([#9174](https://github.com/rapidsai/cudf/pull/9174)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add Covariance, Pearson correlation for sort groupby (libcudf) ([#9154](https://github.com/rapidsai/cudf/pull/9154)) [@karthikeyann](https://github.com/karthikeyann)
- Add `calendrical_month_sequence` in c++ and `date_range` in python ([#8886](https://github.com/rapidsai/cudf/pull/8886)) [@shwina](https://github.com/shwina)

## 🛠️ Improvements

- Followup to PR 9088 comments ([#9659](https://github.com/rapidsai/cudf/pull/9659)) [@cwharris](https://github.com/cwharris)
- Update cuCollections to version that supports installed libcudacxx ([#9633](https://github.com/rapidsai/cudf/pull/9633)) [@robertmaynard](https://github.com/robertmaynard)
- Add `11.5` dev.yml to `cudf` ([#9617](https://github.com/rapidsai/cudf/pull/9617)) [@galipremsagar](https://github.com/galipremsagar)
- Add `xfail` for parquet reader `11.5` issue ([#9612](https://github.com/rapidsai/cudf/pull/9612)) [@galipremsagar](https://github.com/galipremsagar)
- remove deprecated Rmm.initialize method ([#9607](https://github.com/rapidsai/cudf/pull/9607)) [@rongou](https://github.com/rongou)
- Use HostColumnVectorCore for child columns in JCudfSerialization.unpackHostColumnVectors ([#9596](https://github.com/rapidsai/cudf/pull/9596)) [@sperlingxx](https://github.com/sperlingxx)
- Set RMM pool to a fixed size in JNI ([#9583](https://github.com/rapidsai/cudf/pull/9583)) [@rongou](https://github.com/rongou)
- Use nvCOMP for Snappy compression/decompression ([#9582](https://github.com/rapidsai/cudf/pull/9582)) [@vuule](https://github.com/vuule)
- Build CUDA version agnostic packages for dask-cudf ([#9578](https://github.com/rapidsai/cudf/pull/9578)) [@Ethyling](https://github.com/Ethyling)
- Fixed tests warning: "TYPED_TEST_CASE is deprecated, please use TYPED_TEST_SUITE" ([#9574](https://github.com/rapidsai/cudf/pull/9574)) [@ttnghia](https://github.com/ttnghia)
- Enable CMake format in CI and fix style ([#9570](https://github.com/rapidsai/cudf/pull/9570)) [@vyasr](https://github.com/vyasr)
- Add NVTX Start/End Ranges to JNI ([#9563](https://github.com/rapidsai/cudf/pull/9563)) [@abellina](https://github.com/abellina)
- Add librdkafka and python-confluent-kafka to dev conda environments s⦠([#9562](https://github.com/rapidsai/cudf/pull/9562)) [@jdye64](https://github.com/jdye64)
- Add offsets_begin/end() to strings_column_view ([#9559](https://github.com/rapidsai/cudf/pull/9559)) [@davidwendt](https://github.com/davidwendt)
- remove alignment options for RMM jni ([#9550](https://github.com/rapidsai/cudf/pull/9550)) [@rongou](https://github.com/rongou)
- Add axis parameter passthrough to `DataFrame` and `Series` take for pandas API compatibility ([#9549](https://github.com/rapidsai/cudf/pull/9549)) [@dantegd](https://github.com/dantegd)
- Remove sizeof and standardize on memory_usage ([#9544](https://github.com/rapidsai/cudf/pull/9544)) [@vyasr](https://github.com/vyasr)
- Adds cudaProfilerStart/cudaProfilerStop in JNI api ([#9543](https://github.com/rapidsai/cudf/pull/9543)) [@abellina](https://github.com/abellina)
- Generalize comparison binary operations ([#9542](https://github.com/rapidsai/cudf/pull/9542)) [@vyasr](https://github.com/vyasr)
- Expose APIs to wrap CUDA or RMM allocations with a Java device buffer instance ([#9538](https://github.com/rapidsai/cudf/pull/9538)) [@jlowe](https://github.com/jlowe)
- Add scan sum support for duration types to libcudf ([#9536](https://github.com/rapidsai/cudf/pull/9536)) [@davidwendt](https://github.com/davidwendt)
- Force inlining to improve AST performance ([#9530](https://github.com/rapidsai/cudf/pull/9530)) [@vyasr](https://github.com/vyasr)
- Generalize some more indexed frame methods ([#9529](https://github.com/rapidsai/cudf/pull/9529)) [@vyasr](https://github.com/vyasr)
- Add Java bindings for rolling window stddev aggregation ([#9527](https://github.com/rapidsai/cudf/pull/9527)) [@razajafri](https://github.com/razajafri)
- catch rmm::out_of_memory exceptions in jni ([#9525](https://github.com/rapidsai/cudf/pull/9525)) [@rongou](https://github.com/rongou)
- Add an overload of `make_empty_column` with `type_id` parameter ([#9524](https://github.com/rapidsai/cudf/pull/9524)) [@ttnghia](https://github.com/ttnghia)
- Accelerate conditional inner joins with larger right tables ([#9523](https://github.com/rapidsai/cudf/pull/9523)) [@vyasr](https://github.com/vyasr)
- Initial pass of generalizing `decimal` support in `cudf` python layer ([#9517](https://github.com/rapidsai/cudf/pull/9517)) [@galipremsagar](https://github.com/galipremsagar)
- Cleanup for flattening nested columns ([#9509](https://github.com/rapidsai/cudf/pull/9509)) [@rwlee](https://github.com/rwlee)
- Enable running tests using RMM arena and async memory resources ([#9506](https://github.com/rapidsai/cudf/pull/9506)) [@rongou](https://github.com/rongou)
- Remove dependency on six. ([#9495](https://github.com/rapidsai/cudf/pull/9495)) [@bdice](https://github.com/bdice)
- Cleanup some libcudf strings gtests ([#9489](https://github.com/rapidsai/cudf/pull/9489)) [@davidwendt](https://github.com/davidwendt)
- Rename strings/array_tests.cu to strings/array_tests.cpp ([#9480](https://github.com/rapidsai/cudf/pull/9480)) [@davidwendt](https://github.com/davidwendt)
- Refactor sorting APIs ([#9464](https://github.com/rapidsai/cudf/pull/9464)) [@vyasr](https://github.com/vyasr)
- Implement DataFrame.hash_values, deprecate DataFrame.hash_columns. ([#9458](https://github.com/rapidsai/cudf/pull/9458)) [@bdice](https://github.com/bdice)
- Deprecate Series.hash_encode. ([#9457](https://github.com/rapidsai/cudf/pull/9457)) [@bdice](https://github.com/bdice)
- Update `conda` recipes for Enhanced Compatibility effort ([#9456](https://github.com/rapidsai/cudf/pull/9456)) [@ajschmidt8](https://github.com/ajschmidt8)
- Small clean up to simplify column selection code in ORC reader ([#9444](https://github.com/rapidsai/cudf/pull/9444)) [@vuule](https://github.com/vuule)
- add missing stream to scalar.is_valid() wherever stream is available ([#9436](https://github.com/rapidsai/cudf/pull/9436)) [@karthikeyann](https://github.com/karthikeyann)
- Adds Deprecation Warnings to `one_hot_encoding` and Implement `get_dummies` with Cython API ([#9435](https://github.com/rapidsai/cudf/pull/9435)) [@isVoid](https://github.com/isVoid)
- Update pre-commit hook URLs. ([#9433](https://github.com/rapidsai/cudf/pull/9433)) [@bdice](https://github.com/bdice)
- Remove pyarrow import in `dask_cudf.io.parquet` ([#9429](https://github.com/rapidsai/cudf/pull/9429)) [@charlesbluca](https://github.com/charlesbluca)
- Miscellaneous improvements for UDFs ([#9422](https://github.com/rapidsai/cudf/pull/9422)) [@isVoid](https://github.com/isVoid)
- Use pre-commit for CI ([#9412](https://github.com/rapidsai/cudf/pull/9412)) [@vyasr](https://github.com/vyasr)
- Update to UCX-Py 0.23 ([#9407](https://github.com/rapidsai/cudf/pull/9407)) [@pentschev](https://github.com/pentschev)
- Expose OutOfBoundsPolicy in JNI for Table.gather ([#9406](https://github.com/rapidsai/cudf/pull/9406)) [@abellina](https://github.com/abellina)
- Improvements to tdigest aggregation code. ([#9403](https://github.com/rapidsai/cudf/pull/9403)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add Java API to deserialize a table to host columns ([#9402](https://github.com/rapidsai/cudf/pull/9402)) [@jlowe](https://github.com/jlowe)
- Frame copy to use __class__ instead of type() ([#9397](https://github.com/rapidsai/cudf/pull/9397)) [@madsbk](https://github.com/madsbk)
- Change all DeprecationWarnings to FutureWarning. ([#9392](https://github.com/rapidsai/cudf/pull/9392)) [@bdice](https://github.com/bdice)
- Update Java nvcomp JNI bindings to nvcomp 2.x API ([#9384](https://github.com/rapidsai/cudf/pull/9384)) [@jbrennan333](https://github.com/jbrennan333)
- Add IndexedFrame class and move SingleColumnFrame to a separate module ([#9378](https://github.com/rapidsai/cudf/pull/9378)) [@vyasr](https://github.com/vyasr)
- Support Arrow NativeFile and PythonFile for remote ORC storage ([#9377](https://github.com/rapidsai/cudf/pull/9377)) [@rjzamora](https://github.com/rjzamora)
- Use Arrow PythonFile for remote CSV storage ([#9376](https://github.com/rapidsai/cudf/pull/9376)) [@rjzamora](https://github.com/rjzamora)
- Add multi-threaded writing to GDS writes ([#9372](https://github.com/rapidsai/cudf/pull/9372)) [@devavret](https://github.com/devavret)
- Miscellaneous column cleanup ([#9370](https://github.com/rapidsai/cudf/pull/9370)) [@vyasr](https://github.com/vyasr)
- Use single kernel to extract all groups in cudf::strings::extract ([#9358](https://github.com/rapidsai/cudf/pull/9358)) [@davidwendt](https://github.com/davidwendt)
- Consolidate binary ops into `Frame` ([#9357](https://github.com/rapidsai/cudf/pull/9357)) [@isVoid](https://github.com/isVoid)
- Move rank scan implementations from scan_inclusive.cu to rank_scan.cu ([#9351](https://github.com/rapidsai/cudf/pull/9351)) [@davidwendt](https://github.com/davidwendt)
- Remove usage of deprecated thrust::host_space_tag. ([#9350](https://github.com/rapidsai/cudf/pull/9350)) [@bdice](https://github.com/bdice)
- Use Default Memory Resource for Temporaries in `reduction.cpp` ([#9344](https://github.com/rapidsai/cudf/pull/9344)) [@isVoid](https://github.com/isVoid)
- Fix Cython compilation warnings. ([#9327](https://github.com/rapidsai/cudf/pull/9327)) [@bdice](https://github.com/bdice)
- Fix some unused variable warnings in libcudf ([#9326](https://github.com/rapidsai/cudf/pull/9326)) [@davidwendt](https://github.com/davidwendt)
- Use optional-iterator for copy-if-else kernel ([#9324](https://github.com/rapidsai/cudf/pull/9324)) [@davidwendt](https://github.com/davidwendt)
- Remove Table class ([#9315](https://github.com/rapidsai/cudf/pull/9315)) [@vyasr](https://github.com/vyasr)
- Unpin `dask` and `distributed` in CI ([#9307](https://github.com/rapidsai/cudf/pull/9307)) [@galipremsagar](https://github.com/galipremsagar)
- Add optional-iterator support to indexalator ([#9306](https://github.com/rapidsai/cudf/pull/9306)) [@davidwendt](https://github.com/davidwendt)
- Consolidate more methods in Frame ([#9305](https://github.com/rapidsai/cudf/pull/9305)) [@vyasr](https://github.com/vyasr)
- Add Arrow-NativeFile and PythonFile support to read_parquet and read_csv in cudf ([#9304](https://github.com/rapidsai/cudf/pull/9304)) [@rjzamora](https://github.com/rjzamora)
- Pin mypy in .pre-commit-config.yaml to match conda environment pinning. ([#9300](https://github.com/rapidsai/cudf/pull/9300)) [@bdice](https://github.com/bdice)
- Use gather.hpp when gather-map exists in device memory ([#9299](https://github.com/rapidsai/cudf/pull/9299)) [@davidwendt](https://github.com/davidwendt)
- Fix Automerger for `Branch-21.12` from `branch-21.10` ([#9285](https://github.com/rapidsai/cudf/pull/9285)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor cuIO timestamp processing with `cuda::std::chrono` ([#9278](https://github.com/rapidsai/cudf/pull/9278)) [@PointKernel](https://github.com/PointKernel)
- Change strings copy_if_else to use optional-iterator instead of pair-iterator ([#9266](https://github.com/rapidsai/cudf/pull/9266)) [@davidwendt](https://github.com/davidwendt)
- Update cudf java bindings to 21.12.0-SNAPSHOT ([#9248](https://github.com/rapidsai/cudf/pull/9248)) [@pxLi](https://github.com/pxLi)
- Various internal MultiIndex improvements ([#9243](https://github.com/rapidsai/cudf/pull/9243)) [@vyasr](https://github.com/vyasr)
- Add detail interface for `split` and `slice(table_view)`, refactors both function with `host_span` ([#9226](https://github.com/rapidsai/cudf/pull/9226)) [@isVoid](https://github.com/isVoid)
- Refactor MD5 implementation. ([#9212](https://github.com/rapidsai/cudf/pull/9212)) [@bdice](https://github.com/bdice)
- Update groupby result_cache to allow sharing intermediate results based on column_view instead of requests. ([#9195](https://github.com/rapidsai/cudf/pull/9195)) [@karthikeyann](https://github.com/karthikeyann)
- Use nvcomp's snappy decompressor in avro reader ([#9181](https://github.com/rapidsai/cudf/pull/9181)) [@devavret](https://github.com/devavret)
- Add `isocalendar` API support ([#9169](https://github.com/rapidsai/cudf/pull/9169)) [@marlenezw](https://github.com/marlenezw)
- Simplify read_json by removing unnecessary reader/impl classes ([#9088](https://github.com/rapidsai/cudf/pull/9088)) [@cwharris](https://github.com/cwharris)
- Simplify read_csv by removing unnecessary reader/impl classes ([#9041](https://github.com/rapidsai/cudf/pull/9041)) [@cwharris](https://github.com/cwharris)
- Refactor hash join with cuCollections multimap ([#8934](https://github.com/rapidsai/cudf/pull/8934)) [@PointKernel](https://github.com/PointKernel)
# cuDF 21.10.00 (7 Oct 2021)
## 🚨 Breaking Changes
- Remove Cython APIs for table view generation ([#9199](https://github.com/rapidsai/cudf/pull/9199)) [@vyasr](https://github.com/vyasr)
- Upgrade `pandas` version in `cudf` ([#9147](https://github.com/rapidsai/cudf/pull/9147)) [@galipremsagar](https://github.com/galipremsagar)
- Make AST operators nullable ([#9096](https://github.com/rapidsai/cudf/pull/9096)) [@vyasr](https://github.com/vyasr)
- Remove the option to pass data types as strings to `read_csv` and `read_json` ([#9079](https://github.com/rapidsai/cudf/pull/9079)) [@vuule](https://github.com/vuule)
- Update JNI java CSV APIs to not use deprecated API ([#9066](https://github.com/rapidsai/cudf/pull/9066)) [@revans2](https://github.com/revans2)
- Support additional format specifiers in from_timestamps ([#9047](https://github.com/rapidsai/cudf/pull/9047)) [@davidwendt](https://github.com/davidwendt)
- Expose expression base class publicly and simplify public AST API ([#9045](https://github.com/rapidsai/cudf/pull/9045)) [@vyasr](https://github.com/vyasr)
- Add support for struct type in ORC writer ([#9025](https://github.com/rapidsai/cudf/pull/9025)) [@vuule](https://github.com/vuule)
- Remove aliases of various api.types APIs from utils.dtypes. ([#9011](https://github.com/rapidsai/cudf/pull/9011)) [@vyasr](https://github.com/vyasr)
- Java bindings for conditional join output sizes ([#9002](https://github.com/rapidsai/cudf/pull/9002)) [@jlowe](https://github.com/jlowe)
- Move compute_column API out of ast namespace ([#8957](https://github.com/rapidsai/cudf/pull/8957)) [@vyasr](https://github.com/vyasr)
- `cudf.dtype` function ([#8949](https://github.com/rapidsai/cudf/pull/8949)) [@shwina](https://github.com/shwina)
- Refactor Frame reductions ([#8944](https://github.com/rapidsai/cudf/pull/8944)) [@vyasr](https://github.com/vyasr)
- Add nested column selection to parquet reader ([#8933](https://github.com/rapidsai/cudf/pull/8933)) [@devavret](https://github.com/devavret)
- JNI Aggregation Type Changes ([#8919](https://github.com/rapidsai/cudf/pull/8919)) [@revans2](https://github.com/revans2)
- Add groupby_aggregation and groupby_scan_aggregation classes and force their usage. ([#8906](https://github.com/rapidsai/cudf/pull/8906)) [@nvdbaranec](https://github.com/nvdbaranec)
- Expand CSV and JSON reader APIs to accept `dtypes` as a vector or map of `data_type` objects ([#8856](https://github.com/rapidsai/cudf/pull/8856)) [@vuule](https://github.com/vuule)
- Change cudf docs theme to pydata theme ([#8746](https://github.com/rapidsai/cudf/pull/8746)) [@galipremsagar](https://github.com/galipremsagar)
- Enable compiled binary ops in libcudf, python and java ([#8741](https://github.com/rapidsai/cudf/pull/8741)) [@karthikeyann](https://github.com/karthikeyann)
- Make groupby transform-like op order match original data order ([#8720](https://github.com/rapidsai/cudf/pull/8720)) [@isVoid](https://github.com/isVoid)
## 🐛 Bug Fixes
- `fixed_point` `cudf::groupby` for `mean` aggregation ([#9296](https://github.com/rapidsai/cudf/pull/9296)) [@codereport](https://github.com/codereport)
- Fix `interleave_columns` when the input string lists column having empty child column ([#9292](https://github.com/rapidsai/cudf/pull/9292)) [@ttnghia](https://github.com/ttnghia)
- Update nvcomp to include fixes for installation of headers ([#9276](https://github.com/rapidsai/cudf/pull/9276)) [@devavret](https://github.com/devavret)
- Fix Java column leak in testParquetWriteMap ([#9271](https://github.com/rapidsai/cudf/pull/9271)) [@jlowe](https://github.com/jlowe)
- Fix call to thrust::reduce_by_key in argmin/argmax libcudf groupby ([#9263](https://github.com/rapidsai/cudf/pull/9263)) [@davidwendt](https://github.com/davidwendt)
- Fixing empty input to getMapValue crashing ([#9262](https://github.com/rapidsai/cudf/pull/9262)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix duplicate names issue in `MultiIndex.deserialize` ([#9258](https://github.com/rapidsai/cudf/pull/9258)) [@galipremsagar](https://github.com/galipremsagar)
- `Dataframe.sort_index` optimizations ([#9238](https://github.com/rapidsai/cudf/pull/9238)) [@galipremsagar](https://github.com/galipremsagar)
- Temporarily disabling problematic test in parquet writer ([#9230](https://github.com/rapidsai/cudf/pull/9230)) [@devavret](https://github.com/devavret)
- Explicitly disable groupby on unsupported key types. ([#9227](https://github.com/rapidsai/cudf/pull/9227)) [@mythrocks](https://github.com/mythrocks)
- Fix `gather` for sliced input structs column ([#9218](https://github.com/rapidsai/cudf/pull/9218)) [@ttnghia](https://github.com/ttnghia)
- Fix JNI code for left semi and anti joins ([#9207](https://github.com/rapidsai/cudf/pull/9207)) [@jlowe](https://github.com/jlowe)
- Only install thrust when using a non 'system' version ([#9206](https://github.com/rapidsai/cudf/pull/9206)) [@robertmaynard](https://github.com/robertmaynard)
- Remove zlib from libcudf public CMake dependencies ([#9204](https://github.com/rapidsai/cudf/pull/9204)) [@robertmaynard](https://github.com/robertmaynard)
- Fix out-of-bounds memory read in orc gpuEncodeOrcColumnData ([#9196](https://github.com/rapidsai/cudf/pull/9196)) [@davidwendt](https://github.com/davidwendt)
- Fix `gather()` for `STRUCT` inputs with no nulls in members. ([#9194](https://github.com/rapidsai/cudf/pull/9194)) [@mythrocks](https://github.com/mythrocks)
- get_cucollections properly uses rapids_cpm_find ([#9189](https://github.com/rapidsai/cudf/pull/9189)) [@robertmaynard](https://github.com/robertmaynard)
- rapids-export correctly reference build code block and doc strings ([#9186](https://github.com/rapidsai/cudf/pull/9186)) [@robertmaynard](https://github.com/robertmaynard)
- Fix logic while parsing the sum statistic for numerical orc columns ([#9183](https://github.com/rapidsai/cudf/pull/9183)) [@ayushdg](https://github.com/ayushdg)
- Add handling for nulls in `dask_cudf.sorting.quantile_divisions` ([#9171](https://github.com/rapidsai/cudf/pull/9171)) [@charlesbluca](https://github.com/charlesbluca)
- Approximate overflow detection in ORC statistics ([#9163](https://github.com/rapidsai/cudf/pull/9163)) [@vuule](https://github.com/vuule)
- Use decimal precision metadata when reading from parquet files ([#9162](https://github.com/rapidsai/cudf/pull/9162)) [@shwina](https://github.com/shwina)
- Fix variable name in Java build script ([#9161](https://github.com/rapidsai/cudf/pull/9161)) [@jlowe](https://github.com/jlowe)
- Import rapids-cmake modules using the correct cmake variable. ([#9149](https://github.com/rapidsai/cudf/pull/9149)) [@robertmaynard](https://github.com/robertmaynard)
- Fix conditional joins with empty left table ([#9146](https://github.com/rapidsai/cudf/pull/9146)) [@vyasr](https://github.com/vyasr)
- Fix joining on indexes with duplicate level names ([#9137](https://github.com/rapidsai/cudf/pull/9137)) [@shwina](https://github.com/shwina)
- Fixes missing child column name in dtype while reading ORC file. ([#9134](https://github.com/rapidsai/cudf/pull/9134)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Apply type metadata after column is slice-copied ([#9131](https://github.com/rapidsai/cudf/pull/9131)) [@isVoid](https://github.com/isVoid)
- Fix a bug: inner_join_size return zero if build table is empty ([#9128](https://github.com/rapidsai/cudf/pull/9128)) [@PointKernel](https://github.com/PointKernel)
- Fix multi hive-partition parquet reading in dask-cudf ([#9122](https://github.com/rapidsai/cudf/pull/9122)) [@rjzamora](https://github.com/rjzamora)
- Support null literals in expressions ([#9117](https://github.com/rapidsai/cudf/pull/9117)) [@vyasr](https://github.com/vyasr)
- Fix cudf::hash_join output size for struct joins ([#9107](https://github.com/rapidsai/cudf/pull/9107)) [@jlowe](https://github.com/jlowe)
- Import fix ([#9104](https://github.com/rapidsai/cudf/pull/9104)) [@shwina](https://github.com/shwina)
- Fix cudf::strings::is_fixed_point checking of overflow for decimal32 ([#9093](https://github.com/rapidsai/cudf/pull/9093)) [@davidwendt](https://github.com/davidwendt)
- Fix branch_stack calculation in `row_bit_count()` ([#9076](https://github.com/rapidsai/cudf/pull/9076)) [@mythrocks](https://github.com/mythrocks)
- Fetch rapids-cmake to work around cuCollection cmake issue ([#9075](https://github.com/rapidsai/cudf/pull/9075)) [@jlowe](https://github.com/jlowe)
- Fix compilation errors in groupby benchmarks. ([#9072](https://github.com/rapidsai/cudf/pull/9072)) [@nvdbaranec](https://github.com/nvdbaranec)
- Preserve float16 upscaling ([#9069](https://github.com/rapidsai/cudf/pull/9069)) [@galipremsagar](https://github.com/galipremsagar)
- Fix memcheck read error in libcudf contiguous_split ([#9067](https://github.com/rapidsai/cudf/pull/9067)) [@davidwendt](https://github.com/davidwendt)
- Add support for reading ORC file with no row group index ([#9060](https://github.com/rapidsai/cudf/pull/9060)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Various multiindex related fixes ([#9036](https://github.com/rapidsai/cudf/pull/9036)) [@shwina](https://github.com/shwina)
- Avoid rebuilding cython in build.sh ([#9034](https://github.com/rapidsai/cudf/pull/9034)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add support for percentile dispatch in `dask_cudf` ([#9031](https://github.com/rapidsai/cudf/pull/9031)) [@galipremsagar](https://github.com/galipremsagar)
- cudf resolve nvcc 11.0 compiler crashes during codegen ([#9028](https://github.com/rapidsai/cudf/pull/9028)) [@robertmaynard](https://github.com/robertmaynard)
- Fetch correct grouping keys `agg` of dask groupby ([#9022](https://github.com/rapidsai/cudf/pull/9022)) [@galipremsagar](https://github.com/galipremsagar)
- Allow `where()` to work with a Series and `other=cudf.NA` ([#9019](https://github.com/rapidsai/cudf/pull/9019)) [@sarahyurick](https://github.com/sarahyurick)
- Use correct index when returning Series from `GroupBy.apply()` ([#9016](https://github.com/rapidsai/cudf/pull/9016)) [@charlesbluca](https://github.com/charlesbluca)
- Fix `Dataframe` indexer setitem when array is passed ([#9006](https://github.com/rapidsai/cudf/pull/9006)) [@galipremsagar](https://github.com/galipremsagar)
- Fix ORC reading of files with struct columns that have null values ([#9005](https://github.com/rapidsai/cudf/pull/9005)) [@vuule](https://github.com/vuule)
- Ensure JNI native libraries load when CompiledExpression loads ([#8997](https://github.com/rapidsai/cudf/pull/8997)) [@jlowe](https://github.com/jlowe)
- Fix memory read error in get_dremel_data in page_enc.cu ([#8995](https://github.com/rapidsai/cudf/pull/8995)) [@davidwendt](https://github.com/davidwendt)
- Fix memory write error in get_list_child_to_list_row_mapping utility ([#8994](https://github.com/rapidsai/cudf/pull/8994)) [@davidwendt](https://github.com/davidwendt)
- Fix debug compile error for csv_test.cpp ([#8981](https://github.com/rapidsai/cudf/pull/8981)) [@davidwendt](https://github.com/davidwendt)
- Fix memory read/write error in concatenate_lists_ignore_null ([#8978](https://github.com/rapidsai/cudf/pull/8978)) [@davidwendt](https://github.com/davidwendt)
- Fix concatenation of `cudf.RangeIndex` ([#8970](https://github.com/rapidsai/cudf/pull/8970)) [@galipremsagar](https://github.com/galipremsagar)
- Java conditional joins should not require matching column counts ([#8955](https://github.com/rapidsai/cudf/pull/8955)) [@jlowe](https://github.com/jlowe)
- Fix concatenate empty structs ([#8947](https://github.com/rapidsai/cudf/pull/8947)) [@sperlingxx](https://github.com/sperlingxx)
- Fix cuda-memcheck errors for some libcudf functions ([#8941](https://github.com/rapidsai/cudf/pull/8941)) [@davidwendt](https://github.com/davidwendt)
- Apply series name to result of `SeriesGroupby.apply()` ([#8939](https://github.com/rapidsai/cudf/pull/8939)) [@charlesbluca](https://github.com/charlesbluca)
- `cdef packed_columns` as `cppclass` instead of `struct` ([#8936](https://github.com/rapidsai/cudf/pull/8936)) [@charlesbluca](https://github.com/charlesbluca)
- Inserting a `cudf.NA` into a DataFrame ([#8923](https://github.com/rapidsai/cudf/pull/8923)) [@sarahyurick](https://github.com/sarahyurick)
- Support casting with Pandas dtype aliases ([#8920](https://github.com/rapidsai/cudf/pull/8920)) [@sarahyurick](https://github.com/sarahyurick)
- Allow `sort_values` to accept same `kind` values as Pandas ([#8912](https://github.com/rapidsai/cudf/pull/8912)) [@sarahyurick](https://github.com/sarahyurick)
- Enable casting to pandas nullable dtypes ([#8889](https://github.com/rapidsai/cudf/pull/8889)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix libcudf memory errors ([#8884](https://github.com/rapidsai/cudf/pull/8884)) [@karthikeyann](https://github.com/karthikeyann)
- Throw KeyError when accessing field from struct with nonexistent key ([#8880](https://github.com/rapidsai/cudf/pull/8880)) [@NV-jpt](https://github.com/NV-jpt)
- replace auto with auto& ref for cast<&> ([#8866](https://github.com/rapidsai/cudf/pull/8866)) [@karthikeyann](https://github.com/karthikeyann)
- Add missing include<optional> in binops ([#8864](https://github.com/rapidsai/cudf/pull/8864)) [@karthikeyann](https://github.com/karthikeyann)
- Fix `select_dtypes` to work when non-class dtypes present in dataframe ([#8849](https://github.com/rapidsai/cudf/pull/8849)) [@sarahyurick](https://github.com/sarahyurick)
- Re-enable JSON tests ([#8843](https://github.com/rapidsai/cudf/pull/8843)) [@vuule](https://github.com/vuule)
- Support header with embedded delimiter in csv writer ([#8798](https://github.com/rapidsai/cudf/pull/8798)) [@davidwendt](https://github.com/davidwendt)
## 📖 Documentation
- Add IO docs page in `cudf` documentation ([#9145](https://github.com/rapidsai/cudf/pull/9145)) [@galipremsagar](https://github.com/galipremsagar)
- use correct namespace in cuio code examples ([#9037](https://github.com/rapidsai/cudf/pull/9037)) [@cwharris](https://github.com/cwharris)
- Restructuring `Contributing doc` ([#9026](https://github.com/rapidsai/cudf/pull/9026)) [@iskode](https://github.com/iskode)
- Update stable version in readme ([#9008](https://github.com/rapidsai/cudf/pull/9008)) [@galipremsagar](https://github.com/galipremsagar)
- Add spans and more include guidelines to libcudf developer guide ([#8931](https://github.com/rapidsai/cudf/pull/8931)) [@harrism](https://github.com/harrism)
- Update Java build instructions to mention Arrow S3 and Docker ([#8867](https://github.com/rapidsai/cudf/pull/8867)) [@jlowe](https://github.com/jlowe)
- List GDS-enabled formats in the docs ([#8805](https://github.com/rapidsai/cudf/pull/8805)) [@vuule](https://github.com/vuule)
- Change cudf docs theme to pydata theme ([#8746](https://github.com/rapidsai/cudf/pull/8746)) [@galipremsagar](https://github.com/galipremsagar)
## 🚀 New Features
- Revert "Add shallow hash function and shallow equality comparison for column_view ([#9185)" (#9283](https://github.com/rapidsai/cudf/pull/9185)" (#9283)) [@karthikeyann](https://github.com/karthikeyann)
- Align `DataFrame.apply` signature with pandas ([#9275](https://github.com/rapidsai/cudf/pull/9275)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add struct type support for `drop_list_duplicates` ([#9202](https://github.com/rapidsai/cudf/pull/9202)) [@ttnghia](https://github.com/ttnghia)
- support CUDA async memory resource in JNI ([#9201](https://github.com/rapidsai/cudf/pull/9201)) [@rongou](https://github.com/rongou)
- Add shallow hash function and shallow equality comparison for column_view ([#9185](https://github.com/rapidsai/cudf/pull/9185)) [@karthikeyann](https://github.com/karthikeyann)
- Superimpose null masks for STRUCT columns. ([#9144](https://github.com/rapidsai/cudf/pull/9144)) [@mythrocks](https://github.com/mythrocks)
- Implemented bindings for `ceil` timestamp operation ([#9141](https://github.com/rapidsai/cudf/pull/9141)) [@shaneding](https://github.com/shaneding)
- Adding MAP type support for ORC Reader ([#9132](https://github.com/rapidsai/cudf/pull/9132)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Implement `interleave_columns` for lists with arbitrary nested type ([#9130](https://github.com/rapidsai/cudf/pull/9130)) [@ttnghia](https://github.com/ttnghia)
- Add python bindings to fixed-size window and groupby `rolling.var`, `rolling.std` ([#9097](https://github.com/rapidsai/cudf/pull/9097)) [@isVoid](https://github.com/isVoid)
- Make AST operators nullable ([#9096](https://github.com/rapidsai/cudf/pull/9096)) [@vyasr](https://github.com/vyasr)
- Java bindings for approx_percentile ([#9094](https://github.com/rapidsai/cudf/pull/9094)) [@andygrove](https://github.com/andygrove)
- Add `dseries.struct.explode` ([#9086](https://github.com/rapidsai/cudf/pull/9086)) [@isVoid](https://github.com/isVoid)
- Add support for BaseIndexer in Rolling APIs ([#9085](https://github.com/rapidsai/cudf/pull/9085)) [@galipremsagar](https://github.com/galipremsagar)
- Remove the option to pass data types as strings to `read_csv` and `read_json` ([#9079](https://github.com/rapidsai/cudf/pull/9079)) [@vuule](https://github.com/vuule)
- Add handling for nested dicts in dask-cudf groupby ([#9054](https://github.com/rapidsai/cudf/pull/9054)) [@charlesbluca](https://github.com/charlesbluca)
- Added Series.dt.is_quarter_start and Series.dt.is_quarter_end ([#9046](https://github.com/rapidsai/cudf/pull/9046)) [@TravisHester](https://github.com/TravisHester)
- Support nested types for nth_element reduction ([#9043](https://github.com/rapidsai/cudf/pull/9043)) [@sperlingxx](https://github.com/sperlingxx)
- Update sort groupby to use non-atomic operation ([#9035](https://github.com/rapidsai/cudf/pull/9035)) [@karthikeyann](https://github.com/karthikeyann)
- Add support for struct type in ORC writer ([#9025](https://github.com/rapidsai/cudf/pull/9025)) [@vuule](https://github.com/vuule)
- Implement `interleave_columns` for structs columns ([#9012](https://github.com/rapidsai/cudf/pull/9012)) [@ttnghia](https://github.com/ttnghia)
- Add groupby first and last aggregations ([#9004](https://github.com/rapidsai/cudf/pull/9004)) [@shwina](https://github.com/shwina)
- Add `DecimalBaseColumn` and move `as_decimal_column` ([#9001](https://github.com/rapidsai/cudf/pull/9001)) [@isVoid](https://github.com/isVoid)
- Python/Cython bindings for multibyte_split ([#8998](https://github.com/rapidsai/cudf/pull/8998)) [@jdye64](https://github.com/jdye64)
- Support scalar `months` in `add_calendrical_months`, extends API to INT32 support ([#8991](https://github.com/rapidsai/cudf/pull/8991)) [@isVoid](https://github.com/isVoid)
- Added Series.dt.is_month_end ([#8989](https://github.com/rapidsai/cudf/pull/8989)) [@TravisHester](https://github.com/TravisHester)
- Support for using tdigests to compute approximate percentiles. ([#8983](https://github.com/rapidsai/cudf/pull/8983)) [@nvdbaranec](https://github.com/nvdbaranec)
- Support "unflatten" of columns flattened via `flatten_nested_columns()`: ([#8956](https://github.com/rapidsai/cudf/pull/8956)) [@mythrocks](https://github.com/mythrocks)
- Implement timestamp ceil ([#8942](https://github.com/rapidsai/cudf/pull/8942)) [@shaneding](https://github.com/shaneding)
- Add nested column selection to parquet reader ([#8933](https://github.com/rapidsai/cudf/pull/8933)) [@devavret](https://github.com/devavret)
- Expose conditional join size calculation ([#8928](https://github.com/rapidsai/cudf/pull/8928)) [@vyasr](https://github.com/vyasr)
- Support Nulls in Timeseries Generator ([#8925](https://github.com/rapidsai/cudf/pull/8925)) [@isVoid](https://github.com/isVoid)
- Avoid index equality check in `_CPackedColumns.from_py_table()` ([#8917](https://github.com/rapidsai/cudf/pull/8917)) [@charlesbluca](https://github.com/charlesbluca)
- Add dot product binary op ([#8909](https://github.com/rapidsai/cudf/pull/8909)) [@charlesbluca](https://github.com/charlesbluca)
- Expose `days_in_month` function in libcudf and add python bindings ([#8892](https://github.com/rapidsai/cudf/pull/8892)) [@isVoid](https://github.com/isVoid)
- Series string repeat ([#8882](https://github.com/rapidsai/cudf/pull/8882)) [@sarahyurick](https://github.com/sarahyurick)
- Python binding for quarters ([#8862](https://github.com/rapidsai/cudf/pull/8862)) [@shaneding](https://github.com/shaneding)
- Expand CSV and JSON reader APIs to accept `dtypes` as a vector or map of `data_type` objects ([#8856](https://github.com/rapidsai/cudf/pull/8856)) [@vuule](https://github.com/vuule)
- Add Java bindings for AST transform ([#8846](https://github.com/rapidsai/cudf/pull/8846)) [@jlowe](https://github.com/jlowe)
- Series datetime is_month_start ([#8844](https://github.com/rapidsai/cudf/pull/8844)) [@sarahyurick](https://github.com/sarahyurick)
- Support bracket syntax for cudf::strings::replace_with_backrefs group index values ([#8841](https://github.com/rapidsai/cudf/pull/8841)) [@davidwendt](https://github.com/davidwendt)
- Support `VARIANCE` and `STD` aggregation in rolling op ([#8809](https://github.com/rapidsai/cudf/pull/8809)) [@isVoid](https://github.com/isVoid)
- Add quarters to libcudf datetime ([#8779](https://github.com/rapidsai/cudf/pull/8779)) [@shaneding](https://github.com/shaneding)
- Linear Interpolation of `nan`s via `cupy` ([#8767](https://github.com/rapidsai/cudf/pull/8767)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Enable compiled binary ops in libcudf, python and java ([#8741](https://github.com/rapidsai/cudf/pull/8741)) [@karthikeyann](https://github.com/karthikeyann)
- Make groupby transform-like op order match original data order ([#8720](https://github.com/rapidsai/cudf/pull/8720)) [@isVoid](https://github.com/isVoid)
- multibyte_split ([#8702](https://github.com/rapidsai/cudf/pull/8702)) [@cwharris](https://github.com/cwharris)
- Implement JNI for `strings:repeat_strings` that repeats each string separately by different numbers of times ([#8572](https://github.com/rapidsai/cudf/pull/8572)) [@ttnghia](https://github.com/ttnghia)
## 🛠️ Improvements
- Pin max `dask` and `distributed` versions to `2021.09.1` ([#9286](https://github.com/rapidsai/cudf/pull/9286)) [@galipremsagar](https://github.com/galipremsagar)
- Optimized fsspec data transfer for remote file-systems ([#9265](https://github.com/rapidsai/cudf/pull/9265)) [@rjzamora](https://github.com/rjzamora)
- Skip dask-cudf tests on arm64 ([#9252](https://github.com/rapidsai/cudf/pull/9252)) [@Ethyling](https://github.com/Ethyling)
- Use nvcomp's snappy compressor in ORC writer ([#9242](https://github.com/rapidsai/cudf/pull/9242)) [@devavret](https://github.com/devavret)
- Only run imports tests on x86_64 ([#9241](https://github.com/rapidsai/cudf/pull/9241)) [@Ethyling](https://github.com/Ethyling)
- Remove unnecessary call to device_uvector::release() ([#9237](https://github.com/rapidsai/cudf/pull/9237)) [@harrism](https://github.com/harrism)
- Use nvcomp's snappy decompression in ORC reader ([#9235](https://github.com/rapidsai/cudf/pull/9235)) [@devavret](https://github.com/devavret)
- Add grouped_rolling test with STRUCT groupby keys. ([#9228](https://github.com/rapidsai/cudf/pull/9228)) [@mythrocks](https://github.com/mythrocks)
- Optimize `cudf.concat` for `axis=0` ([#9222](https://github.com/rapidsai/cudf/pull/9222)) [@galipremsagar](https://github.com/galipremsagar)
- Fix some libcudf calls not passing the stream parameter ([#9220](https://github.com/rapidsai/cudf/pull/9220)) [@davidwendt](https://github.com/davidwendt)
- Add min and max bounds for random dataframe generator numeric types ([#9211](https://github.com/rapidsai/cudf/pull/9211)) [@galipremsagar](https://github.com/galipremsagar)
- Improve performance of expression evaluation ([#9210](https://github.com/rapidsai/cudf/pull/9210)) [@vyasr](https://github.com/vyasr)
- Misc optimizations in `cudf` ([#9203](https://github.com/rapidsai/cudf/pull/9203)) [@galipremsagar](https://github.com/galipremsagar)
- Remove Cython APIs for table view generation ([#9199](https://github.com/rapidsai/cudf/pull/9199)) [@vyasr](https://github.com/vyasr)
- Add JNI support for drop_list_duplicates ([#9198](https://github.com/rapidsai/cudf/pull/9198)) [@revans2](https://github.com/revans2)
- Update pandas versions in conda recipes and requirements.txt files ([#9197](https://github.com/rapidsai/cudf/pull/9197)) [@galipremsagar](https://github.com/galipremsagar)
- Minor C++17 cleanup of `groupby.cu`: structured bindings, more concise lambda, etc ([#9193](https://github.com/rapidsai/cudf/pull/9193)) [@codereport](https://github.com/codereport)
- Explicit about bitwidth difference between cudf boolean and arrow boolean ([#9192](https://github.com/rapidsai/cudf/pull/9192)) [@isVoid](https://github.com/isVoid)
- Remove _source_index from MultiIndex ([#9191](https://github.com/rapidsai/cudf/pull/9191)) [@vyasr](https://github.com/vyasr)
- Fix typo in the name of `cudf-testing-targets.cmake` ([#9190](https://github.com/rapidsai/cudf/pull/9190)) [@trxcllnt](https://github.com/trxcllnt)
- Add support for single-digits in cudf::to_timestamps ([#9173](https://github.com/rapidsai/cudf/pull/9173)) [@davidwendt](https://github.com/davidwendt)
- Fix cufilejni build include path ([#9168](https://github.com/rapidsai/cudf/pull/9168)) [@pxLi](https://github.com/pxLi)
- `dask_cudf` dispatch registering cleanup ([#9160](https://github.com/rapidsai/cudf/pull/9160)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unneeded stream/mr from a cudf::make_strings_column ([#9148](https://github.com/rapidsai/cudf/pull/9148)) [@davidwendt](https://github.com/davidwendt)
- Upgrade `pandas` version in `cudf` ([#9147](https://github.com/rapidsai/cudf/pull/9147)) [@galipremsagar](https://github.com/galipremsagar)
- make data chunk reader return unique_ptr ([#9129](https://github.com/rapidsai/cudf/pull/9129)) [@cwharris](https://github.com/cwharris)
- Add backend for `percentile_lookup` dispatch ([#9118](https://github.com/rapidsai/cudf/pull/9118)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor implementation of column setitem ([#9110](https://github.com/rapidsai/cudf/pull/9110)) [@vyasr](https://github.com/vyasr)
- Fix compile warnings found using nvcc 11.4 ([#9101](https://github.com/rapidsai/cudf/pull/9101)) [@davidwendt](https://github.com/davidwendt)
- Update to UCX-Py 0.22 ([#9099](https://github.com/rapidsai/cudf/pull/9099)) [@pentschev](https://github.com/pentschev)
- Simplify read_avro by removing unnecessary writer/impl classes ([#9090](https://github.com/rapidsai/cudf/pull/9090)) [@cwharris](https://github.com/cwharris)
- Allowing %f in format to return nanoseconds ([#9081](https://github.com/rapidsai/cudf/pull/9081)) [@marlenezw](https://github.com/marlenezw)
- Java bindings for cudf::hash_join ([#9080](https://github.com/rapidsai/cudf/pull/9080)) [@jlowe](https://github.com/jlowe)
- Remove stale code in `ColumnBase._fill` ([#9078](https://github.com/rapidsai/cudf/pull/9078)) [@isVoid](https://github.com/isVoid)
- Add support for `get_group` in GroupBy ([#9070](https://github.com/rapidsai/cudf/pull/9070)) [@galipremsagar](https://github.com/galipremsagar)
- Remove remaining "support" methods from DataFrame ([#9068](https://github.com/rapidsai/cudf/pull/9068)) [@vyasr](https://github.com/vyasr)
- Update JNI java CSV APIs to not use deprecated API ([#9066](https://github.com/rapidsai/cudf/pull/9066)) [@revans2](https://github.com/revans2)
- Added method to remove null_masks if the column has no nulls ([#9061](https://github.com/rapidsai/cudf/pull/9061)) [@razajafri](https://github.com/razajafri)
- Consolidate Several Series and Dataframe Methods ([#9059](https://github.com/rapidsai/cudf/pull/9059)) [@isVoid](https://github.com/isVoid)
- Remove usage of string based `set_dtypes` for `csv` & `json` readers ([#9049](https://github.com/rapidsai/cudf/pull/9049)) [@galipremsagar](https://github.com/galipremsagar)
- Remove some debug print statements from gtests ([#9048](https://github.com/rapidsai/cudf/pull/9048)) [@davidwendt](https://github.com/davidwendt)
- Support additional format specifiers in from_timestamps ([#9047](https://github.com/rapidsai/cudf/pull/9047)) [@davidwendt](https://github.com/davidwendt)
- Expose expression base class publicly and simplify public AST API ([#9045](https://github.com/rapidsai/cudf/pull/9045)) [@vyasr](https://github.com/vyasr)
- move filepath and mmap logic out of json/csv up to functions.cpp ([#9040](https://github.com/rapidsai/cudf/pull/9040)) [@cwharris](https://github.com/cwharris)
- Refactor Index hierarchy ([#9039](https://github.com/rapidsai/cudf/pull/9039)) [@vyasr](https://github.com/vyasr)
- cudf now leverages rapids-cmake to reduce CMake boilerplate ([#9030](https://github.com/rapidsai/cudf/pull/9030)) [@robertmaynard](https://github.com/robertmaynard)
- Add support for `STRUCT` input to `groupby` ([#9024](https://github.com/rapidsai/cudf/pull/9024)) [@mythrocks](https://github.com/mythrocks)
- Refactor Frame scans ([#9021](https://github.com/rapidsai/cudf/pull/9021)) [@vyasr](https://github.com/vyasr)
- Remove duplicate `set_categories` code ([#9018](https://github.com/rapidsai/cudf/pull/9018)) [@isVoid](https://github.com/isVoid)
- Map support for ParquetWriter ([#9013](https://github.com/rapidsai/cudf/pull/9013)) [@razajafri](https://github.com/razajafri)
- Remove aliases of various api.types APIs from utils.dtypes. ([#9011](https://github.com/rapidsai/cudf/pull/9011)) [@vyasr](https://github.com/vyasr)
- Java bindings for conditional join output sizes ([#9002](https://github.com/rapidsai/cudf/pull/9002)) [@jlowe](https://github.com/jlowe)
- Remove _copy_construct factory ([#8999](https://github.com/rapidsai/cudf/pull/8999)) [@vyasr](https://github.com/vyasr)
- ENH Allow arbitrary CMake config options in build.sh ([#8996](https://github.com/rapidsai/cudf/pull/8996)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- A small optimization for JNI copy column view to column vector ([#8985](https://github.com/rapidsai/cudf/pull/8985)) [@revans2](https://github.com/revans2)
- Fix nvcc warnings in ORC writer ([#8975](https://github.com/rapidsai/cudf/pull/8975)) [@devavret](https://github.com/devavret)
- Support nested structs in rank and dense rank ([#8962](https://github.com/rapidsai/cudf/pull/8962)) [@rwlee](https://github.com/rwlee)
- Move compute_column API out of ast namespace ([#8957](https://github.com/rapidsai/cudf/pull/8957)) [@vyasr](https://github.com/vyasr)
- Series datetime is_year_end and is_year_start ([#8954](https://github.com/rapidsai/cudf/pull/8954)) [@marlenezw](https://github.com/marlenezw)
- Make Java AstNode public ([#8953](https://github.com/rapidsai/cudf/pull/8953)) [@jlowe](https://github.com/jlowe)
- Replace allocate with device_uvector for subword_tokenize internal tables ([#8952](https://github.com/rapidsai/cudf/pull/8952)) [@davidwendt](https://github.com/davidwendt)
- `cudf.dtype` function ([#8949](https://github.com/rapidsai/cudf/pull/8949)) [@shwina](https://github.com/shwina)
- Refactor Frame reductions ([#8944](https://github.com/rapidsai/cudf/pull/8944)) [@vyasr](https://github.com/vyasr)
- Add deprecation warning for `Series.set_mask` API ([#8943](https://github.com/rapidsai/cudf/pull/8943)) [@galipremsagar](https://github.com/galipremsagar)
- Move AST evaluator into a separate header ([#8930](https://github.com/rapidsai/cudf/pull/8930)) [@vyasr](https://github.com/vyasr)
- JNI Aggregation Type Changes ([#8919](https://github.com/rapidsai/cudf/pull/8919)) [@revans2](https://github.com/revans2)
- Move template parameter to function parameter in cudf::detail::left_semi_anti_join ([#8914](https://github.com/rapidsai/cudf/pull/8914)) [@davidwendt](https://github.com/davidwendt)
- Upgrade `arrow` & `pyarrow` to `5.0.0` ([#8908](https://github.com/rapidsai/cudf/pull/8908)) [@galipremsagar](https://github.com/galipremsagar)
- Add groupby_aggregation and groupby_scan_aggregation classes and force their usage. ([#8906](https://github.com/rapidsai/cudf/pull/8906)) [@nvdbaranec](https://github.com/nvdbaranec)
- Move `structs_column_tests.cu` to `.cpp`. ([#8902](https://github.com/rapidsai/cudf/pull/8902)) [@mythrocks](https://github.com/mythrocks)
- Add stream and memory-resource parameters to struct-scalar copy ctor ([#8901](https://github.com/rapidsai/cudf/pull/8901)) [@davidwendt](https://github.com/davidwendt)
- Combine linearizer and ast_plan ([#8900](https://github.com/rapidsai/cudf/pull/8900)) [@vyasr](https://github.com/vyasr)
- Add Java bindings for conditional join gather maps ([#8888](https://github.com/rapidsai/cudf/pull/8888)) [@jlowe](https://github.com/jlowe)
- Remove max version pin for `dask` & `distributed` on development branch ([#8881](https://github.com/rapidsai/cudf/pull/8881)) [@galipremsagar](https://github.com/galipremsagar)
- fix cufilejni build w/ c++17 ([#8877](https://github.com/rapidsai/cudf/pull/8877)) [@pxLi](https://github.com/pxLi)
- Add struct accessor to dask-cudf ([#8874](https://github.com/rapidsai/cudf/pull/8874)) [@NV-jpt](https://github.com/NV-jpt)
- Migrate dask-cudf CudfEngine to leverage ArrowDatasetEngine ([#8871](https://github.com/rapidsai/cudf/pull/8871)) [@rjzamora](https://github.com/rjzamora)
- Add JNI for extract_quarter, add_calendrical_months, and is_leap_year ([#8863](https://github.com/rapidsai/cudf/pull/8863)) [@revans2](https://github.com/revans2)
- Change cudf::scalar copy and move constructors to protected ([#8857](https://github.com/rapidsai/cudf/pull/8857)) [@davidwendt](https://github.com/davidwendt)
- Replace `is_same<>::value` with `is_same_v<>` ([#8852](https://github.com/rapidsai/cudf/pull/8852)) [@codereport](https://github.com/codereport)
- Add min `pytorch` version to `importorskip` in pytest ([#8851](https://github.com/rapidsai/cudf/pull/8851)) [@galipremsagar](https://github.com/galipremsagar)
- Java bindings for regex replace ([#8847](https://github.com/rapidsai/cudf/pull/8847)) [@jlowe](https://github.com/jlowe)
- Remove make strings children with null mask ([#8830](https://github.com/rapidsai/cudf/pull/8830)) [@davidwendt](https://github.com/davidwendt)
- Refactor conditional joins ([#8815](https://github.com/rapidsai/cudf/pull/8815)) [@vyasr](https://github.com/vyasr)
- Small cleanup (unused headers / commented code removals) ([#8799](https://github.com/rapidsai/cudf/pull/8799)) [@codereport](https://github.com/codereport)
- ENH Replace gpuci_conda_retry with gpuci_mamba_retry ([#8770](https://github.com/rapidsai/cudf/pull/8770)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Update cudf java bindings to 21.10.0-SNAPSHOT ([#8765](https://github.com/rapidsai/cudf/pull/8765)) [@pxLi](https://github.com/pxLi)
- Refactor and improve join benchmarks with nvbench ([#8734](https://github.com/rapidsai/cudf/pull/8734)) [@PointKernel](https://github.com/PointKernel)
- Refactor Python factories and remove usage of Table for libcudf output handling ([#8687](https://github.com/rapidsai/cudf/pull/8687)) [@vyasr](https://github.com/vyasr)
- Optimize URL Decoding ([#8622](https://github.com/rapidsai/cudf/pull/8622)) [@gaohao95](https://github.com/gaohao95)
- Parquet writer dictionary encoding refactor ([#8476](https://github.com/rapidsai/cudf/pull/8476)) [@devavret](https://github.com/devavret)
- Use nvcomp's snappy decompression in parquet reader ([#8252](https://github.com/rapidsai/cudf/pull/8252)) [@devavret](https://github.com/devavret)
- Use nvcomp's snappy compressor in parquet writer ([#8229](https://github.com/rapidsai/cudf/pull/8229)) [@devavret](https://github.com/devavret)
# cuDF 21.08.00 (4 Aug 2021)
## 🚨 Breaking Changes
- Fix a crash in pack() when being handed tables with no columns. ([#8697](https://github.com/rapidsai/cudf/pull/8697)) [@nvdbaranec](https://github.com/nvdbaranec)
- Remove unused cudf::strings::create_offsets ([#8663](https://github.com/rapidsai/cudf/pull/8663)) [@davidwendt](https://github.com/davidwendt)
- Add delimiter parameter to cudf::strings::capitalize() ([#8620](https://github.com/rapidsai/cudf/pull/8620)) [@davidwendt](https://github.com/davidwendt)
- Change default datetime index resolution to ns to match pandas ([#8611](https://github.com/rapidsai/cudf/pull/8611)) [@vyasr](https://github.com/vyasr)
- Add sequence_type parameter to cudf::strings::title function ([#8602](https://github.com/rapidsai/cudf/pull/8602)) [@davidwendt](https://github.com/davidwendt)
- Add `strings::repeat_strings` API that can repeat each string a different number of times ([#8561](https://github.com/rapidsai/cudf/pull/8561)) [@ttnghia](https://github.com/ttnghia)
- String-to-boolean conversion is different from Pandas ([#8549](https://github.com/rapidsai/cudf/pull/8549)) [@skirui-source](https://github.com/skirui-source)
- Add accurate hash join size functions ([#8453](https://github.com/rapidsai/cudf/pull/8453)) [@PointKernel](https://github.com/PointKernel)
- Expose a Decimal32Dtype in cuDF Python ([#8438](https://github.com/rapidsai/cudf/pull/8438)) [@skirui-source](https://github.com/skirui-source)
- Update dask make_meta changes to be compatible with dask upstream ([#8426](https://github.com/rapidsai/cudf/pull/8426)) [@galipremsagar](https://github.com/galipremsagar)
- Adapt `cudf::scalar` classes to changes in `rmm::device_scalar` ([#8411](https://github.com/rapidsai/cudf/pull/8411)) [@harrism](https://github.com/harrism)
- Remove special Index class from the general index class hierarchy ([#8309](https://github.com/rapidsai/cudf/pull/8309)) [@vyasr](https://github.com/vyasr)
- Add first-class dtype utilities ([#8308](https://github.com/rapidsai/cudf/pull/8308)) [@vyasr](https://github.com/vyasr)
- ORC - Support reading multiple orc files/buffers in a single operation ([#8142](https://github.com/rapidsai/cudf/pull/8142)) [@jdye64](https://github.com/jdye64)
- Upgrade arrow to 4.0.1 ([#7495](https://github.com/rapidsai/cudf/pull/7495)) [@galipremsagar](https://github.com/galipremsagar)
## 🐛 Bug Fixes
- Fix `contains` check in string column ([#8834](https://github.com/rapidsai/cudf/pull/8834)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unused variable from `row_bit_count_test`. ([#8829](https://github.com/rapidsai/cudf/pull/8829)) [@mythrocks](https://github.com/mythrocks)
- Fixes issue with null struct columns in ORC reader ([#8819](https://github.com/rapidsai/cudf/pull/8819)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Set CMake vars for python/parquet support in libarrow builds ([#8808](https://github.com/rapidsai/cudf/pull/8808)) [@vyasr](https://github.com/vyasr)
- Handle empty child columns in row_bit_count() ([#8791](https://github.com/rapidsai/cudf/pull/8791)) [@mythrocks](https://github.com/mythrocks)
- Revert "Remove cudf unneeded build time requirement of the cuda driver" ([#8784](https://github.com/rapidsai/cudf/pull/8784)) [@robertmaynard](https://github.com/robertmaynard)
- Fix isort error in utils.pyx ([#8771](https://github.com/rapidsai/cudf/pull/8771)) [@charlesbluca](https://github.com/charlesbluca)
- Handle sliced struct/list columns properly in concatenate() bounds checking. ([#8760](https://github.com/rapidsai/cudf/pull/8760)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix issues with `_CPackedColumns.serialize()` handling of host and device data ([#8759](https://github.com/rapidsai/cudf/pull/8759)) [@charlesbluca](https://github.com/charlesbluca)
- Fix issues with `MultiIndex` in `dropna`, `stack` & `reset_index` ([#8753](https://github.com/rapidsai/cudf/pull/8753)) [@galipremsagar](https://github.com/galipremsagar)
- Write pandas extension types to parquet file metadata ([#8749](https://github.com/rapidsai/cudf/pull/8749)) [@devavret](https://github.com/devavret)
- Fix `where` to handle `DataFrame` & `Series` input combination ([#8747](https://github.com/rapidsai/cudf/pull/8747)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `replace` to handle null values correctly ([#8744](https://github.com/rapidsai/cudf/pull/8744)) [@galipremsagar](https://github.com/galipremsagar)
- Handle sliced structs properly in pack/contiguous_split. ([#8739](https://github.com/rapidsai/cudf/pull/8739)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix issue in slice() where columns with a positive offset were computing null counts incorrectly. ([#8738](https://github.com/rapidsai/cudf/pull/8738)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix `cudf.Series` constructor to handle list of sequences ([#8735](https://github.com/rapidsai/cudf/pull/8735)) [@galipremsagar](https://github.com/galipremsagar)
- Fix min/max sorted groupby aggregation on string column with nulls (argmin, argmax sentinel value missing on nulls) ([#8731](https://github.com/rapidsai/cudf/pull/8731)) [@karthikeyann](https://github.com/karthikeyann)
- Fix orc reader assert on create data_type in debug ([#8706](https://github.com/rapidsai/cudf/pull/8706)) [@davidwendt](https://github.com/davidwendt)
- Fix min/max inclusive cudf::scan for strings column ([#8705](https://github.com/rapidsai/cudf/pull/8705)) [@davidwendt](https://github.com/davidwendt)
- JNI: Fix driver version assertion logic in testGetCudaRuntimeInfo ([#8701](https://github.com/rapidsai/cudf/pull/8701)) [@sperlingxx](https://github.com/sperlingxx)
- Adding fix for skip_rows and crash in orc reader ([#8700](https://github.com/rapidsai/cudf/pull/8700)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Bug fix: `replace_nulls_policy` functor not returning correct indices for gathermap ([#8699](https://github.com/rapidsai/cudf/pull/8699)) [@isVoid](https://github.com/isVoid)
- Fix a crash in pack() when being handed tables with no columns. ([#8697](https://github.com/rapidsai/cudf/pull/8697)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add post-processing steps to `dask_cudf.groupby.CudfSeriesGroupby.aggregate` ([#8694](https://github.com/rapidsai/cudf/pull/8694)) [@charlesbluca](https://github.com/charlesbluca)
- JNI build no longer looks for Arrow in conda environment ([#8686](https://github.com/rapidsai/cudf/pull/8686)) [@jlowe](https://github.com/jlowe)
- Handle arbitrarily different data in null list column rows when checking for equivalency. ([#8666](https://github.com/rapidsai/cudf/pull/8666)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add ConfigureNVBench to avoid concurrent main() entry points ([#8662](https://github.com/rapidsai/cudf/pull/8662)) [@PointKernel](https://github.com/PointKernel)
- Pin `*arrow` to use `*cuda` in `run` ([#8651](https://github.com/rapidsai/cudf/pull/8651)) [@jakirkham](https://github.com/jakirkham)
- Add proper support for tolerances in testing methods. ([#8649](https://github.com/rapidsai/cudf/pull/8649)) [@vyasr](https://github.com/vyasr)
- Support multi-char case conversion in capitalize function ([#8647](https://github.com/rapidsai/cudf/pull/8647)) [@davidwendt](https://github.com/davidwendt)
- Fix repeated mangled names in read_csv with duplicate column names ([#8645](https://github.com/rapidsai/cudf/pull/8645)) [@karthikeyann](https://github.com/karthikeyann)
- Temporarily disable libcudf example build tests ([#8642](https://github.com/rapidsai/cudf/pull/8642)) [@isVoid](https://github.com/isVoid)
- Use conda-sourced cudf artifacts for libcudf example in CI ([#8638](https://github.com/rapidsai/cudf/pull/8638)) [@isVoid](https://github.com/isVoid)
- Ensure dev environment uses Arrow GPU packages ([#8637](https://github.com/rapidsai/cudf/pull/8637)) [@charlesbluca](https://github.com/charlesbluca)
- Fix bug that columns only initialized once when specified `columns` and `index` in dataframe ctor ([#8628](https://github.com/rapidsai/cudf/pull/8628)) [@isVoid](https://github.com/isVoid)
- Propagate **kwargs through to as_*_column methods ([#8618](https://github.com/rapidsai/cudf/pull/8618)) [@shwina](https://github.com/shwina)
- Fix orc_reader_benchmark.cpp compile error ([#8609](https://github.com/rapidsai/cudf/pull/8609)) [@davidwendt](https://github.com/davidwendt)
- Fix missed renumbering of Aggregation values ([#8600](https://github.com/rapidsai/cudf/pull/8600)) [@revans2](https://github.com/revans2)
- Update cmake to 3.20.5 in the Java Docker image ([#8593](https://github.com/rapidsai/cudf/pull/8593)) [@NvTimLiu](https://github.com/NvTimLiu)
- Fix bug in replace_with_backrefs when group has greedy quantifier ([#8575](https://github.com/rapidsai/cudf/pull/8575)) [@davidwendt](https://github.com/davidwendt)
- Apply metadata to keys before returning in `Frame._encode` ([#8560](https://github.com/rapidsai/cudf/pull/8560)) [@charlesbluca](https://github.com/charlesbluca)
- Fix for strings containing special JSON characters in get_json_object(). ([#8556](https://github.com/rapidsai/cudf/pull/8556)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix debug compile error in gather_struct_tests.cpp ([#8554](https://github.com/rapidsai/cudf/pull/8554)) [@davidwendt](https://github.com/davidwendt)
- String-to-boolean conversion is different from Pandas ([#8549](https://github.com/rapidsai/cudf/pull/8549)) [@skirui-source](https://github.com/skirui-source)
- Fix `__repr__` output with `display.max_rows` is `None` ([#8547](https://github.com/rapidsai/cudf/pull/8547)) [@galipremsagar](https://github.com/galipremsagar)
- Fix size passed to column constructors in _with_type_metadata ([#8539](https://github.com/rapidsai/cudf/pull/8539)) [@shwina](https://github.com/shwina)
- Properly retrieve last column when `-1` is specified for column index ([#8529](https://github.com/rapidsai/cudf/pull/8529)) [@isVoid](https://github.com/isVoid)
- Fix importing `apply` from `dask` ([#8517](https://github.com/rapidsai/cudf/pull/8517)) [@galipremsagar](https://github.com/galipremsagar)
- Fix offset of the string dictionary length stream ([#8515](https://github.com/rapidsai/cudf/pull/8515)) [@vuule](https://github.com/vuule)
- Fix double counting of selected columns in CSV reader ([#8508](https://github.com/rapidsai/cudf/pull/8508)) [@ochan1](https://github.com/ochan1)
- Incorrect map size in scatter_to_gather corrupts struct columns ([#8507](https://github.com/rapidsai/cudf/pull/8507)) [@gerashegalov](https://github.com/gerashegalov)
- replace_nulls properly propagates memory resource to gather calls ([#8500](https://github.com/rapidsai/cudf/pull/8500)) [@robertmaynard](https://github.com/robertmaynard)
- Disallow groupby aggs for `StructColumns` ([#8499](https://github.com/rapidsai/cudf/pull/8499)) [@charlesbluca](https://github.com/charlesbluca)
- Fixes out-of-bounds access for small files in unzip ([#8498](https://github.com/rapidsai/cudf/pull/8498)) [@elstehle](https://github.com/elstehle)
- Adding support for writing empty dataframe ([#8490](https://github.com/rapidsai/cudf/pull/8490)) [@shaneding](https://github.com/shaneding)
- Fix exclusive scan when including nulls and improve testing ([#8478](https://github.com/rapidsai/cudf/pull/8478)) [@harrism](https://github.com/harrism)
- Add workaround for crash in libcudf debug build using output_indexalator in thrust::lower_bound ([#8432](https://github.com/rapidsai/cudf/pull/8432)) [@davidwendt](https://github.com/davidwendt)
- Install only the same Thrust files that Thrust itself installs ([#8420](https://github.com/rapidsai/cudf/pull/8420)) [@robertmaynard](https://github.com/robertmaynard)
- Add nightly version for ucx-py in ci script ([#8419](https://github.com/rapidsai/cudf/pull/8419)) [@galipremsagar](https://github.com/galipremsagar)
- Fix null_equality config of rolling_collect_set ([#8415](https://github.com/rapidsai/cudf/pull/8415)) [@sperlingxx](https://github.com/sperlingxx)
- CollectSetAggregation: implement RollingAggregation interface ([#8406](https://github.com/rapidsai/cudf/pull/8406)) [@sperlingxx](https://github.com/sperlingxx)
- Handle pre-sliced nested columns in contiguous_split. ([#8391](https://github.com/rapidsai/cudf/pull/8391)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix bitmask_tests.cpp host accessing device memory ([#8370](https://github.com/rapidsai/cudf/pull/8370)) [@davidwendt](https://github.com/davidwendt)
- Fix concurrent_unordered_map to prevent accessing padding bits in pair_type ([#8348](https://github.com/rapidsai/cudf/pull/8348)) [@davidwendt](https://github.com/davidwendt)
- BUG FIX: Raise appropriate strings error when concatenating strings column ([#8290](https://github.com/rapidsai/cudf/pull/8290)) [@skirui-source](https://github.com/skirui-source)
- Make gpuCI and pre-commit style configurations consistent ([#8215](https://github.com/rapidsai/cudf/pull/8215)) [@charlesbluca](https://github.com/charlesbluca)
- Add collect list to dask-cudf groupby aggregations ([#8045](https://github.com/rapidsai/cudf/pull/8045)) [@charlesbluca](https://github.com/charlesbluca)
## 📖 Documentation
- Update Python UDFs notebook ([#8810](https://github.com/rapidsai/cudf/pull/8810)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix dask.dataframe API docs links after reorg ([#8772](https://github.com/rapidsai/cudf/pull/8772)) [@jsignell](https://github.com/jsignell)
- Fix instructions for running cuDF/dask-cuDF tests in CONTRIBUTING.md ([#8724](https://github.com/rapidsai/cudf/pull/8724)) [@shwina](https://github.com/shwina)
- Translate Markdown documentation to rST and remove recommonmark ([#8698](https://github.com/rapidsai/cudf/pull/8698)) [@vyasr](https://github.com/vyasr)
- Fixed spelling mistakes in libcudf documentation ([#8664](https://github.com/rapidsai/cudf/pull/8664)) [@karthikeyann](https://github.com/karthikeyann)
- Custom Sphinx Extension: `PandasCompat` ([#8643](https://github.com/rapidsai/cudf/pull/8643)) [@isVoid](https://github.com/isVoid)
- Fix README.md ([#8535](https://github.com/rapidsai/cudf/pull/8535)) [@ajschmidt8](https://github.com/ajschmidt8)
- Change namespace contains_nulls to struct ([#8523](https://github.com/rapidsai/cudf/pull/8523)) [@davidwendt](https://github.com/davidwendt)
- Add info about NVTX ranges to dev guide ([#8461](https://github.com/rapidsai/cudf/pull/8461)) [@jrhemstad](https://github.com/jrhemstad)
- Fixed documentation bug in groupby agg method ([#8325](https://github.com/rapidsai/cudf/pull/8325)) [@ahmet-uyar](https://github.com/ahmet-uyar)
## 🚀 New Features
- Fix concatenating structs ([#8811](https://github.com/rapidsai/cudf/pull/8811)) [@shaneding](https://github.com/shaneding)
- Implement JNI for groupby aggregations `M2` and `MERGE_M2` ([#8763](https://github.com/rapidsai/cudf/pull/8763)) [@ttnghia](https://github.com/ttnghia)
- Bump `isort` to `5.6.4` and remove `isort` overrides made for 5.0.7 ([#8755](https://github.com/rapidsai/cudf/pull/8755)) [@charlesbluca](https://github.com/charlesbluca)
- Implement `__setitem__` for `StructColumn` ([#8737](https://github.com/rapidsai/cudf/pull/8737)) [@shaneding](https://github.com/shaneding)
- Add `is_leap_year` to `DateTimeProperties` and `DatetimeIndex` ([#8736](https://github.com/rapidsai/cudf/pull/8736)) [@isVoid](https://github.com/isVoid)
- Add `struct.explode()` method ([#8729](https://github.com/rapidsai/cudf/pull/8729)) [@shwina](https://github.com/shwina)
- Add `DataFrame.to_struct()` method to convert a DataFrame to a struct Series ([#8728](https://github.com/rapidsai/cudf/pull/8728)) [@shwina](https://github.com/shwina)
- Add support for list type in ORC writer ([#8723](https://github.com/rapidsai/cudf/pull/8723)) [@vuule](https://github.com/vuule)
- Fix slicing from struct columns and accessing struct columns ([#8719](https://github.com/rapidsai/cudf/pull/8719)) [@shaneding](https://github.com/shaneding)
- Add `datetime::is_leap_year` ([#8711](https://github.com/rapidsai/cudf/pull/8711)) [@isVoid](https://github.com/isVoid)
- Accessing struct columns from `dask_cudf` ([#8675](https://github.com/rapidsai/cudf/pull/8675)) [@shaneding](https://github.com/shaneding)
- Added pct_change to Series ([#8650](https://github.com/rapidsai/cudf/pull/8650)) [@TravisHester](https://github.com/TravisHester)
- Add strings support to cudf::shift function ([#8648](https://github.com/rapidsai/cudf/pull/8648)) [@davidwendt](https://github.com/davidwendt)
- Support Scatter `struct_scalar` ([#8630](https://github.com/rapidsai/cudf/pull/8630)) [@isVoid](https://github.com/isVoid)
- Struct scalar from host dictionary ([#8629](https://github.com/rapidsai/cudf/pull/8629)) [@shaneding](https://github.com/shaneding)
- Add dayofyear and day_of_year to Series, DatetimeColumn, and DatetimeIndex ([#8626](https://github.com/rapidsai/cudf/pull/8626)) [@beckernick](https://github.com/beckernick)
- JNI support for capitalize ([#8624](https://github.com/rapidsai/cudf/pull/8624)) [@firestarman](https://github.com/firestarman)
- Add delimiter parameter to cudf::strings::capitalize() ([#8620](https://github.com/rapidsai/cudf/pull/8620)) [@davidwendt](https://github.com/davidwendt)
- Add NVBench in CMake ([#8619](https://github.com/rapidsai/cudf/pull/8619)) [@PointKernel](https://github.com/PointKernel)
- Change default datetime index resolution to ns to match pandas ([#8611](https://github.com/rapidsai/cudf/pull/8611)) [@vyasr](https://github.com/vyasr)
- ListColumn `__setitem__` ([#8606](https://github.com/rapidsai/cudf/pull/8606)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Implement groupby aggregations `M2` and `MERGE_M2` ([#8605](https://github.com/rapidsai/cudf/pull/8605)) [@ttnghia](https://github.com/ttnghia)
- Add sequence_type parameter to cudf::strings::title function ([#8602](https://github.com/rapidsai/cudf/pull/8602)) [@davidwendt](https://github.com/davidwendt)
- Adding support for list and struct type in ORC Reader ([#8599](https://github.com/rapidsai/cudf/pull/8599)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Benchmark for `strings::repeat_strings` APIs ([#8589](https://github.com/rapidsai/cudf/pull/8589)) [@ttnghia](https://github.com/ttnghia)
- Nested scalar support for copy if else ([#8588](https://github.com/rapidsai/cudf/pull/8588)) [@gerashegalov](https://github.com/gerashegalov)
- User specified decimal columns to float64 ([#8587](https://github.com/rapidsai/cudf/pull/8587)) [@jdye64](https://github.com/jdye64)
- Add `get_element` for struct column ([#8578](https://github.com/rapidsai/cudf/pull/8578)) [@isVoid](https://github.com/isVoid)
- Python changes for adding `__getitem__` for `struct` ([#8577](https://github.com/rapidsai/cudf/pull/8577)) [@shaneding](https://github.com/shaneding)
- Add `strings::repeat_strings` API that can repeat each string a different number of times ([#8561](https://github.com/rapidsai/cudf/pull/8561)) [@ttnghia](https://github.com/ttnghia)
- Refactor `tests/iterator_utilities.hpp` functions ([#8540](https://github.com/rapidsai/cudf/pull/8540)) [@ttnghia](https://github.com/ttnghia)
- Support MERGE_LISTS and MERGE_SETS in Java package ([#8516](https://github.com/rapidsai/cudf/pull/8516)) [@sperlingxx](https://github.com/sperlingxx)
- Decimal support csv reader ([#8511](https://github.com/rapidsai/cudf/pull/8511)) [@elstehle](https://github.com/elstehle)
- Add column type tests ([#8505](https://github.com/rapidsai/cudf/pull/8505)) [@isVoid](https://github.com/isVoid)
- Warn when downscaling decimal columns ([#8492](https://github.com/rapidsai/cudf/pull/8492)) [@ChrisJar](https://github.com/ChrisJar)
- Add JNI for `strings::repeat_strings` ([#8491](https://github.com/rapidsai/cudf/pull/8491)) [@ttnghia](https://github.com/ttnghia)
- Add `Index.get_loc` for Numerical, String Index support ([#8489](https://github.com/rapidsai/cudf/pull/8489)) [@isVoid](https://github.com/isVoid)
- Expose half_up rounding in cuDF ([#8477](https://github.com/rapidsai/cudf/pull/8477)) [@shwina](https://github.com/shwina)
- Java APIs to fetch CUDA runtime info ([#8465](https://github.com/rapidsai/cudf/pull/8465)) [@sperlingxx](https://github.com/sperlingxx)
- Add `str.edit_distance_matrix` ([#8463](https://github.com/rapidsai/cudf/pull/8463)) [@isVoid](https://github.com/isVoid)
- Support constructing `cudf.Scalar` objects from host side lists ([#8459](https://github.com/rapidsai/cudf/pull/8459)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add accurate hash join size functions ([#8453](https://github.com/rapidsai/cudf/pull/8453)) [@PointKernel](https://github.com/PointKernel)
- Add cudf::strings::integer_to_hex convert API ([#8450](https://github.com/rapidsai/cudf/pull/8450)) [@davidwendt](https://github.com/davidwendt)
- Create objects from iterables that contain cudf.NA ([#8442](https://github.com/rapidsai/cudf/pull/8442)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- JNI bindings for sort_lists ([#8439](https://github.com/rapidsai/cudf/pull/8439)) [@sperlingxx](https://github.com/sperlingxx)
- Expose a Decimal32Dtype in cuDF Python ([#8438](https://github.com/rapidsai/cudf/pull/8438)) [@skirui-source](https://github.com/skirui-source)
- Replace `all_null()` and `all_valid()` by `iterator_all_nulls()` and `iterator_no_null()` in tests ([#8437](https://github.com/rapidsai/cudf/pull/8437)) [@ttnghia](https://github.com/ttnghia)
- Implement groupby `MERGE_LISTS` and `MERGE_SETS` aggregates ([#8436](https://github.com/rapidsai/cudf/pull/8436)) [@ttnghia](https://github.com/ttnghia)
- Add public libcudf match_dictionaries API ([#8429](https://github.com/rapidsai/cudf/pull/8429)) [@davidwendt](https://github.com/davidwendt)
- Add move constructors for `string_scalar` and `struct_scalar` ([#8428](https://github.com/rapidsai/cudf/pull/8428)) [@ttnghia](https://github.com/ttnghia)
- Implement `strings::repeat_strings` ([#8423](https://github.com/rapidsai/cudf/pull/8423)) [@ttnghia](https://github.com/ttnghia)
- STRUCT column support for cudf::merge. ([#8422](https://github.com/rapidsai/cudf/pull/8422)) [@nvdbaranec](https://github.com/nvdbaranec)
- Implement reverse in libcudf ([#8410](https://github.com/rapidsai/cudf/pull/8410)) [@shaneding](https://github.com/shaneding)
- Support multiple input files/buffers for read_json ([#8403](https://github.com/rapidsai/cudf/pull/8403)) [@jdye64](https://github.com/jdye64)
- Improve test coverage for struct search ([#8396](https://github.com/rapidsai/cudf/pull/8396)) [@ttnghia](https://github.com/ttnghia)
- Add `groupby.fillna` ([#8362](https://github.com/rapidsai/cudf/pull/8362)) [@isVoid](https://github.com/isVoid)
- Enable AST-based joining ([#8214](https://github.com/rapidsai/cudf/pull/8214)) [@vyasr](https://github.com/vyasr)
- Generalized null support in user defined functions ([#8213](https://github.com/rapidsai/cudf/pull/8213)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add compiled binary operation ([#8192](https://github.com/rapidsai/cudf/pull/8192)) [@karthikeyann](https://github.com/karthikeyann)
- Implement `.describe()` for `DataFrameGroupBy` ([#8179](https://github.com/rapidsai/cudf/pull/8179)) [@skirui-source](https://github.com/skirui-source)
- ORC - Support reading multiple orc files/buffers in a single operation ([#8142](https://github.com/rapidsai/cudf/pull/8142)) [@jdye64](https://github.com/jdye64)
- Add Python bindings for `lists::concatenate_list_elements` and expose them as `.list.concat()` ([#8006](https://github.com/rapidsai/cudf/pull/8006)) [@shwina](https://github.com/shwina)
- Use Arrow URI FileSystem backed instance to retrieve remote files ([#7709](https://github.com/rapidsai/cudf/pull/7709)) [@jdye64](https://github.com/jdye64)
- Example to build custom application and link to libcudf ([#7671](https://github.com/rapidsai/cudf/pull/7671)) [@isVoid](https://github.com/isVoid)
- Upgrade arrow to 4.0.1 ([#7495](https://github.com/rapidsai/cudf/pull/7495)) [@galipremsagar](https://github.com/galipremsagar)
## 🛠️ Improvements
- Provide a better error message when `CUDA::cuda_driver` not found ([#8794](https://github.com/rapidsai/cudf/pull/8794)) [@robertmaynard](https://github.com/robertmaynard)
- Remove anonymous namespace from null_mask.cuh ([#8786](https://github.com/rapidsai/cudf/pull/8786)) [@nvdbaranec](https://github.com/nvdbaranec)
- Allow cudf to be built without libcuda.so existing ([#8751](https://github.com/rapidsai/cudf/pull/8751)) [@robertmaynard](https://github.com/robertmaynard)
- Pin `mimesis` to `<4.1` ([#8745](https://github.com/rapidsai/cudf/pull/8745)) [@galipremsagar](https://github.com/galipremsagar)
- Update `conda` environment name for CI ([#8692](https://github.com/rapidsai/cudf/pull/8692)) [@ajschmidt8](https://github.com/ajschmidt8)
- Remove flatbuffers dependency ([#8671](https://github.com/rapidsai/cudf/pull/8671)) [@Ethyling](https://github.com/Ethyling)
- Add options to build Arrow with Python and Parquet support ([#8670](https://github.com/rapidsai/cudf/pull/8670)) [@trxcllnt](https://github.com/trxcllnt)
- Remove unused cudf::strings::create_offsets ([#8663](https://github.com/rapidsai/cudf/pull/8663)) [@davidwendt](https://github.com/davidwendt)
- Update GDS lib version to 1.0.0 ([#8654](https://github.com/rapidsai/cudf/pull/8654)) [@pxLi](https://github.com/pxLi)
- Support for groupby/scan rank and dense_rank aggregations ([#8652](https://github.com/rapidsai/cudf/pull/8652)) [@rwlee](https://github.com/rwlee)
- Fix usage of deprecated arrow ipc API ([#8632](https://github.com/rapidsai/cudf/pull/8632)) [@revans2](https://github.com/revans2)
- Use absolute imports in `cudf` ([#8631](https://github.com/rapidsai/cudf/pull/8631)) [@galipremsagar](https://github.com/galipremsagar)
- ENH Add Java CI build script ([#8627](https://github.com/rapidsai/cudf/pull/8627)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Add DeprecationWarning to `ser.str.subword_tokenize` ([#8603](https://github.com/rapidsai/cudf/pull/8603)) [@VibhuJawa](https://github.com/VibhuJawa)
- Rewrite binary operations for improved performance and additional type support ([#8598](https://github.com/rapidsai/cudf/pull/8598)) [@vyasr](https://github.com/vyasr)
- Fix `mypy` errors surfacing because of `numpy-1.21.0` ([#8595](https://github.com/rapidsai/cudf/pull/8595)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unneeded includes from cudf::string_view headers ([#8594](https://github.com/rapidsai/cudf/pull/8594)) [@davidwendt](https://github.com/davidwendt)
- Use cmake 3.20.1 as it is now required by rmm ([#8586](https://github.com/rapidsai/cudf/pull/8586)) [@robertmaynard](https://github.com/robertmaynard)
- Remove device debug symbols from cmake CUDF_CUDA_FLAGS ([#8584](https://github.com/rapidsai/cudf/pull/8584)) [@davidwendt](https://github.com/davidwendt)
- Dask-CuDF: use default Dask Dataframe optimizer ([#8581](https://github.com/rapidsai/cudf/pull/8581)) [@madsbk](https://github.com/madsbk)
- Remove checking if an unsigned value is less than zero ([#8579](https://github.com/rapidsai/cudf/pull/8579)) [@robertmaynard](https://github.com/robertmaynard)
- Remove strings_count parameter from cudf::strings::detail::create_chars_child_column ([#8576](https://github.com/rapidsai/cudf/pull/8576)) [@davidwendt](https://github.com/davidwendt)
- Make `cudf.api.types` imports consistent ([#8571](https://github.com/rapidsai/cudf/pull/8571)) [@galipremsagar](https://github.com/galipremsagar)
- Modernize libcudf basic example CMakeFile; updates CI build tests ([#8568](https://github.com/rapidsai/cudf/pull/8568)) [@isVoid](https://github.com/isVoid)
- Rename concatenate_tests.cu to .cpp ([#8555](https://github.com/rapidsai/cudf/pull/8555)) [@davidwendt](https://github.com/davidwendt)
- enable window lead/lag test on struct ([#8548](https://github.com/rapidsai/cudf/pull/8548)) [@wbo4958](https://github.com/wbo4958)
- Add Java methods to split and write column views ([#8546](https://github.com/rapidsai/cudf/pull/8546)) [@razajafri](https://github.com/razajafri)
- Small cleanup ([#8534](https://github.com/rapidsai/cudf/pull/8534)) [@codereport](https://github.com/codereport)
- Unpin `dask` version in CI ([#8533](https://github.com/rapidsai/cudf/pull/8533)) [@galipremsagar](https://github.com/galipremsagar)
- Added optional flag for building Arrow with S3 filesystem support ([#8531](https://github.com/rapidsai/cudf/pull/8531)) [@jdye64](https://github.com/jdye64)
- Minor clean up of various internal column and frame utilities ([#8528](https://github.com/rapidsai/cudf/pull/8528)) [@vyasr](https://github.com/vyasr)
- Rename some copying_test source files .cu to .cpp ([#8527](https://github.com/rapidsai/cudf/pull/8527)) [@davidwendt](https://github.com/davidwendt)
- Correct the last warnings and issues when using newer cuda versions ([#8525](https://github.com/rapidsai/cudf/pull/8525)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in transform and unary ops ([#8521](https://github.com/rapidsai/cudf/pull/8521)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in string algorithms ([#8509](https://github.com/rapidsai/cudf/pull/8509)) [@robertmaynard](https://github.com/robertmaynard)
- Add in JNI APIs for scan, replace_nulls, group_by.scan, and group_by.replace_nulls ([#8503](https://github.com/rapidsai/cudf/pull/8503)) [@revans2](https://github.com/revans2)
- Fix `21.08` forward-merge conflicts ([#8502](https://github.com/rapidsai/cudf/pull/8502)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix Cython formatting command in Contributing.md. ([#8496](https://github.com/rapidsai/cudf/pull/8496)) [@marlenezw](https://github.com/marlenezw)
- Bug/correct unused parameters in reshape and text ([#8495](https://github.com/rapidsai/cudf/pull/8495)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in partitioning and stream compact ([#8494](https://github.com/rapidsai/cudf/pull/8494)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in labelling and list algorithms ([#8493](https://github.com/rapidsai/cudf/pull/8493)) [@robertmaynard](https://github.com/robertmaynard)
- Refactor index construction ([#8485](https://github.com/rapidsai/cudf/pull/8485)) [@vyasr](https://github.com/vyasr)
- Correct unused parameter warnings in replace algorithms ([#8483](https://github.com/rapidsai/cudf/pull/8483)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in reduction algorithms ([#8481](https://github.com/rapidsai/cudf/pull/8481)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in io algorithms ([#8480](https://github.com/rapidsai/cudf/pull/8480)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in interop algorithms ([#8479](https://github.com/rapidsai/cudf/pull/8479)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in filling algorithms ([#8468](https://github.com/rapidsai/cudf/pull/8468)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameter warnings in groupby ([#8467](https://github.com/rapidsai/cudf/pull/8467)) [@robertmaynard](https://github.com/robertmaynard)
- use libcu++ time_point as timestamp ([#8466](https://github.com/rapidsai/cudf/pull/8466)) [@karthikeyann](https://github.com/karthikeyann)
- Modify reprog_device::extract to return groups in a single pass ([#8460](https://github.com/rapidsai/cudf/pull/8460)) [@davidwendt](https://github.com/davidwendt)
- Update minimum Dask requirement to 2021.6.0 ([#8458](https://github.com/rapidsai/cudf/pull/8458)) [@pentschev](https://github.com/pentschev)
- Fix failures when performing binary operations on DataFrames with empty columns ([#8452](https://github.com/rapidsai/cudf/pull/8452)) [@ChrisJar](https://github.com/ChrisJar)
- Fix conflicts in `8447` ([#8448](https://github.com/rapidsai/cudf/pull/8448)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add serialization methods for `List` and `StructDtype` ([#8441](https://github.com/rapidsai/cudf/pull/8441)) [@charlesbluca](https://github.com/charlesbluca)
- Replace make_empty_strings_column with make_empty_column ([#8435](https://github.com/rapidsai/cudf/pull/8435)) [@davidwendt](https://github.com/davidwendt)
- JNI bindings for get_element ([#8433](https://github.com/rapidsai/cudf/pull/8433)) [@revans2](https://github.com/revans2)
- Update dask make_meta changes to be compatible with dask upstream ([#8426](https://github.com/rapidsai/cudf/pull/8426)) [@galipremsagar](https://github.com/galipremsagar)
- Unpin dask version on CI ([#8425](https://github.com/rapidsai/cudf/pull/8425)) [@galipremsagar](https://github.com/galipremsagar)
- Add benchmark for strings/fixed_point convert APIs ([#8417](https://github.com/rapidsai/cudf/pull/8417)) [@davidwendt](https://github.com/davidwendt)
- Adapt `cudf::scalar` classes to changes in `rmm::device_scalar` ([#8411](https://github.com/rapidsai/cudf/pull/8411)) [@harrism](https://github.com/harrism)
- Add benchmark for strings/integers convert APIs ([#8402](https://github.com/rapidsai/cudf/pull/8402)) [@davidwendt](https://github.com/davidwendt)
- Enable multi-file partitioning in dask_cudf.read_parquet ([#8393](https://github.com/rapidsai/cudf/pull/8393)) [@rjzamora](https://github.com/rjzamora)
- Correct unused parameter warnings in rolling algorithms ([#8390](https://github.com/rapidsai/cudf/pull/8390)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameters in column round and search ([#8389](https://github.com/rapidsai/cudf/pull/8389)) [@robertmaynard](https://github.com/robertmaynard)
- Add functionality to apply `Dtype` metadata to `ColumnBase` ([#8373](https://github.com/rapidsai/cudf/pull/8373)) [@charlesbluca](https://github.com/charlesbluca)
- Refactor setting stack size in regex code ([#8358](https://github.com/rapidsai/cudf/pull/8358)) [@davidwendt](https://github.com/davidwendt)
- Update Java bindings to 21.08-SNAPSHOT ([#8344](https://github.com/rapidsai/cudf/pull/8344)) [@pxLi](https://github.com/pxLi)
- Replace remaining uses of device_vector ([#8343](https://github.com/rapidsai/cudf/pull/8343)) [@harrism](https://github.com/harrism)
- Statically link libnvcomp into libcudfjni ([#8334](https://github.com/rapidsai/cudf/pull/8334)) [@jlowe](https://github.com/jlowe)
- Resolve auto merge conflicts for Branch 21.08 from branch 21.06 ([#8329](https://github.com/rapidsai/cudf/pull/8329)) [@galipremsagar](https://github.com/galipremsagar)
- Minor code refactor for sorted_order ([#8326](https://github.com/rapidsai/cudf/pull/8326)) [@wbo4958](https://github.com/wbo4958)
- Remove special Index class from the general index class hierarchy ([#8309](https://github.com/rapidsai/cudf/pull/8309)) [@vyasr](https://github.com/vyasr)
- Add first-class dtype utilities ([#8308](https://github.com/rapidsai/cudf/pull/8308)) [@vyasr](https://github.com/vyasr)
- Add option to link Java bindings with Arrow dynamically ([#8307](https://github.com/rapidsai/cudf/pull/8307)) [@jlowe](https://github.com/jlowe)
- Refactor ColumnMethods and its subclasses to remove `column` argument and require `parent` argument ([#8306](https://github.com/rapidsai/cudf/pull/8306)) [@shwina](https://github.com/shwina)
- Refactor `scatter` for list columns ([#8255](https://github.com/rapidsai/cudf/pull/8255)) [@isVoid](https://github.com/isVoid)
- Expose pack/unpack API to Python ([#8153](https://github.com/rapidsai/cudf/pull/8153)) [@charlesbluca](https://github.com/charlesbluca)
- Adding cudf.cut method ([#8002](https://github.com/rapidsai/cudf/pull/8002)) [@marlenezw](https://github.com/marlenezw)
- Optimize string gather performance for large strings ([#7980](https://github.com/rapidsai/cudf/pull/7980)) [@gaohao95](https://github.com/gaohao95)
- Add peak memory usage tracking to cuIO benchmarks ([#7770](https://github.com/rapidsai/cudf/pull/7770)) [@devavret](https://github.com/devavret)
- Updating Clang Version to 11.0.0 ([#6695](https://github.com/rapidsai/cudf/pull/6695)) [@codereport](https://github.com/codereport)
# cuDF 21.06.00 (9 Jun 2021)
## 🚨 Breaking Changes
- Add support for `make_meta_obj` dispatch in `dask-cudf` ([#8342](https://github.com/rapidsai/cudf/pull/8342)) [@galipremsagar](https://github.com/galipremsagar)
- Add separator-on-null parameter to strings concatenate APIs ([#8282](https://github.com/rapidsai/cudf/pull/8282)) [@davidwendt](https://github.com/davidwendt)
- Introduce a common parent class for NumericalColumn and DecimalColumn ([#8278](https://github.com/rapidsai/cudf/pull/8278)) [@vyasr](https://github.com/vyasr)
- Update ORC statistics API to use C++17 standard library ([#8241](https://github.com/rapidsai/cudf/pull/8241)) [@vuule](https://github.com/vuule)
- Preserve column hierarchy when getting NULL row from `LIST` column ([#8206](https://github.com/rapidsai/cudf/pull/8206)) [@isVoid](https://github.com/isVoid)
- `Groupby.shift` c++ API refactor and python binding ([#8131](https://github.com/rapidsai/cudf/pull/8131)) [@isVoid](https://github.com/isVoid)
## 🐛 Bug Fixes
- Fix struct flattening to add a validity column only when the input column has null element ([#8374](https://github.com/rapidsai/cudf/pull/8374)) [@ttnghia](https://github.com/ttnghia)
- Compilation fix: Remove redefinition for `std::is_same_v()` ([#8369](https://github.com/rapidsai/cudf/pull/8369)) [@mythrocks](https://github.com/mythrocks)
- Add backward compatibility for `dask-cudf` to work with other versions of `dask` ([#8368](https://github.com/rapidsai/cudf/pull/8368)) [@galipremsagar](https://github.com/galipremsagar)
- Handle empty results with nested types in copy_if_else ([#8359](https://github.com/rapidsai/cudf/pull/8359)) [@nvdbaranec](https://github.com/nvdbaranec)
- Handle nested column types properly for empty parquet files. ([#8350](https://github.com/rapidsai/cudf/pull/8350)) [@nvdbaranec](https://github.com/nvdbaranec)
- Raise error when unsupported arguments are passed to `dask_cudf.DataFrame.sort_values` ([#8349](https://github.com/rapidsai/cudf/pull/8349)) [@galipremsagar](https://github.com/galipremsagar)
- Raise `NotImplementedError` for axis=1 in `rank` ([#8347](https://github.com/rapidsai/cudf/pull/8347)) [@galipremsagar](https://github.com/galipremsagar)
- Add support for `make_meta_obj` dispatch in `dask-cudf` ([#8342](https://github.com/rapidsai/cudf/pull/8342)) [@galipremsagar](https://github.com/galipremsagar)
- Update Java string concatenate test for single column ([#8330](https://github.com/rapidsai/cudf/pull/8330)) [@tgravescs](https://github.com/tgravescs)
- Use empty_like in scatter ([#8314](https://github.com/rapidsai/cudf/pull/8314)) [@revans2](https://github.com/revans2)
- Fix concatenate_lists_ignore_null on rows of all_nulls ([#8312](https://github.com/rapidsai/cudf/pull/8312)) [@sperlingxx](https://github.com/sperlingxx)
- Add separator-on-null parameter to strings concatenate APIs ([#8282](https://github.com/rapidsai/cudf/pull/8282)) [@davidwendt](https://github.com/davidwendt)
- COLLECT_LIST support returning empty output columns. ([#8279](https://github.com/rapidsai/cudf/pull/8279)) [@mythrocks](https://github.com/mythrocks)
- Update io util to convert path like object to string ([#8275](https://github.com/rapidsai/cudf/pull/8275)) [@ayushdg](https://github.com/ayushdg)
- Fix result column types for empty inputs to rolling window ([#8274](https://github.com/rapidsai/cudf/pull/8274)) [@mythrocks](https://github.com/mythrocks)
- Actually test equality in assert_groupby_results_equal ([#8272](https://github.com/rapidsai/cudf/pull/8272)) [@shwina](https://github.com/shwina)
- CMake always explicitly specify a source files extension ([#8270](https://github.com/rapidsai/cudf/pull/8270)) [@robertmaynard](https://github.com/robertmaynard)
- Fix struct binary search and struct flattening ([#8268](https://github.com/rapidsai/cudf/pull/8268)) [@ttnghia](https://github.com/ttnghia)
- Revert "patch thrust to fix intmax num elements limitation in scan_by_key" ([#8263](https://github.com/rapidsai/cudf/pull/8263)) [@cwharris](https://github.com/cwharris)
- upgrade dlpack to 0.5 ([#8262](https://github.com/rapidsai/cudf/pull/8262)) [@cwharris](https://github.com/cwharris)
- Fixes CSV-reader type inference for thousands separator and decimal point ([#8261](https://github.com/rapidsai/cudf/pull/8261)) [@elstehle](https://github.com/elstehle)
- Fix incorrect assertion in Java concat ([#8258](https://github.com/rapidsai/cudf/pull/8258)) [@sperlingxx](https://github.com/sperlingxx)
- Copy nested types upon construction ([#8244](https://github.com/rapidsai/cudf/pull/8244)) [@isVoid](https://github.com/isVoid)
- Preserve column hierarchy when getting NULL row from `LIST` column ([#8206](https://github.com/rapidsai/cudf/pull/8206)) [@isVoid](https://github.com/isVoid)
- Clip decimal binary op precision at max precision ([#8194](https://github.com/rapidsai/cudf/pull/8194)) [@ChrisJar](https://github.com/ChrisJar)
## 📖 Documentation
- Add docstring for `dask_cudf.read_csv` ([#8355](https://github.com/rapidsai/cudf/pull/8355)) [@galipremsagar](https://github.com/galipremsagar)
- Fix cudf release version in readme ([#8331](https://github.com/rapidsai/cudf/pull/8331)) [@galipremsagar](https://github.com/galipremsagar)
- Fix structs column description in dev docs ([#8318](https://github.com/rapidsai/cudf/pull/8318)) [@isVoid](https://github.com/isVoid)
- Update readme with correct CUDA versions ([#8315](https://github.com/rapidsai/cudf/pull/8315)) [@raydouglass](https://github.com/raydouglass)
- Add description of the cuIO GDS integration ([#8293](https://github.com/rapidsai/cudf/pull/8293)) [@vuule](https://github.com/vuule)
- Remove unused parameter from copy_partition kernel documentation ([#8283](https://github.com/rapidsai/cudf/pull/8283)) [@robertmaynard](https://github.com/robertmaynard)
## 🚀 New Features
- Add support merging b/w categorical data ([#8332](https://github.com/rapidsai/cudf/pull/8332)) [@galipremsagar](https://github.com/galipremsagar)
- Java: Support struct scalar ([#8327](https://github.com/rapidsai/cudf/pull/8327)) [@sperlingxx](https://github.com/sperlingxx)
- added _is_homogeneous property ([#8299](https://github.com/rapidsai/cudf/pull/8299)) [@shaneding](https://github.com/shaneding)
- Added decimal writing for CSV writer ([#8296](https://github.com/rapidsai/cudf/pull/8296)) [@kaatish](https://github.com/kaatish)
- Java: Support creating a scalar from utf8 string ([#8294](https://github.com/rapidsai/cudf/pull/8294)) [@firestarman](https://github.com/firestarman)
- Add Java API for Concatenate strings with separator ([#8289](https://github.com/rapidsai/cudf/pull/8289)) [@tgravescs](https://github.com/tgravescs)
- `strings::join_list_elements` options for empty list inputs ([#8285](https://github.com/rapidsai/cudf/pull/8285)) [@ttnghia](https://github.com/ttnghia)
- Return python lists for `__getitem__` calls to list type series ([#8265](https://github.com/rapidsai/cudf/pull/8265)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- add unit tests for lead/lag on list for row window ([#8259](https://github.com/rapidsai/cudf/pull/8259)) [@wbo4958](https://github.com/wbo4958)
- Create a String column from UTF8 String byte arrays ([#8257](https://github.com/rapidsai/cudf/pull/8257)) [@firestarman](https://github.com/firestarman)
- Support scattering `list_scalar` ([#8256](https://github.com/rapidsai/cudf/pull/8256)) [@isVoid](https://github.com/isVoid)
- Implement `lists::concatenate_list_elements` ([#8231](https://github.com/rapidsai/cudf/pull/8231)) [@ttnghia](https://github.com/ttnghia)
- Support for struct scalars. ([#8220](https://github.com/rapidsai/cudf/pull/8220)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add support for decimal types in ORC writer ([#8198](https://github.com/rapidsai/cudf/pull/8198)) [@vuule](https://github.com/vuule)
- Support create lists column from a `list_scalar` ([#8185](https://github.com/rapidsai/cudf/pull/8185)) [@isVoid](https://github.com/isVoid)
- `Groupby.shift` c++ API refactor and python binding ([#8131](https://github.com/rapidsai/cudf/pull/8131)) [@isVoid](https://github.com/isVoid)
- Add `groupby::replace_nulls(replace_policy)` api ([#7118](https://github.com/rapidsai/cudf/pull/7118)) [@isVoid](https://github.com/isVoid)
## 🛠️ Improvements
- Support Dask + Distributed 2021.05.1 ([#8392](https://github.com/rapidsai/cudf/pull/8392)) [@jakirkham](https://github.com/jakirkham)
- Add aliases for string methods ([#8353](https://github.com/rapidsai/cudf/pull/8353)) [@shwina](https://github.com/shwina)
- Update environment variable used to determine `cuda_version` ([#8321](https://github.com/rapidsai/cudf/pull/8321)) [@ajschmidt8](https://github.com/ajschmidt8)
- JNI: Refactor the code of making column from scalar ([#8310](https://github.com/rapidsai/cudf/pull/8310)) [@firestarman](https://github.com/firestarman)
- Update `CHANGELOG.md` links for calver ([#8303](https://github.com/rapidsai/cudf/pull/8303)) [@ajschmidt8](https://github.com/ajschmidt8)
- Merge `branch-0.19` into `branch-21.06` ([#8302](https://github.com/rapidsai/cudf/pull/8302)) [@ajschmidt8](https://github.com/ajschmidt8)
- use address and length for GDS reads/writes ([#8301](https://github.com/rapidsai/cudf/pull/8301)) [@rongou](https://github.com/rongou)
- Update cudfjni version to 21.06.0 ([#8292](https://github.com/rapidsai/cudf/pull/8292)) [@pxLi](https://github.com/pxLi)
- Update docs build script ([#8284](https://github.com/rapidsai/cudf/pull/8284)) [@ajschmidt8](https://github.com/ajschmidt8)
- Make device_buffer streams explicit and enforce move construction ([#8280](https://github.com/rapidsai/cudf/pull/8280)) [@harrism](https://github.com/harrism)
- Introduce a common parent class for NumericalColumn and DecimalColumn ([#8278](https://github.com/rapidsai/cudf/pull/8278)) [@vyasr](https://github.com/vyasr)
- Do not add nulls to the hash table when null_equality::NOT_EQUAL is passed to left_semi_join and left_anti_join ([#8277](https://github.com/rapidsai/cudf/pull/8277)) [@nvdbaranec](https://github.com/nvdbaranec)
- Enable implicit casting when concatenating mixed types ([#8276](https://github.com/rapidsai/cudf/pull/8276)) [@ChrisJar](https://github.com/ChrisJar)
- Fix CMake FindPackage rmm, pin dev envs' dlpack to v0.3 ([#8271](https://github.com/rapidsai/cudf/pull/8271)) [@trxcllnt](https://github.com/trxcllnt)
- Update cudfjni version to 21.06 ([#8267](https://github.com/rapidsai/cudf/pull/8267)) [@pxLi](https://github.com/pxLi)
- support RMM aligned resource adapter in JNI ([#8266](https://github.com/rapidsai/cudf/pull/8266)) [@rongou](https://github.com/rongou)
- Pass compiler environment variables to conda python build ([#8260](https://github.com/rapidsai/cudf/pull/8260)) [@Ethyling](https://github.com/Ethyling)
- Remove abc inheritance from Serializable ([#8254](https://github.com/rapidsai/cudf/pull/8254)) [@vyasr](https://github.com/vyasr)
- Move more methods into SingleColumnFrame ([#8253](https://github.com/rapidsai/cudf/pull/8253)) [@vyasr](https://github.com/vyasr)
- Update ORC statistics API to use C++17 standard library ([#8241](https://github.com/rapidsai/cudf/pull/8241)) [@vuule](https://github.com/vuule)
- Correct unused parameter warnings in dictionary algorithms ([#8239](https://github.com/rapidsai/cudf/pull/8239)) [@robertmaynard](https://github.com/robertmaynard)
- Correct unused parameters in the copying algorithms ([#8232](https://github.com/rapidsai/cudf/pull/8232)) [@robertmaynard](https://github.com/robertmaynard)
- IO statistics cleanup ([#8191](https://github.com/rapidsai/cudf/pull/8191)) [@kaatish](https://github.com/kaatish)
- Refactor of rolling_window implementation. ([#8158](https://github.com/rapidsai/cudf/pull/8158)) [@nvdbaranec](https://github.com/nvdbaranec)
- Add a flag for allowing single quotes in JSON strings. ([#8144](https://github.com/rapidsai/cudf/pull/8144)) [@nvdbaranec](https://github.com/nvdbaranec)
- Column refactoring 2 ([#8130](https://github.com/rapidsai/cudf/pull/8130)) [@vyasr](https://github.com/vyasr)
- support space in workspace ([#7956](https://github.com/rapidsai/cudf/pull/7956)) [@jolorunyomi](https://github.com/jolorunyomi)
- Support collect_set on rolling window ([#7881](https://github.com/rapidsai/cudf/pull/7881)) [@sperlingxx](https://github.com/sperlingxx)
# cuDF 0.19.0 (21 Apr 2021)
## 🚨 Breaking Changes
- Allow hash_partition to take a seed value ([#7771](https://github.com/rapidsai/cudf/pull/7771)) [@magnatelee](https://github.com/magnatelee)
- Allow merging index column with data column using keyword "on" ([#7736](https://github.com/rapidsai/cudf/pull/7736)) [@skirui-source](https://github.com/skirui-source)
- Change JNI API to avoid loading native dependencies when creating sort order classes. ([#7729](https://github.com/rapidsai/cudf/pull/7729)) [@revans2](https://github.com/revans2)
- Replace device_vector with device_uvector in null_mask ([#7715](https://github.com/rapidsai/cudf/pull/7715)) [@harrism](https://github.com/harrism)
- Don't identify decimals as strings. ([#7710](https://github.com/rapidsai/cudf/pull/7710)) [@vyasr](https://github.com/vyasr)
- Fix Java Parquet write after writer API changes ([#7655](https://github.com/rapidsai/cudf/pull/7655)) [@revans2](https://github.com/revans2)
- Convert cudf::concatenate APIs to use spans and device_uvector ([#7621](https://github.com/rapidsai/cudf/pull/7621)) [@harrism](https://github.com/harrism)
- Update missing docstring examples in python public APIs ([#7546](https://github.com/rapidsai/cudf/pull/7546)) [@galipremsagar](https://github.com/galipremsagar)
- Remove unneeded step parameter from strings::detail::copy_slice ([#7525](https://github.com/rapidsai/cudf/pull/7525)) [@davidwendt](https://github.com/davidwendt)
- Rename ARROW_STATIC_LIB because it conflicts with one in FindArrow.cmake ([#7518](https://github.com/rapidsai/cudf/pull/7518)) [@trxcllnt](https://github.com/trxcllnt)
- Match Pandas logic for comparing two objects with nulls ([#7490](https://github.com/rapidsai/cudf/pull/7490)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add struct support to parquet writer ([#7461](https://github.com/rapidsai/cudf/pull/7461)) [@devavret](https://github.com/devavret)
- Join APIs that return gathermaps ([#7454](https://github.com/rapidsai/cudf/pull/7454)) [@shwina](https://github.com/shwina)
- `fixed_point` + `cudf::binary_operation` API Changes ([#7435](https://github.com/rapidsai/cudf/pull/7435)) [@codereport](https://github.com/codereport)
- Fix BUG: Exception when PYTHONOPTIMIZE=2 ([#7434](https://github.com/rapidsai/cudf/pull/7434)) [@skirui-source](https://github.com/skirui-source)
- Change nvtext::load_vocabulary_file to return a unique ptr ([#7424](https://github.com/rapidsai/cudf/pull/7424)) [@davidwendt](https://github.com/davidwendt)
- Refactor strings column factories ([#7397](https://github.com/rapidsai/cudf/pull/7397)) [@harrism](https://github.com/harrism)
- Use CMAKE_CUDA_ARCHITECTURES ([#7391](https://github.com/rapidsai/cudf/pull/7391)) [@robertmaynard](https://github.com/robertmaynard)
- Upgrade pandas to 1.2 ([#7375](https://github.com/rapidsai/cudf/pull/7375)) [@galipremsagar](https://github.com/galipremsagar)
- Rename `logical_cast` to `bit_cast` and allow additional conversions ([#7373](https://github.com/rapidsai/cudf/pull/7373)) [@ttnghia](https://github.com/ttnghia)
- Rework libcudf CMakeLists.txt to export targets for CPM ([#7107](https://github.com/rapidsai/cudf/pull/7107)) [@trxcllnt](https://github.com/trxcllnt)
## 🐛 Bug Fixes
- Fix a `NameError` in meta dispatch API ([#7996](https://github.com/rapidsai/cudf/pull/7996)) [@galipremsagar](https://github.com/galipremsagar)
- Reindex in `DataFrame.__setitem__` ([#7957](https://github.com/rapidsai/cudf/pull/7957)) [@galipremsagar](https://github.com/galipremsagar)
- jitify direct-to-cubin compilation and caching. ([#7919](https://github.com/rapidsai/cudf/pull/7919)) [@cwharris](https://github.com/cwharris)
- Use dynamic cudart for nvcomp in java build ([#7896](https://github.com/rapidsai/cudf/pull/7896)) [@abellina](https://github.com/abellina)
- fix "incompatible redefinition" warnings ([#7894](https://github.com/rapidsai/cudf/pull/7894)) [@cwharris](https://github.com/cwharris)
- cudf consistently specifies the cuda runtime ([#7887](https://github.com/rapidsai/cudf/pull/7887)) [@robertmaynard](https://github.com/robertmaynard)
- disable verbose output for jitify_preprocess ([#7886](https://github.com/rapidsai/cudf/pull/7886)) [@cwharris](https://github.com/cwharris)
- CMake jit_preprocess_files function only runs when needed ([#7872](https://github.com/rapidsai/cudf/pull/7872)) [@robertmaynard](https://github.com/robertmaynard)
- Push DeviceScalar construction into cython for list.contains ([#7864](https://github.com/rapidsai/cudf/pull/7864)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- cudf now sets an install rpath of $ORIGIN ([#7863](https://github.com/rapidsai/cudf/pull/7863)) [@robertmaynard](https://github.com/robertmaynard)
- Don't install Thrust examples, tests, docs, and python files ([#7811](https://github.com/rapidsai/cudf/pull/7811)) [@robertmaynard](https://github.com/robertmaynard)
- Sort by index in groupby tests more consistently ([#7802](https://github.com/rapidsai/cudf/pull/7802)) [@shwina](https://github.com/shwina)
- Revert "Update conda recipes pinning of repo dependencies ([#7743)" (#7793](https://github.com/rapidsai/cudf/pull/7743)" (#7793)) [@raydouglass](https://github.com/raydouglass)
- Add decimal column handling in copy_type_metadata ([#7788](https://github.com/rapidsai/cudf/pull/7788)) [@shwina](https://github.com/shwina)
- Add column names validation in parquet writer ([#7786](https://github.com/rapidsai/cudf/pull/7786)) [@galipremsagar](https://github.com/galipremsagar)
- Fix Java explode outer unit tests ([#7782](https://github.com/rapidsai/cudf/pull/7782)) [@jlowe](https://github.com/jlowe)
- Fix compiler warning about non-POD types passed through ellipsis ([#7781](https://github.com/rapidsai/cudf/pull/7781)) [@jrhemstad](https://github.com/jrhemstad)
- User resource fix for replace_nulls ([#7769](https://github.com/rapidsai/cudf/pull/7769)) [@magnatelee](https://github.com/magnatelee)
- Fix type dispatch for columnar replace_nulls ([#7768](https://github.com/rapidsai/cudf/pull/7768)) [@jlowe](https://github.com/jlowe)
- Add `ignore_order` parameter to dask-cudf concat dispatch ([#7765](https://github.com/rapidsai/cudf/pull/7765)) [@galipremsagar](https://github.com/galipremsagar)
- Fix slicing and arrow representations of decimal columns ([#7755](https://github.com/rapidsai/cudf/pull/7755)) [@vyasr](https://github.com/vyasr)
- Fixing issue with explode_outer position not nulling position entries of null rows ([#7754](https://github.com/rapidsai/cudf/pull/7754)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Implement scatter for struct columns ([#7752](https://github.com/rapidsai/cudf/pull/7752)) [@ttnghia](https://github.com/ttnghia)
- Fix data corruption in string columns ([#7746](https://github.com/rapidsai/cudf/pull/7746)) [@galipremsagar](https://github.com/galipremsagar)
- Fix string length in stripe dictionary building ([#7744](https://github.com/rapidsai/cudf/pull/7744)) [@kaatish](https://github.com/kaatish)
- Update conda recipes pinning of repo dependencies ([#7743](https://github.com/rapidsai/cudf/pull/7743)) [@mike-wendt](https://github.com/mike-wendt)
- Enable dask dispatch to cuDF's `is_categorical_dtype` for cuDF objects ([#7740](https://github.com/rapidsai/cudf/pull/7740)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix dictionary size computation in ORC writer ([#7737](https://github.com/rapidsai/cudf/pull/7737)) [@vuule](https://github.com/vuule)
- Fix `cudf::cast` overflow for `decimal64` to `int32_t` or smaller in certain cases ([#7733](https://github.com/rapidsai/cudf/pull/7733)) [@codereport](https://github.com/codereport)
- Change JNI API to avoid loading native dependencies when creating sort order classes. ([#7729](https://github.com/rapidsai/cudf/pull/7729)) [@revans2](https://github.com/revans2)
- Disable column_view data accessors for unsupported types ([#7725](https://github.com/rapidsai/cudf/pull/7725)) [@jrhemstad](https://github.com/jrhemstad)
- Materialize `RangeIndex` when `index=True` in parquet writer ([#7711](https://github.com/rapidsai/cudf/pull/7711)) [@galipremsagar](https://github.com/galipremsagar)
- Don't identify decimals as strings. ([#7710](https://github.com/rapidsai/cudf/pull/7710)) [@vyasr](https://github.com/vyasr)
- Fix return type of `DataFrame.argsort` ([#7706](https://github.com/rapidsai/cudf/pull/7706)) [@galipremsagar](https://github.com/galipremsagar)
- Fix/correct cudf installed package requirements ([#7688](https://github.com/rapidsai/cudf/pull/7688)) [@robertmaynard](https://github.com/robertmaynard)
- Fix SparkMurmurHash3_32 hash inconsistencies with Apache Spark ([#7672](https://github.com/rapidsai/cudf/pull/7672)) [@jlowe](https://github.com/jlowe)
- Fix ORC reader issue with reading empty string columns ([#7656](https://github.com/rapidsai/cudf/pull/7656)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Fix Java Parquet write after writer API changes ([#7655](https://github.com/rapidsai/cudf/pull/7655)) [@revans2](https://github.com/revans2)
- Fixing empty null lists throwing explode_outer for a loop. ([#7649](https://github.com/rapidsai/cudf/pull/7649)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix internal compiler error during JNI Docker build ([#7645](https://github.com/rapidsai/cudf/pull/7645)) [@jlowe](https://github.com/jlowe)
- Fix Debug build break with device_uvectors in grouped_rolling.cu ([#7633](https://github.com/rapidsai/cudf/pull/7633)) [@mythrocks](https://github.com/mythrocks)
- Parquet reader: Fix issue when using skip_rows on non-nested columns containing nulls ([#7627](https://github.com/rapidsai/cudf/pull/7627)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix ORC reader for empty DataFrame/Table ([#7624](https://github.com/rapidsai/cudf/pull/7624)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Fix specifying GPU architecture in JNI build ([#7612](https://github.com/rapidsai/cudf/pull/7612)) [@jlowe](https://github.com/jlowe)
- Fix ORC writer OOM issue ([#7605](https://github.com/rapidsai/cudf/pull/7605)) [@vuule](https://github.com/vuule)
- Fix 0.18 --> 0.19 automerge ([#7589](https://github.com/rapidsai/cudf/pull/7589)) [@kkraus14](https://github.com/kkraus14)
- Fix ORC issue with incorrect timestamp nanosecond values ([#7581](https://github.com/rapidsai/cudf/pull/7581)) [@vuule](https://github.com/vuule)
- Fix missing Dask imports ([#7580](https://github.com/rapidsai/cudf/pull/7580)) [@kkraus14](https://github.com/kkraus14)
- CMAKE_CUDA_ARCHITECTURES doesn't change when build-system invokes cmake ([#7579](https://github.com/rapidsai/cudf/pull/7579)) [@robertmaynard](https://github.com/robertmaynard)
- Another fix for offsets_end() iterator in lists_column_view ([#7575](https://github.com/rapidsai/cudf/pull/7575)) [@ttnghia](https://github.com/ttnghia)
- Fix ORC writer output corruption with string columns ([#7565](https://github.com/rapidsai/cudf/pull/7565)) [@vuule](https://github.com/vuule)
- Fix cudf::lists::sort_lists failing for sliced column ([#7564](https://github.com/rapidsai/cudf/pull/7564)) [@ttnghia](https://github.com/ttnghia)
- FIX Fix Anaconda upload args ([#7558](https://github.com/rapidsai/cudf/pull/7558)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Fix index mismatch issue in equality related APIs ([#7555](https://github.com/rapidsai/cudf/pull/7555)) [@galipremsagar](https://github.com/galipremsagar)
- FIX Revert gpuci_conda_retry on conda file output locations ([#7552](https://github.com/rapidsai/cudf/pull/7552)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Fix offset_end iterator for lists_column_view, which was not correctl… ([#7551](https://github.com/rapidsai/cudf/pull/7551)) [@ttnghia](https://github.com/ttnghia)
- Fix no such file dlpack.h error when build libcudf ([#7549](https://github.com/rapidsai/cudf/pull/7549)) [@chenrui17](https://github.com/chenrui17)
- Update missing docstring examples in python public APIs ([#7546](https://github.com/rapidsai/cudf/pull/7546)) [@galipremsagar](https://github.com/galipremsagar)
- Decimal32 Build Fix ([#7544](https://github.com/rapidsai/cudf/pull/7544)) [@razajafri](https://github.com/razajafri)
- FIX Retry conda output location ([#7540](https://github.com/rapidsai/cudf/pull/7540)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- fix missing renames of dask git branches from master to main ([#7535](https://github.com/rapidsai/cudf/pull/7535)) [@kkraus14](https://github.com/kkraus14)
- Remove detail from device_span ([#7533](https://github.com/rapidsai/cudf/pull/7533)) [@rwlee](https://github.com/rwlee)
- Change dask and distributed branch to main ([#7532](https://github.com/rapidsai/cudf/pull/7532)) [@dantegd](https://github.com/dantegd)
- Update JNI build to use CUDF_USE_ARROW_STATIC ([#7526](https://github.com/rapidsai/cudf/pull/7526)) [@jlowe](https://github.com/jlowe)
- Make sure rmm::rmm CMake target is visible to cudf users ([#7524](https://github.com/rapidsai/cudf/pull/7524)) [@robertmaynard](https://github.com/robertmaynard)
- Fix contiguous_split not properly handling output partitions > 2 GB. ([#7515](https://github.com/rapidsai/cudf/pull/7515)) [@nvdbaranec](https://github.com/nvdbaranec)
- Change jit launch to safe_launch ([#7510](https://github.com/rapidsai/cudf/pull/7510)) [@devavret](https://github.com/devavret)
- Fix comparison between Datetime/Timedelta columns and NULL scalars ([#7504](https://github.com/rapidsai/cudf/pull/7504)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix off-by-one error in char-parallel string scalar replace ([#7502](https://github.com/rapidsai/cudf/pull/7502)) [@jlowe](https://github.com/jlowe)
- Fix JNI deprecation of all, put it on the wrong version before ([#7501](https://github.com/rapidsai/cudf/pull/7501)) [@revans2](https://github.com/revans2)
- Fix Series/Dataframe Mixed Arithmetic ([#7491](https://github.com/rapidsai/cudf/pull/7491)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Fix JNI build after removal of libcudf sub-libraries ([#7486](https://github.com/rapidsai/cudf/pull/7486)) [@jlowe](https://github.com/jlowe)
- Correctly compile benchmarks ([#7485](https://github.com/rapidsai/cudf/pull/7485)) [@robertmaynard](https://github.com/robertmaynard)
- Fix bool column corruption with ORC Reader ([#7483](https://github.com/rapidsai/cudf/pull/7483)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Fix `__repr__` for categorical dtype ([#7476](https://github.com/rapidsai/cudf/pull/7476)) [@galipremsagar](https://github.com/galipremsagar)
- Java cleaner synchronization ([#7474](https://github.com/rapidsai/cudf/pull/7474)) [@abellina](https://github.com/abellina)
- Fix java float/double parsing tests ([#7473](https://github.com/rapidsai/cudf/pull/7473)) [@revans2](https://github.com/revans2)
- Pass stream and user resource to make_default_constructed_scalar ([#7469](https://github.com/rapidsai/cudf/pull/7469)) [@magnatelee](https://github.com/magnatelee)
- Improve stability of dask_cudf.DataFrame.var and dask_cudf.DataFrame.std ([#7453](https://github.com/rapidsai/cudf/pull/7453)) [@rjzamora](https://github.com/rjzamora)
- Missing `device_storage_dispatch` change affecting `cudf::gather` ([#7449](https://github.com/rapidsai/cudf/pull/7449)) [@codereport](https://github.com/codereport)
- fix cuFile JNI compile errors ([#7445](https://github.com/rapidsai/cudf/pull/7445)) [@rongou](https://github.com/rongou)
- Support `Series.__setitem__` with key to a new row ([#7443](https://github.com/rapidsai/cudf/pull/7443)) [@isVoid](https://github.com/isVoid)
- Fix BUG: Exception when PYTHONOPTIMIZE=2 ([#7434](https://github.com/rapidsai/cudf/pull/7434)) [@skirui-source](https://github.com/skirui-source)
- Make inclusive scan safe for cases with leading nulls ([#7432](https://github.com/rapidsai/cudf/pull/7432)) [@magnatelee](https://github.com/magnatelee)
- Fix typo in list_device_view::pair_rep_end() ([#7423](https://github.com/rapidsai/cudf/pull/7423)) [@mythrocks](https://github.com/mythrocks)
- Fix string to double conversion and row equivalent comparison ([#7410](https://github.com/rapidsai/cudf/pull/7410)) [@ttnghia](https://github.com/ttnghia)
- Fix thrust failure when transferring data from device_vector to host_vector with vectors of size 1 ([#7382](https://github.com/rapidsai/cudf/pull/7382)) [@ttnghia](https://github.com/ttnghia)
- Fix std::exception catch-by-reference gcc9 compile error ([#7380](https://github.com/rapidsai/cudf/pull/7380)) [@davidwendt](https://github.com/davidwendt)
- Fix skiprows issue with ORC Reader ([#7359](https://github.com/rapidsai/cudf/pull/7359)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- fix Arrow CMake file ([#7358](https://github.com/rapidsai/cudf/pull/7358)) [@rongou](https://github.com/rongou)
- Fix lists::contains() for NaN and Decimals ([#7349](https://github.com/rapidsai/cudf/pull/7349)) [@mythrocks](https://github.com/mythrocks)
- Handle cupy array in `Dataframe.__setitem__` ([#7340](https://github.com/rapidsai/cudf/pull/7340)) [@galipremsagar](https://github.com/galipremsagar)
- Fix invalid-device-fn error in cudf::strings::replace_re with multiple regex's ([#7336](https://github.com/rapidsai/cudf/pull/7336)) [@davidwendt](https://github.com/davidwendt)
- FIX Add codecov upload block to gpu script ([#6860](https://github.com/rapidsai/cudf/pull/6860)) [@dillon-cullinan](https://github.com/dillon-cullinan)
## 📖 Documentation
- Fix join API doxygen ([#7890](https://github.com/rapidsai/cudf/pull/7890)) [@shwina](https://github.com/shwina)
- Add Resources to README. ([#7697](https://github.com/rapidsai/cudf/pull/7697)) [@bdice](https://github.com/bdice)
- Add `isin` examples in Docstring ([#7479](https://github.com/rapidsai/cudf/pull/7479)) [@galipremsagar](https://github.com/galipremsagar)
- Resolving unlinked type shorthands in cudf doc ([#7416](https://github.com/rapidsai/cudf/pull/7416)) [@isVoid](https://github.com/isVoid)
- Fix typo in regex.md doc page ([#7363](https://github.com/rapidsai/cudf/pull/7363)) [@davidwendt](https://github.com/davidwendt)
- Fix incorrect strings_column_view::chars_size documentation ([#7360](https://github.com/rapidsai/cudf/pull/7360)) [@jlowe](https://github.com/jlowe)
## π New Features
- Enable basic reductions for decimal columns ([#7776](https://github.com/rapidsai/cudf/pull/7776)) [@ChrisJar](https://github.com/ChrisJar)
- Enable join on decimal columns ([#7764](https://github.com/rapidsai/cudf/pull/7764)) [@ChrisJar](https://github.com/ChrisJar)
- Allow merging index column with data column using keyword "on" ([#7736](https://github.com/rapidsai/cudf/pull/7736)) [@skirui-source](https://github.com/skirui-source)
- Implement DecimalColumn + Scalar and add cudf.Scalars of Decimal64Dtype ([#7732](https://github.com/rapidsai/cudf/pull/7732)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add support for `unique` groupby aggregation ([#7726](https://github.com/rapidsai/cudf/pull/7726)) [@shwina](https://github.com/shwina)
- Expose libcudf's label_bins function to cudf ([#7724](https://github.com/rapidsai/cudf/pull/7724)) [@vyasr](https://github.com/vyasr)
- Adding support for equi-join on struct ([#7720](https://github.com/rapidsai/cudf/pull/7720)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add decimal column comparison operations ([#7716](https://github.com/rapidsai/cudf/pull/7716)) [@isVoid](https://github.com/isVoid)
- Implement scan operations for decimal columns ([#7707](https://github.com/rapidsai/cudf/pull/7707)) [@ChrisJar](https://github.com/ChrisJar)
- Enable typecasting between decimal and int ([#7691](https://github.com/rapidsai/cudf/pull/7691)) [@ChrisJar](https://github.com/ChrisJar)
- Enable decimal support in parquet writer ([#7673](https://github.com/rapidsai/cudf/pull/7673)) [@devavret](https://github.com/devavret)
- Adds `list.unique` API ([#7664](https://github.com/rapidsai/cudf/pull/7664)) [@isVoid](https://github.com/isVoid)
- Fix NaN handling in drop_list_duplicates ([#7662](https://github.com/rapidsai/cudf/pull/7662)) [@ttnghia](https://github.com/ttnghia)
- Add `lists.sort_values` API ([#7657](https://github.com/rapidsai/cudf/pull/7657)) [@isVoid](https://github.com/isVoid)
- Add is_integer API that can check for the validity of a string-to-integer conversion ([#7642](https://github.com/rapidsai/cudf/pull/7642)) [@ttnghia](https://github.com/ttnghia)
- Adds `explode` API ([#7607](https://github.com/rapidsai/cudf/pull/7607)) [@isVoid](https://github.com/isVoid)
- Adds `list.take`, python binding for `cudf::lists::segmented_gather` ([#7591](https://github.com/rapidsai/cudf/pull/7591)) [@isVoid](https://github.com/isVoid)
- Implement cudf::label_bins() ([#7554](https://github.com/rapidsai/cudf/pull/7554)) [@vyasr](https://github.com/vyasr)
- Add Python bindings for `lists::contains` ([#7547](https://github.com/rapidsai/cudf/pull/7547)) [@skirui-source](https://github.com/skirui-source)
- cudf::row_bit_count() support. ([#7534](https://github.com/rapidsai/cudf/pull/7534)) [@nvdbaranec](https://github.com/nvdbaranec)
- Implement drop_list_duplicates ([#7528](https://github.com/rapidsai/cudf/pull/7528)) [@ttnghia](https://github.com/ttnghia)
- Add Python bindings for `lists::extract_lists_element` ([#7505](https://github.com/rapidsai/cudf/pull/7505)) [@skirui-source](https://github.com/skirui-source)
- Add explode_outer and explode_outer_position ([#7499](https://github.com/rapidsai/cudf/pull/7499)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Match Pandas logic for comparing two objects with nulls ([#7490](https://github.com/rapidsai/cudf/pull/7490)) [@brandon-b-miller](https://github.com/brandon-b-miller)
- Add struct support to parquet writer ([#7461](https://github.com/rapidsai/cudf/pull/7461)) [@devavret](https://github.com/devavret)
- Enable type conversion from float to decimal type ([#7450](https://github.com/rapidsai/cudf/pull/7450)) [@ChrisJar](https://github.com/ChrisJar)
- Add cython for converting strings/fixed-point functions ([#7429](https://github.com/rapidsai/cudf/pull/7429)) [@davidwendt](https://github.com/davidwendt)
- Add struct column support to cudf::sort and cudf::sorted_order ([#7422](https://github.com/rapidsai/cudf/pull/7422)) [@karthikeyann](https://github.com/karthikeyann)
- Implement groupby collect_set ([#7420](https://github.com/rapidsai/cudf/pull/7420)) [@ttnghia](https://github.com/ttnghia)
- Merge branch-0.18 into branch-0.19 ([#7411](https://github.com/rapidsai/cudf/pull/7411)) [@raydouglass](https://github.com/raydouglass)
- Refactor strings column factories ([#7397](https://github.com/rapidsai/cudf/pull/7397)) [@harrism](https://github.com/harrism)
- Add groupby scan operations (sort groupby) ([#7387](https://github.com/rapidsai/cudf/pull/7387)) [@karthikeyann](https://github.com/karthikeyann)
- Add cudf::explode_position ([#7376](https://github.com/rapidsai/cudf/pull/7376)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add string conversion to/from decimal values libcudf APIs ([#7364](https://github.com/rapidsai/cudf/pull/7364)) [@davidwendt](https://github.com/davidwendt)
- Add groupby SUM_OF_SQUARES support ([#7362](https://github.com/rapidsai/cudf/pull/7362)) [@karthikeyann](https://github.com/karthikeyann)
- Add `Series.drop` api ([#7304](https://github.com/rapidsai/cudf/pull/7304)) [@isVoid](https://github.com/isVoid)
- get_json_object() implementation ([#7286](https://github.com/rapidsai/cudf/pull/7286)) [@nvdbaranec](https://github.com/nvdbaranec)
- Python API for `ListMethods.len()` ([#7283](https://github.com/rapidsai/cudf/pull/7283)) [@isVoid](https://github.com/isVoid)
- Support null_policy::EXCLUDE for COLLECT rolling aggregation ([#7264](https://github.com/rapidsai/cudf/pull/7264)) [@mythrocks](https://github.com/mythrocks)
- Add support for special tokens in nvtext::subword_tokenizer ([#7254](https://github.com/rapidsai/cudf/pull/7254)) [@davidwendt](https://github.com/davidwendt)
- Fix inplace update of data and add Series.update ([#7201](https://github.com/rapidsai/cudf/pull/7201)) [@galipremsagar](https://github.com/galipremsagar)
- Implement `cudf::group_by` (hash) for `decimal32` and `decimal64` ([#7190](https://github.com/rapidsai/cudf/pull/7190)) [@codereport](https://github.com/codereport)
- Adding support to specify "level" parameter for `Dataframe.rename` ([#7135](https://github.com/rapidsai/cudf/pull/7135)) [@skirui-source](https://github.com/skirui-source)
## π οΈ Improvements
- fix GDS include path for version 0.95 ([#7877](https://github.com/rapidsai/cudf/pull/7877)) [@rongou](https://github.com/rongou)
- Update `dask` + `distributed` to `2021.4.0` ([#7858](https://github.com/rapidsai/cudf/pull/7858)) [@jakirkham](https://github.com/jakirkham)
- Add ability to extract include dirs from `CUDF_HOME` ([#7848](https://github.com/rapidsai/cudf/pull/7848)) [@galipremsagar](https://github.com/galipremsagar)
- Add USE_GDS as an option in build script ([#7833](https://github.com/rapidsai/cudf/pull/7833)) [@pxLi](https://github.com/pxLi)
- add an allocate method with stream in java DeviceMemoryBuffer ([#7826](https://github.com/rapidsai/cudf/pull/7826)) [@rongou](https://github.com/rongou)
- Constrain dask and distributed versions to 2021.3.1 ([#7825](https://github.com/rapidsai/cudf/pull/7825)) [@shwina](https://github.com/shwina)
- Revert dask versioning of concat dispatch ([#7823](https://github.com/rapidsai/cudf/pull/7823)) [@galipremsagar](https://github.com/galipremsagar)
- add copy methods in Java memory buffer ([#7791](https://github.com/rapidsai/cudf/pull/7791)) [@rongou](https://github.com/rongou)
- Update README and CONTRIBUTING for 0.19 ([#7778](https://github.com/rapidsai/cudf/pull/7778)) [@robertmaynard](https://github.com/robertmaynard)
- Allow hash_partition to take a seed value ([#7771](https://github.com/rapidsai/cudf/pull/7771)) [@magnatelee](https://github.com/magnatelee)
- Turn on NVTX by default in java build ([#7761](https://github.com/rapidsai/cudf/pull/7761)) [@tgravescs](https://github.com/tgravescs)
- Add Java bindings to join gather map APIs ([#7751](https://github.com/rapidsai/cudf/pull/7751)) [@jlowe](https://github.com/jlowe)
- Add replacements column support for Java replaceNulls ([#7750](https://github.com/rapidsai/cudf/pull/7750)) [@jlowe](https://github.com/jlowe)
- Add Java bindings for row_bit_count ([#7749](https://github.com/rapidsai/cudf/pull/7749)) [@jlowe](https://github.com/jlowe)
- Remove unused JVM array creation ([#7748](https://github.com/rapidsai/cudf/pull/7748)) [@jlowe](https://github.com/jlowe)
- Added JNI support for new is_integer ([#7739](https://github.com/rapidsai/cudf/pull/7739)) [@revans2](https://github.com/revans2)
- Create and promote library aliases in libcudf installations ([#7734](https://github.com/rapidsai/cudf/pull/7734)) [@trxcllnt](https://github.com/trxcllnt)
- Support groupby operations for decimal dtypes ([#7731](https://github.com/rapidsai/cudf/pull/7731)) [@vyasr](https://github.com/vyasr)
- Memory map the input file only when GDS compatibility mode is not used ([#7717](https://github.com/rapidsai/cudf/pull/7717)) [@vuule](https://github.com/vuule)
- Replace device_vector with device_uvector in null_mask ([#7715](https://github.com/rapidsai/cudf/pull/7715)) [@harrism](https://github.com/harrism)
- Struct hashing support for SerialMurmur3 and SparkMurmur3 ([#7714](https://github.com/rapidsai/cudf/pull/7714)) [@jlowe](https://github.com/jlowe)
- Add gbenchmark for nvtext replace-tokens function ([#7708](https://github.com/rapidsai/cudf/pull/7708)) [@davidwendt](https://github.com/davidwendt)
- Use stream in groupby calls ([#7705](https://github.com/rapidsai/cudf/pull/7705)) [@karthikeyann](https://github.com/karthikeyann)
- Update codeowners file ([#7701](https://github.com/rapidsai/cudf/pull/7701)) [@ajschmidt8](https://github.com/ajschmidt8)
- Cleanup groupby to use host_span, device_span, device_uvector ([#7698](https://github.com/rapidsai/cudf/pull/7698)) [@karthikeyann](https://github.com/karthikeyann)
- Add gbenchmark for nvtext ngrams functions ([#7693](https://github.com/rapidsai/cudf/pull/7693)) [@davidwendt](https://github.com/davidwendt)
- Misc Python/Cython optimizations ([#7686](https://github.com/rapidsai/cudf/pull/7686)) [@shwina](https://github.com/shwina)
- Add gbenchmark for nvtext tokenize functions ([#7684](https://github.com/rapidsai/cudf/pull/7684)) [@davidwendt](https://github.com/davidwendt)
- Add column_device_view to orc writer ([#7676](https://github.com/rapidsai/cudf/pull/7676)) [@kaatish](https://github.com/kaatish)
- cudf_kafka now uses cuDF CMake export targets (CPM) ([#7674](https://github.com/rapidsai/cudf/pull/7674)) [@robertmaynard](https://github.com/robertmaynard)
- Add gbenchmark for nvtext normalize functions ([#7668](https://github.com/rapidsai/cudf/pull/7668)) [@davidwendt](https://github.com/davidwendt)
- Resolve unnecessary import of thrust/optional.hpp in types.hpp ([#7667](https://github.com/rapidsai/cudf/pull/7667)) [@vyasr](https://github.com/vyasr)
- Feature/optimize accessor copy ([#7660](https://github.com/rapidsai/cudf/pull/7660)) [@vyasr](https://github.com/vyasr)
- Fix `find_package(cudf)` ([#7658](https://github.com/rapidsai/cudf/pull/7658)) [@trxcllnt](https://github.com/trxcllnt)
- Work-around for gcc7 compile error on Centos7 ([#7652](https://github.com/rapidsai/cudf/pull/7652)) [@davidwendt](https://github.com/davidwendt)
- Add in JNI support for count_elements ([#7651](https://github.com/rapidsai/cudf/pull/7651)) [@revans2](https://github.com/revans2)
- Fix issues with building cudf in a non-conda environment ([#7647](https://github.com/rapidsai/cudf/pull/7647)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor ConfigureCUDA to not conditionally insert compiler flags ([#7643](https://github.com/rapidsai/cudf/pull/7643)) [@robertmaynard](https://github.com/robertmaynard)
- Add gbenchmark for converting strings to/from timestamps ([#7641](https://github.com/rapidsai/cudf/pull/7641)) [@davidwendt](https://github.com/davidwendt)
- Handle constructing a `cudf.Scalar` from a `cudf.Scalar` ([#7639](https://github.com/rapidsai/cudf/pull/7639)) [@shwina](https://github.com/shwina)
- Add in JNI support for table partition ([#7637](https://github.com/rapidsai/cudf/pull/7637)) [@revans2](https://github.com/revans2)
- Add explicit fixed_point merge test ([#7635](https://github.com/rapidsai/cudf/pull/7635)) [@codereport](https://github.com/codereport)
- Add JNI support for IDENTITY hash partitioning ([#7626](https://github.com/rapidsai/cudf/pull/7626)) [@revans2](https://github.com/revans2)
- Java support on explode_outer ([#7625](https://github.com/rapidsai/cudf/pull/7625)) [@sperlingxx](https://github.com/sperlingxx)
- Java support of casting string from/to decimal ([#7623](https://github.com/rapidsai/cudf/pull/7623)) [@sperlingxx](https://github.com/sperlingxx)
- Convert cudf::concatenate APIs to use spans and device_uvector ([#7621](https://github.com/rapidsai/cudf/pull/7621)) [@harrism](https://github.com/harrism)
- Add gbenchmark for cudf::strings::translate function ([#7617](https://github.com/rapidsai/cudf/pull/7617)) [@davidwendt](https://github.com/davidwendt)
- Use file(COPY ) over file(INSTALL ) so cmake output is reduced ([#7616](https://github.com/rapidsai/cudf/pull/7616)) [@robertmaynard](https://github.com/robertmaynard)
- Use rmm::device_uvector in place of rmm::device_vector for ORC reader/writer and cudf::io::column_buffer ([#7614](https://github.com/rapidsai/cudf/pull/7614)) [@vuule](https://github.com/vuule)
- Refactor Java host-side buffer concatenation to expose separate steps ([#7610](https://github.com/rapidsai/cudf/pull/7610)) [@jlowe](https://github.com/jlowe)
- Add gbenchmarks for string substrings functions ([#7603](https://github.com/rapidsai/cudf/pull/7603)) [@davidwendt](https://github.com/davidwendt)
- Refactor string conversion check ([#7599](https://github.com/rapidsai/cudf/pull/7599)) [@ttnghia](https://github.com/ttnghia)
- JNI: Pass names of children struct columns to native Arrow IPC writer ([#7598](https://github.com/rapidsai/cudf/pull/7598)) [@firestarman](https://github.com/firestarman)
- Revert "ENH Fix stale GHA and prevent duplicates " ([#7595](https://github.com/rapidsai/cudf/pull/7595)) [@mike-wendt](https://github.com/mike-wendt)
- ENH Fix stale GHA and prevent duplicates ([#7594](https://github.com/rapidsai/cudf/pull/7594)) [@mike-wendt](https://github.com/mike-wendt)
- Fix auto-detecting GPU architectures ([#7593](https://github.com/rapidsai/cudf/pull/7593)) [@trxcllnt](https://github.com/trxcllnt)
- Reduce cudf library size ([#7583](https://github.com/rapidsai/cudf/pull/7583)) [@robertmaynard](https://github.com/robertmaynard)
- Optimize cudf::make_strings_column for long strings ([#7576](https://github.com/rapidsai/cudf/pull/7576)) [@davidwendt](https://github.com/davidwendt)
- Always build and export the cudf::cudftestutil target ([#7574](https://github.com/rapidsai/cudf/pull/7574)) [@trxcllnt](https://github.com/trxcllnt)
- Eliminate literal parameters to uvector::set_element_async and device_scalar::set_value ([#7563](https://github.com/rapidsai/cudf/pull/7563)) [@harrism](https://github.com/harrism)
- Add gbenchmark for strings::concatenate ([#7560](https://github.com/rapidsai/cudf/pull/7560)) [@davidwendt](https://github.com/davidwendt)
- Update Changelog Link ([#7550](https://github.com/rapidsai/cudf/pull/7550)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add gbenchmarks for strings replace regex functions ([#7541](https://github.com/rapidsai/cudf/pull/7541)) [@davidwendt](https://github.com/davidwendt)
- Add `__repr__` for Column and ColumnAccessor ([#7531](https://github.com/rapidsai/cudf/pull/7531)) [@shwina](https://github.com/shwina)
- Support Decimal DIV changes in cudf ([#7527](https://github.com/rapidsai/cudf/pull/7527)) [@razajafri](https://github.com/razajafri)
- Remove unneeded step parameter from strings::detail::copy_slice ([#7525](https://github.com/rapidsai/cudf/pull/7525)) [@davidwendt](https://github.com/davidwendt)
- Use device_uvector, device_span in sort groupby ([#7523](https://github.com/rapidsai/cudf/pull/7523)) [@karthikeyann](https://github.com/karthikeyann)
- Add gbenchmarks for strings extract function ([#7522](https://github.com/rapidsai/cudf/pull/7522)) [@davidwendt](https://github.com/davidwendt)
- Rename ARROW_STATIC_LIB because it conflicts with one in FindArrow.cmake ([#7518](https://github.com/rapidsai/cudf/pull/7518)) [@trxcllnt](https://github.com/trxcllnt)
- Reduce compile time/size for scan.cu ([#7516](https://github.com/rapidsai/cudf/pull/7516)) [@davidwendt](https://github.com/davidwendt)
- Change device_vector to device_uvector in nvtext source files ([#7512](https://github.com/rapidsai/cudf/pull/7512)) [@davidwendt](https://github.com/davidwendt)
- Removed unneeded includes from traits.hpp ([#7509](https://github.com/rapidsai/cudf/pull/7509)) [@davidwendt](https://github.com/davidwendt)
- FIX Remove random build directory generation for ccache ([#7508](https://github.com/rapidsai/cudf/pull/7508)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- xfail failing pytest in pandas 1.2.3 ([#7507](https://github.com/rapidsai/cudf/pull/7507)) [@galipremsagar](https://github.com/galipremsagar)
- JNI bit cast ([#7493](https://github.com/rapidsai/cudf/pull/7493)) [@revans2](https://github.com/revans2)
- Combine rolling window function tests ([#7480](https://github.com/rapidsai/cudf/pull/7480)) [@mythrocks](https://github.com/mythrocks)
- Prepare Changelog for Automation ([#7477](https://github.com/rapidsai/cudf/pull/7477)) [@ajschmidt8](https://github.com/ajschmidt8)
- Java support for explode position ([#7471](https://github.com/rapidsai/cudf/pull/7471)) [@sperlingxx](https://github.com/sperlingxx)
- Update 0.18 changelog entry ([#7463](https://github.com/rapidsai/cudf/pull/7463)) [@ajschmidt8](https://github.com/ajschmidt8)
- JNI: Support skipping nulls for collect aggregation ([#7457](https://github.com/rapidsai/cudf/pull/7457)) [@firestarman](https://github.com/firestarman)
- Join APIs that return gathermaps ([#7454](https://github.com/rapidsai/cudf/pull/7454)) [@shwina](https://github.com/shwina)
- Remove dependence on managed memory for multimap test ([#7451](https://github.com/rapidsai/cudf/pull/7451)) [@jrhemstad](https://github.com/jrhemstad)
- Use cuFile for Parquet IO when available ([#7444](https://github.com/rapidsai/cudf/pull/7444)) [@vuule](https://github.com/vuule)
- Statistics cleanup ([#7439](https://github.com/rapidsai/cudf/pull/7439)) [@kaatish](https://github.com/kaatish)
- Add gbenchmarks for strings filter functions ([#7438](https://github.com/rapidsai/cudf/pull/7438)) [@davidwendt](https://github.com/davidwendt)
- `fixed_point` + `cudf::binary_operation` API Changes ([#7435](https://github.com/rapidsai/cudf/pull/7435)) [@codereport](https://github.com/codereport)
- Improve string gather performance ([#7433](https://github.com/rapidsai/cudf/pull/7433)) [@jlowe](https://github.com/jlowe)
- Don't use user resource for a temporary allocation in sort_by_key ([#7431](https://github.com/rapidsai/cudf/pull/7431)) [@magnatelee](https://github.com/magnatelee)
- Detail APIs for datetime functions ([#7430](https://github.com/rapidsai/cudf/pull/7430)) [@magnatelee](https://github.com/magnatelee)
- Replace thrust::max_element with thrust::reduce in strings findall_re ([#7428](https://github.com/rapidsai/cudf/pull/7428)) [@davidwendt](https://github.com/davidwendt)
- Add gbenchmark for strings split/split_record functions ([#7427](https://github.com/rapidsai/cudf/pull/7427)) [@davidwendt](https://github.com/davidwendt)
- Update JNI build to use CMAKE_CUDA_ARCHITECTURES ([#7425](https://github.com/rapidsai/cudf/pull/7425)) [@jlowe](https://github.com/jlowe)
- Change nvtext::load_vocabulary_file to return a unique ptr ([#7424](https://github.com/rapidsai/cudf/pull/7424)) [@davidwendt](https://github.com/davidwendt)
- Simplify type dispatch with `device_storage_dispatch` ([#7419](https://github.com/rapidsai/cudf/pull/7419)) [@codereport](https://github.com/codereport)
- Java support for casting of nested child columns ([#7417](https://github.com/rapidsai/cudf/pull/7417)) [@razajafri](https://github.com/razajafri)
- Improve scalar string replace performance for long strings ([#7415](https://github.com/rapidsai/cudf/pull/7415)) [@jlowe](https://github.com/jlowe)
- Remove unneeded temporary device vector for strings scatter specialization ([#7409](https://github.com/rapidsai/cudf/pull/7409)) [@davidwendt](https://github.com/davidwendt)
- bitmask_or implementation with bitmask refactor ([#7406](https://github.com/rapidsai/cudf/pull/7406)) [@rwlee](https://github.com/rwlee)
- Add other cudf::strings::replace functions to current strings replace gbenchmark ([#7403](https://github.com/rapidsai/cudf/pull/7403)) [@davidwendt](https://github.com/davidwendt)
- Clean up included headers in `device_operators.cuh` ([#7401](https://github.com/rapidsai/cudf/pull/7401)) [@codereport](https://github.com/codereport)
- Move nullable index iterator to indexalator factory ([#7399](https://github.com/rapidsai/cudf/pull/7399)) [@davidwendt](https://github.com/davidwendt)
- ENH Pass ccache variables to conda recipe & use Ninja in CI ([#7398](https://github.com/rapidsai/cudf/pull/7398)) [@Ethyling](https://github.com/Ethyling)
- upgrade maven-antrun-plugin to support maven parallel builds ([#7393](https://github.com/rapidsai/cudf/pull/7393)) [@rongou](https://github.com/rongou)
- Add gbenchmark for strings find/contains functions ([#7392](https://github.com/rapidsai/cudf/pull/7392)) [@davidwendt](https://github.com/davidwendt)
- Use CMAKE_CUDA_ARCHITECTURES ([#7391](https://github.com/rapidsai/cudf/pull/7391)) [@robertmaynard](https://github.com/robertmaynard)
- Refactor libcudf strings::replace to use make_strings_children utility ([#7384](https://github.com/rapidsai/cudf/pull/7384)) [@davidwendt](https://github.com/davidwendt)
- Added in JNI support for out of core sort algorithm ([#7381](https://github.com/rapidsai/cudf/pull/7381)) [@revans2](https://github.com/revans2)
- Upgrade pandas to 1.2 ([#7375](https://github.com/rapidsai/cudf/pull/7375)) [@galipremsagar](https://github.com/galipremsagar)
- Rename `logical_cast` to `bit_cast` and allow additional conversions ([#7373](https://github.com/rapidsai/cudf/pull/7373)) [@ttnghia](https://github.com/ttnghia)
- jitify 2 support ([#7372](https://github.com/rapidsai/cudf/pull/7372)) [@cwharris](https://github.com/cwharris)
- compile_udf: Cache PTX for similar functions ([#7371](https://github.com/rapidsai/cudf/pull/7371)) [@gmarkall](https://github.com/gmarkall)
- Add string scalar replace benchmark ([#7369](https://github.com/rapidsai/cudf/pull/7369)) [@jlowe](https://github.com/jlowe)
- Add gbenchmark for strings contains_re/count_re functions ([#7366](https://github.com/rapidsai/cudf/pull/7366)) [@davidwendt](https://github.com/davidwendt)
- Update orc reader and writer fuzz tests ([#7357](https://github.com/rapidsai/cudf/pull/7357)) [@galipremsagar](https://github.com/galipremsagar)
- Improve url_decode performance for long strings ([#7353](https://github.com/rapidsai/cudf/pull/7353)) [@jlowe](https://github.com/jlowe)
- `cudf::ast` Small Refactorings ([#7352](https://github.com/rapidsai/cudf/pull/7352)) [@codereport](https://github.com/codereport)
- Remove std::cout and print in the scatter test function EmptyListsOfNullableStrings. ([#7342](https://github.com/rapidsai/cudf/pull/7342)) [@ttnghia](https://github.com/ttnghia)
- Use `cudf::detail::make_counting_transform_iterator` ([#7338](https://github.com/rapidsai/cudf/pull/7338)) [@codereport](https://github.com/codereport)
- Change block size parameter from a global to a template param. ([#7333](https://github.com/rapidsai/cudf/pull/7333)) [@nvdbaranec](https://github.com/nvdbaranec)
- Partial clean up of ORC writer ([#7324](https://github.com/rapidsai/cudf/pull/7324)) [@vuule](https://github.com/vuule)
- Add gbenchmark for cudf::strings::to_lower ([#7316](https://github.com/rapidsai/cudf/pull/7316)) [@davidwendt](https://github.com/davidwendt)
- Update Java bindings version to 0.19-SNAPSHOT ([#7307](https://github.com/rapidsai/cudf/pull/7307)) [@pxLi](https://github.com/pxLi)
- Move `cudf::test::make_counting_transform_iterator` to `cudf/detail/iterator.cuh` ([#7306](https://github.com/rapidsai/cudf/pull/7306)) [@codereport](https://github.com/codereport)
- Use string literals in `fixed_point` `release_assert`s ([#7303](https://github.com/rapidsai/cudf/pull/7303)) [@codereport](https://github.com/codereport)
- Fix merge conflicts for #7295 ([#7297](https://github.com/rapidsai/cudf/pull/7297)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add UTF-8 chars to create_random_column<string_view> benchmark utility ([#7292](https://github.com/rapidsai/cudf/pull/7292)) [@davidwendt](https://github.com/davidwendt)
- Abstracting block reduce and block scan from cuIO kernels with `cub` apis ([#7278](https://github.com/rapidsai/cudf/pull/7278)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Build.sh use cmake --build to drive build system invocation ([#7270](https://github.com/rapidsai/cudf/pull/7270)) [@robertmaynard](https://github.com/robertmaynard)
- Refactor dictionary support for reductions any/all ([#7242](https://github.com/rapidsai/cudf/pull/7242)) [@davidwendt](https://github.com/davidwendt)
- Replace stream.value() with stream for stream_view args ([#7236](https://github.com/rapidsai/cudf/pull/7236)) [@karthikeyann](https://github.com/karthikeyann)
- Interval index and interval_range ([#7182](https://github.com/rapidsai/cudf/pull/7182)) [@marlenezw](https://github.com/marlenezw)
- avro reader integration tests ([#7156](https://github.com/rapidsai/cudf/pull/7156)) [@cwharris](https://github.com/cwharris)
- Rework libcudf CMakeLists.txt to export targets for CPM ([#7107](https://github.com/rapidsai/cudf/pull/7107)) [@trxcllnt](https://github.com/trxcllnt)
- Adding Interval Dtype ([#6984](https://github.com/rapidsai/cudf/pull/6984)) [@marlenezw](https://github.com/marlenezw)
- Cleaning up `for` loops with `make_(counting_)transform_iterator` ([#6546](https://github.com/rapidsai/cudf/pull/6546)) [@codereport](https://github.com/codereport)
# cuDF 0.18.0 (24 Feb 2021)
## Breaking Changes π¨
- Default `groupby` to `sort=False` (#7180) @isVoid
- Add libcudf API for parsing of ORC statistics (#7136) @vuule
- Replace ORC writer api with class (#7099) @rgsl888prabhu
- Pack/unpack functionality to convert tables to and from a serialized format. (#7096) @nvdbaranec
- Replace parquet writer api with class (#7058) @rgsl888prabhu
- Add days check to cudf::is_timestamp using cuda::std::chrono classes (#7028) @davidwendt
- Fix default parameter values of `write_csv` and `write_parquet` (#6967) @vuule
- Align `Series.groupby` API to match Pandas (#6964) @kkraus14
- Share `factorize` implementation with Index and cudf module (#6885) @brandon-b-miller
## Bug Fixes π
- Remove incorrect std::move call on return variable (#7319) @davidwendt
- Fix failing CI ORC test (#7313) @vuule
- Disallow constructing frames from a ColumnAccessor (#7298) @shwina
- fix java cuFile tests (#7296) @rongou
- Fix style issues related to NumPy (#7279) @shwina
- Fix bug when `iloc` slice terminates at before-the-zero position (#7277) @isVoid
- Fix copying dtype metadata after calling libcudf functions (#7271) @shwina
- Move lists utility function definition out of header (#7266) @mythrocks
- Throw if bool column would cause incorrect result when writing to ORC (#7261) @vuule
- Use `uvector` in `replace_nulls`; Fix `sort_helper::grouped_value` doc (#7256) @isVoid
- Remove floating point types from cudf::sort fast-path (#7250) @davidwendt
- Disallow picking output columns from nested columns. (#7248) @devavret
- Fix `loc` for Series with a MultiIndex (#7243) @shwina
- Fix Arrow column test leaks (#7241) @tgravescs
- Fix test column vector leak (#7238) @kuhushukla
- Fix some bugs in java scalar support for decimal (#7237) @revans2
- Improve `assert_eq` handling of scalar (#7220) @isVoid
- Fix missing null_count() comparison in test framework and related failures (#7219) @nvdbaranec
- Remove floating point types from radix sort fast-path (#7215) @davidwendt
- Fixing parquet benchmarks (#7214) @rgsl888prabhu
- Handle various parameter combinations in `replace` API (#7207) @galipremsagar
- Export mock aws credentials for s3 tests (#7176) @ayushdg
- Add `MultiIndex.rename` API (#7172) @isVoid
- Fix importing list & struct types in `from_arrow` (#7162) @galipremsagar
- Fixing parquet precision writing failing if scale is equal to precision (#7146) @hyperbolic2346
- Update s3 tests to use moto_server (#7144) @ayushdg
- Fix JIT cache multi-process test flakiness in slow drives (#7142) @devavret
- Fix compilation errors in libcudf (#7138) @galipremsagar
- Fix compilation failure caused by `-Wall` addition. (#7134) @codereport
- Add informative error message for `sep` in CSV writer (#7095) @galipremsagar
- Add JIT cache per compute capability (#7090) @devavret
- Implement `__hash__` method for ListDtype (#7081) @galipremsagar
- Only upload packages that were built (#7077) @raydouglass
- Fix comparisons between Series and cudf.NA (#7072) @brandon-b-miller
- Handle `nan` values correctly in `Series.one_hot_encoding` (#7059) @galipremsagar
- Add `unstack()` support for non-multiindexed dataframes (#7054) @isVoid
- Fix `read_orc` for decimal type (#7034) @rgsl888prabhu
- Fix backward compatibility of loading a 0.16 pkl file (#7033) @galipremsagar
- Decimal casts in JNI became a NOOP (#7032) @revans2
- Restore usual instance/subclass checking to cudf.DateOffset (#7029) @shwina
- Add days check to cudf::is_timestamp using cuda::std::chrono classes (#7028) @davidwendt
- Fix to_csv delimiter handling of timestamp format (#7023) @davidwendt
- Pin librdkafka to gcc 7 compatible version (#7021) @raydouglass
- Fix `fillna` & `dropna` to also consider `np.nan` as a missing value (#7019) @galipremsagar
- Fix round operator's HALF_EVEN computation for negative integers (#7014) @nartal1
- Skip Thrust sort patch if already applied (#7009) @harrism
- Fix `cudf::hash_partition` for `decimal32` and `decimal64` (#7006) @codereport
- Fix Thrust unroll patch command (#7002) @harrism
- Fix loc behaviour when key of incorrect type is used (#6993) @shwina
- Fix int to datetime conversion in csv_read (#6991) @kaatish
- fix excluding cufile tests by default (#6988) @rongou
- Fix java cufile tests when cufile is not installed (#6987) @revans2
- Make `cudf::round` for `fixed_point` when `scale = -decimal_places` a no-op (#6975) @codereport
- Fix type comparison for java (#6970) @revans2
- Fix default parameter values of `write_csv` and `write_parquet` (#6967) @vuule
- Align `Series.groupby` API to match Pandas (#6964) @kkraus14
- Fix timestamp parsing in ORC reader for timezones without transitions (#6959) @vuule
- Fix typo in numerical.py (#6957) @rgsl888prabhu
- `fixed_point_value` double-shifts in `fixed_point` construction (#6950) @codereport
- fix libcu++ include path for jni (#6948) @rongou
- Fix groupby agg/apply behaviour when no key columns are provided (#6945) @shwina
- Avoid inserting null elements into join hash table when nulls are treated as unequal (#6943) @hyperbolic2346
- Fix cudf::merge gtest for dictionary columns (#6942) @davidwendt
- Pass numeric scalars of the same dtype through numeric binops (#6938) @brandon-b-miller
- Fix N/A detection for empty fields in CSV reader (#6922) @vuule
- Fix rmm_mode=managed parameter for gtests (#6912) @davidwendt
- Fix nullmask offset handling in parquet and orc writer (#6889) @kaatish
- Correct the sampling range when sampling with replacement (#6884) @ChrisJar
- Handle nested string columns with no children in contiguous_split. (#6864) @nvdbaranec
- Fix `columns` & `index` handling in dataframe constructor (#6838) @galipremsagar
## Documentation π
- Update readme (#7318) @shwina
- Fix typo in cudf.core.column.string.extract docs (#7253) @adelevie
- Update doxyfile project number (#7161) @davidwendt
- Update 10 minutes to cuDF and CuPy with new APIs (#7158) @ChrisJar
- Cross link RMM & libcudf Doxygen docs (#7149) @ajschmidt8
- Add documentation for support dtypes in all IO formats (#7139) @galipremsagar
- Add groupby docs (#7100) @shwina
- Update cudf python docstrings with new null representation (`<NA>`) (#7050) @galipremsagar
- Make Doxygen comments formatting consistent (#7041) @vuule
- Add docs for working with missing data (#7010) @galipremsagar
- Remove warning in from_dlpack and to_dlpack methods (#7001) @miguelusque
- libcudf Developer Guide (#6977) @harrism
- Add JNI wrapper for the cuFile API (GDS) (#6940) @rongou
## New Features π
- Support `numeric_only` field for `rank()` (#7213) @isVoid
- Add support for `cudf::binary_operation` `TRUE_DIV` for `decimal32` and `decimal64` (#7198) @codereport
- Implement COLLECT rolling window aggregation (#7189) @mythrocks
- Add support for array-like inputs in `cudf.get_dummies` (#7181) @galipremsagar
- Default `groupby` to `sort=False` (#7180) @isVoid
- Add libcudf lists column count_elements API (#7173) @davidwendt
- Implement `cudf::group_by` (sort) for `decimal32` and `decimal64` (#7169) @codereport
- Add encoding and compression argument to CSV writer (#7168) @VibhuJawa
- `cudf::rolling_window` `SUM` support for `decimal32` and `decimal64` (#7147) @codereport
- Adding support for explode to cuDF (#7140) @hyperbolic2346
- Add libcudf API for parsing of ORC statistics (#7136) @vuule
- update GDS/cuFile location for 0.9 release (#7131) @rongou
- Add Segmented sort (#7122) @karthikeyann
- Add `cudf::binary_operation` `NULL_MIN`, `NULL_MAX` & `NULL_EQUALS` for `decimal32` and `decimal64` (#7119) @codereport
- Add `scale` and `value` methods to `fixed_point` (#7109) @codereport
- Replace ORC writer api with class (#7099) @rgsl888prabhu
- Pack/unpack functionality to convert tables to and from a serialized format. (#7096) @nvdbaranec
- Improve `digitize` API (#7071) @isVoid
- Add List types support in data generator (#7064) @galipremsagar
- `cudf::scan` support for `decimal32` and `decimal64` (#7063) @codereport
- `cudf::rolling` `ROW_NUMBER` support for `decimal32` and `decimal64` (#7061) @codereport
- Replace parquet writer api with class (#7058) @rgsl888prabhu
- Support contains() on lists of primitives (#7039) @mythrocks
- Implement `cudf::rolling` for `decimal32` and `decimal64` (#7037) @codereport
- Add `ffill` and `bfill` to string columns (#7036) @isVoid
- Enable round in cudf for DataFrame and Series (#7022) @ChrisJar
- Extend `replace_nulls_policy` to `string` and `dictionary` type (#7004) @isVoid
- Add segmented_gather(list_column, gather_list) (#7003) @karthikeyann
- Add `method` field to `fillna` for fixed width columns (#6998) @isVoid
- Manual merge of branch 0.17 into branch 0.18 (#6995) @shwina
- Implement `cudf::reduce` for `decimal32` and `decimal64` (part 2) (#6980) @codereport
- Add Ufunc alias look up for appropriate numpy ufunc dispatching (#6973) @VibhuJawa
- Add pytest-xdist to dev environment.yml (#6958) @galipremsagar
- Add `Index.set_names` api (#6929) @galipremsagar
- Add `replace_null` API with `replace_policy` parameter, `fixed_width` column support (#6907) @isVoid
- Share `factorize` implementation with Index and cudf module (#6885) @brandon-b-miller
- Implement update() function (#6883) @skirui-source
- Add groupby idxmin, idxmax aggregation (#6856) @karthikeyann
- Implement `cudf::reduce` for `decimal32` and `decimal64` (part 1) (#6814) @codereport
- Implement cudf.DateOffset for months (#6775) @brandon-b-miller
- Add Python DecimalColumn (#6715) @shwina
- Add dictionary support to libcudf groupby functions (#6585) @davidwendt
## Improvements π οΈ
- Update stale GHA with exemptions & new labels (#7395) @mike-wendt
- Add GHA to mark issues/prs as stale/rotten (#7388) @Ethyling
- Unpin from numpy < 1.20 (#7335) @shwina
- Prepare Changelog for Automation (#7309) @galipremsagar
- Prepare Changelog for Automation (#7272) @ajschmidt8
- Add JNI support for converting Arrow buffers to CUDF ColumnVectors (#7222) @tgravescs
- Add coverage for `skiprows` and `num_rows` in parquet reader fuzz testing (#7216) @galipremsagar
- Define and implement more behavior for merging on categorical variables (#7209) @brandon-b-miller
- Add CudfSeriesGroupBy to optimize dask_cudf groupby-mean (#7194) @rjzamora
- Add dictionary column support to rolling_window (#7186) @davidwendt
- Modify the semantics of `end` pointers in cuIO to match standard library (#7179) @vuule
- Adding unit tests for `fixed_point` with extremely large `scale`s (#7178) @codereport
- Fast path single column sort (#7167) @davidwendt
- Fix -Werror=sign-compare errors in device code (#7164) @trxcllnt
- Refactor cudf::string_view host and device code (#7159) @davidwendt
- Enable logic for GPU auto-detection in cudfjni (#7155) @gerashegalov
- Java bindings for Fixed-point type support for Parquet (#7153) @razajafri
- Add Java interface for the new API 'explode' (#7151) @firestarman
- Replace offsets with iterators in cuIO utilities and CSV parser (#7150) @vuule
- Add gbenchmarks for reduction aggregations any() and all() (#7129) @davidwendt
- Update JNI for contiguous_split packed results (#7127) @jlowe
- Add JNI and Java bindings for list_contains (#7125) @kuhushukla
- Add Java unit tests for window aggregate 'collect' (#7121) @firestarman
- verify window operations on decimal with java tests (#7120) @sperlingxx
- Adds in JNI support for creating an list column from existing columns (#7112) @revans2
- Build libcudf with -Wall (#7105) @trxcllnt
- Add column_device_view pointers to EncColumnDesc (#7097) @kaatish
- Add `pyorc` to dev environment (#7085) @galipremsagar
- JNI support for creating struct column from existing columns and fixed bug in struct with no children (#7084) @revans2
- Fastpath single strings column in cudf::sort (#7075) @davidwendt
- Upgrade nvcomp to 1.2.1 (#7069) @rongou
- Refactor ORC `ProtobufReader` to make it more extendable (#7055) @vuule
- Add Java tests for decimal casts (#7051) @sperlingxx
- Auto-label PRs based on their content (#7044) @jolorunyomi
- Create sort gbenchmark for strings column (#7040) @davidwendt
- Refactor io memory fetches to use hostdevice_vector methods (#7035) @ChrisJar
- Spark Murmur3 hash functionality (#7024) @rwlee
- Fix libcudf strings logic where size_type is used to access INT32 column data (#7020) @davidwendt
- Adding decimal writing support to parquet (#7017) @hyperbolic2346
- Add compression="infer" as default for dask_cudf.read_csv (#7013) @rjzamora
- Correct ORC docstring; other minor cuIO improvements (#7012) @vuule
- Reduce number of hostdevice_vector allocations in parquet reader (#7005) @devavret
- Check output size overflow on strings gather (#6997) @davidwendt
- Improve representation of `MultiIndex` (#6992) @galipremsagar
- Disable some pragma unroll statements in thrust sort.h (#6982) @davidwendt
- Minor `cudf::round` internal refactoring (#6976) @codereport
- Add Java bindings for URL conversion (#6972) @jlowe
- Enable strict_decimal_types in parquet reading (#6969) @sperlingxx
- Add in basic support to JNI for logical_cast (#6954) @revans2
- Remove duplicate file array_tests.cpp (#6953) @karthikeyann
- Add null mask `fixed_point_column_wrapper` constructors (#6951) @codereport
- Update Java bindings version to 0.18-SNAPSHOT (#6949) @jlowe
- Use simplified `rmm::exec_policy` (#6939) @harrism
- Add null count test for apply_boolean_mask (#6903) @harrism
- Implement DataFrame.quantile for datetime and timedelta data types (#6902) @ChrisJar
- Remove **kwargs from string/categorical methods (#6750) @shwina
- Refactor rolling.cu to reduce compile time (#6512) @mythrocks
- Add static type checking via Mypy (#6381) @shwina
- Update to official libcu++ on Github (#6275) @trxcllnt
# cuDF 0.17.0 (10 Dec 2020)
## New Features
- PR #6116 Add `filters` parameter to Python `read_orc` function for filtering
- PR #6848 Added Java bindings for writing parquet files with INT96 timestamps
- PR #6460 Add is_timestamp format check API
- PR #6647 Implement `cudf::round` floating point and integer types (`HALF_EVEN`)
- PR #6562 Implement `cudf::round` floating point and integer types (`HALF_UP`)
- PR #6685 Implement `cudf::round` `decimal32` & `decimal64` (`HALF_UP` and `HALF_EVEN`)
- PR #6711 Implement `cudf::cast` for `decimal32/64` to/from integer and floating point
- PR #6777 Implement `cudf::unary_operation` for `decimal32` & `decimal64`
- PR #6729 Implement `cudf::cast` for `decimal32/64` to/from different `type_id`
- PR #6792 Implement `cudf::clamp` for `decimal32` and `decimal64`
- PR #6845 Implement `cudf::copy_if_else` for `decimal32` and `decimal64`
- PR #6805 Implement `cudf::detail::copy_if` for `decimal32` and `decimal64`
- PR #6843 Implement `cudf::copy_range` for `decimal32` and `decimal64`
- PR #6528 Enable `fixed_point` binary operations
- PR #6568 Add function to create hashed vocabulary file from raw vocabulary
- PR #6142 Add Python `read_orc_statistics` function for reading file- and stripe-level statistics
- PR #6581 Add JNI API to check if PTDS is enabled
- PR #6615 Add support for list and struct types to contiguous_split
- PR #6625 Add INT96 timestamp writing option to parquet writer
- PR #6592 Add `cudf.to_numeric` function
- PR #6598 Add strings::contains API with target column parameter
- PR #6638 Add support for `pipe` API
- PR #6737 New build process (Project Flash)
- PR #6652 Add support for struct columns in concatenate
- PR #6675 Add DecimalDtype to cuDF
- PR #6739 Add Java bindings for is_timestamp
- PR #6808 Add support for reading decimal32 and decimal64 from parquet
- PR #6781 Add serial murmur3 hashing
- PR #6811 First class support for unbounded window function bounds
- PR #6768 Add support for scatter() on list columns
- PR #6796 Add create_metadata_file in dask_cudf
- PR #6765 Cupy fallback for `__array_function__` and `__array_ufunc__` for cudf.Series
- PR #6817 Add support for scatter() on lists-of-struct columns
- PR #6483 Add `agg` function to aggregate dataframe using one or more operations
- PR #6726 Support selecting different hash functions in hash_partition
- PR #6619 Improve Dockerfile
- PR #6831 Added parquet chunked writing ability for list columns
## Improvements
- PR #6430 Add struct type support to `to_arrow` and `from_arrow`
- PR #6384 Add CSV fuzz tests with varying function parameters
- PR #6385 Add JSON fuzz tests with varying function parameters
- PR #6398 Remove function constructor macros in parquet reader
- PR #6432 Add dictionary support to `cudf::upper_bound` and `cudf::lower_bound`
- PR #6461 Replace index type-dispatch call with indexalator in cudf::scatter
- PR #6415 Support `datetime64` in row-wise op
- PR #6457 Replace index type-dispatch call with indexalator in `cudf::gather`
- PR #6413 Replace Python NVTX package with conda-forge source
- PR #6442 Remove deprecated `DataFrame.from_gpu_matrix`, `DataFrame.to_gpu_matrix`, `DataFrame.add_column` APIs and method parameters
- PR #6502 Add dictionary support to `cudf::merge`
- PR #6471 Replace index type-dispatch call with indexalator in cudf::strings::substring
- PR #6485 Add File IO to cuIO benchmarks
- PR #6504 Update Java bindings version to 0.17-SNAPSHOT
- PR #6875 Remove bounds check for `cudf::gather`
- PR #6489 Add `AVRO` fuzz tests with varying function parameters
- PR #6540 Add dictionary support to `cudf::unary_operation`
- PR #6537 Refactor ORC timezone
- PR #6527 Refactor DeviceColumnViewAccess to avoid JNI returning an array
- PR #6690 Explicitly set legacy or per-thread default stream in JNI
- PR #6545 Pin cmake policies to cmake 3.17 version
- PR #6556 Add dictionary support to `cudf::inner_join`, `cudf::left_join` and `cudf::full_join`
- PR #6557 Support nullable timestamp columns in time range window functions
- PR #6566 Remove `reinterpret_cast` conversions between pointer types in ORC
- PR #6544 Remove `fixed_point` precise round
- PR #6552 Use `assert_exceptions_equal` to assert exceptions in pytests
- PR #6555 Adapt JNI build to libcudf composition of multiple libraries
- PR #6559 Refactoring cooperative loading with single thread loading.
- PR #6564 Load JNI library dependencies with a thread pool
- PR #6571 Add ORC fuzz tests with varying function parameters
- PR #6578 Add in java column to row conversion
- PR #6573 Create `cudf::detail::byte_cast` for `cudf::byte_cast`
- PR #6597 Use thread-local to track CUDA device in JNI
- PR #6599 Replace `size()==0` with `empty()`, `is_empty()`
- PR #6514 Initial work for decimal type in Java/JNI
- PR #6605 Reduce HtoD copies in `cudf::concatenate` of string columns
- PR #6608 Improve subword tokenizer docs
- PR #6610 Add ability to set scalar values in `cudf.DataFrame`
- PR #6612 Update JNI to new RMM cuda_stream_view API
- PR #6646 Replace `cudaStream_t` with `rmm::cuda_stream_view` (part 1)
- PR #6648 Replace `cudaStream_t` with `rmm::cuda_stream_view` (part 2)
- PR #6744 Replace `cudaStream_t` with `rmm::cuda_stream_view` (part 3)
- PR #6579 Update scatter APIs to use reference wrapper / const scalar
- PR #6614 Add support for conversion to Pandas nullable dtypes and fix related issue in `cudf.to_json`
- PR #6622 Update `to_pandas` api docs
- PR #6623 Add operator overloading to column and clean up error messages
- PR #6644 Cover different CSV reader/writer options in benchmarks
- PR #6741 Cover different ORC and Parquet reader/writer options in benchmarks
- PR #6651 Add cudf::dictionary::make_dictionary_pair_iterator
- PR #6666 Add dictionary support to `cudf::reduce`
- PR #6635 Add cudf::test::dictionary_column_wrapper class
- PR #6702 Fix orc read corruption on boolean column
- PR #6676 Add dictionary support to `cudf::quantile`
- PR #6673 Parameterize avro and json benchmark
- PR #6609 Support fixed-point decimal for HostColumnVector
- PR #6703 Add list column statistics writing to Parquet writer
- PR #6662 `RangeIndex` supports `step` parameter
- PR #6712 Remove `reinterpret_cast` conversions between pointer types in Avro
- PR #6705 Add nested type support to Java table serialization
- PR #6709 Raise informative error while converting a pandas dataframe with duplicate columns
- PR #6727 Remove 2nd type-dispatcher call from cudf::reduce
- PR #6749 Update nested JNI builder so we can do it incrementally
- PR #6748 Add Java API to concatenate serialized tables to ContiguousTable
- PR #6764 Add dictionary support to `cudf::minmax`
- PR #6734 Binary operations support for decimal type in cudf Java
- PR #6761 Add Java/JNI bindings for round
- PR #6776 Use `void` return type for kernel wrapper functions instead of returning `cudaError_t`
- PR #6786 Add nested type support to ColumnVector#getDeviceMemorySize
- PR #6780 Move `cudf::cast` tests to separate test file
- PR #6809 size_type overflow checking when concatenating columns
- PR #6789 Rename `unary_op` to `unary_operator`
- PR #6770 Support building decimal columns with Table.TestBuilder
- PR #6815 Add wildcard path support to `read_parquet`
- PR #6800 Push DeviceScalar to cython-only
- PR #6822 Split out `cudf::distinct_count` from `drop_duplicates.cu`
- PR #6813 Enable `expand=False` in `.str.split` and `.str.rsplit`
- PR #6829 Enable workaround to write categorical columns in csv
- PR #6819 Use CMake 3.19 for RMM when building cuDF jar
- PR #6833 Use settings.xml if existing for internal build
- PR #6839 Handle index when dispatching `__array_function__` and `__array_ufunc__` to cupy for cudf.Series
- PR #6835 Move template param to member var to improve compile of hash/groupby.cu
- PR #6837 Avoid gather when copying strings view from start of strings column
- PR #6859 Move align_ptr_for_type() from cuda.cuh to alignment.hpp
- PR #6807 Refactor `std::array` usage in row group index writing in ORC
- PR #6914 Enable groupby `list` aggregation for strings
- PR #6908 Parquet option for strictly decimal reading
## Bug Fixes
- PR #6446 Fix integer parsing in CSV and JSON for values outside of int64 range
- PR #6506 Fix DateTime type value truncation while writing to csv
- PR #6509 Disable JITIFY log printing
- PR #6517 Handle index equality in `Series` and `DataFrame` equality checks
- PR #6519 Fix end-of-string marking boundary condition in subword-tokenizer
- PR #6543 Handle `np.nan` values in `isna`/`isnull`/`notna`/`notnull`
- PR #6549 Fix memory_usage calls for list columns
- PR #6575 Fix JNI RMM initialize with no pool allocator limit
- PR #6636 Fix orc boolean column corruption issue
- PR #6582 Add missing `device_scalar` stream parameters
- PR #6596 Fix memory usage calculation
- PR #6595 Fix JNI build, broken by to_arrow() signature change
- PR #6601 Fix timezone offset when reading ORC files
- PR #6603 Use correct stream in hash_join.
- PR #6616 Block `fixed_point` `cudf::concatenate` with different scales
- PR #6607 Fix integer overflow in ORC encoder
- PR #6617 Fix JNI native dependency load order
- PR #6621 Fix subword tokenizer metadata for token count equal to max_sequence_length
- PR #6629 Fix JNI CMake
- PR #6633 Fix Java HostColumnVector unnecessarily loading native dependencies
- PR #6643 Fix csv writer handling embedded comma delimiter
- PR #6640 Add error message for unsupported `axis` parameter in DataFrame APIs
- PR #6686 Fix output size for orc read for skip_rows option
- PR #6710 Fix an out-of-bounds indexing error in gather() for lists
- PR #6670 Fix a bug where PTX parser fails to correctly parse a python lambda generated UDF
- PR #6687 Fix issue where index name of caller object is being modified in csv writer
- PR #6735 Fix hash join where row hash values would end up equal to the reserved empty key value
- PR #6696 Fix release_assert.
- PR #6692 Fix handling of empty column name in csv writer
- PR #6693 Fix issue related to `na_values` input in `read_csv`
- PR #6701 Fix issue when `numpy.str_` is given as input to string parameters in io APIs
- PR #6704 Fix leak warnings in JNI unit tests
- PR #6713 Fix missing call to cudaStreamSynchronize in get_value
- PR #6708 Apply `na_rep` to column names in csv writer
- PR #6720 Fix implementation of `dtype` parameter in `cudf.read_csv`
- PR #6721 Add missing serialization methods for ListColumn
- PR #6722 Fix index=False bug in dask_cudf.read_parquet
- PR #6766 Fix race conditions in parquet
- PR #6728 Fix cudf python docs and associated build warnings
- PR #6732 Fix cuDF benchmarks build with static Arrow lib and fix rapids-compose cuDF JNI build
- PR #6742 Fix concat bug in dask_cudf Series/Index creation
- PR #6632 Fix DataFrame initialization from list of dicts
- PR #6767 Fix sort order of parameters in `test_scalar_invalid_implicit_conversion` pytest
- PR #6771 Fix index handling in parquet reader and writer
- PR #6787 Update java reduction APIs to reflect C++ changes
- PR #6790 Fix result representation in groupby.apply
- PR #6794 Fix AVRO reader issues with empty input
- PR #6798 Fix `read_avro` docs
- PR #6824 Fix JNI build
- PR #6826 Fix resource management in Java ColumnBuilder
- PR #6830 Fix categorical scalar insertion
- PR #6844 Fix uint32_t undefined errors
- PR #6854 Fix the parameter order of writeParquetBufferBegin
- PR #6855 Fix `.str.replace_with_backrefs` docs examples
- PR #6853 Fix contiguous split of null string columns
- PR #6860 Move codecov upload to build script
- PR #6861 Fix compile error in type_dispatch_benchmark.cu
- PR #6864 Handle contiguous_split corner case for nested string columns with no children
- PR #6869 Avoid dependency resolution failure in latest version of pip by explicitly specifying versions for dask and distributed
- PR #6806 Force install of local conda artifacts
- PR #6887 Fix typo and `0-d` numpy array handling in binary operation
- PR #6898 Fix missing clone overrides on derived aggregations
- PR #6899 Update JNI to new gather boundary check API
# cuDF 0.16.0 (21 Oct 2020)
## New Features
- PR #5779 Add DataFrame.pivot() and DataFrame.unstack()
- PR #5975 Add strings `filter_characters` API
- PR #5843 Add `filters` parameter to Python `read_parquet` function for filtering row groups
- PR #5974 Use libcudf instead of cupy for `arange` or column creation from a scalar.
- PR #5494 Add Abstract Syntax Tree (AST) evaluator.
- PR #6076 Add durations type support for csv writer, reader
- PR #5874 Add `COLLECT` groupby aggregation
- PR #6330 Add ability to query if PTDS is enabled
- PR #6119 Add support for `dayofweek` property in `DateTimeIndex` and `DatetimeProperties`
- PR #6171 Java and Jni support for Struct columns
- PR #6125 Add support for `Series.mode` and `DataFrame.mode`
- PR #6271 Add support to deep-copy struct columns from struct column-view
- PR #6262 Add nth_element series aggregation with null handling
- PR #6316 Add StructColumn to Python API
- PR #6247 Add `minmax` reduction function
- PR #6232 `Json` and `Avro` benchmarking in python
- PR #6139 Add column conversion to big endian byte list.
- PR #6220 Add `list_topics()` to supply list of underlying Kafka connection topics
- PR #6254 Add `cudf::make_dictionary_from_scalar` factory function
- PR #6277 Add support for LEAD/LAG window functions for fixed-width types
- PR #6318 Add support for reading Struct and map types from Parquet files
- PR #6315 Native code for string-map lookups, for cudf-java
- PR #6302 Add custom dataframe accessors
- PR #6301 Add JNI bindings to nvcomp
- PR #6328 Java and JNI bindings for getMapValue/map_lookup
- PR #6371 Use ColumnViewAccess on Host side
- PR #6392 add hash based groupby mean aggregation
- PR #6511 Add LogicalType to Parquet reader
- PR #6297 cuDF Python Scalars
- PR #6723 Support creating decimal vectors from scalar
## Improvements
- PR #6393 Fix some misspelled words
- PR #6292 Remove individual size tracking from JNI tracking resource adaptor
- PR #5946 Add cython and python support for libcudf `to_arrow` and `from_arrow`
- PR #5919 Remove max_strings and max_chars from nvtext::subword_tokenize
- PR #5956 Add/Update tests for cuStreamz
- PR #5953 Use stable sort when doing a sort groupby
- PR #5973 Link to the Code of Conduct in CONTRIBUTING.md
- PR #6354 Perform shallow clone of external projects
- PR #6388 Add documentation for building `libboost_filesystem.a` from source
- PR #5917 Just use `None` for `strides` in `Buffer`
- PR #6015 Upgrade CUB/Thrust to the latest commit
- PR #5971 Add cuStreamz README for basic installation and use
- PR #6024 Expose selecting multiple ORC stripes to read from Python
- PR #6155 Use the CUB submodule in Thrust instead of fetching CUB separately
- PR #6321 Add option in JNI code to use `arena_memory_resource`
- PR #6002 Add Java bindings for md5
- PR #6311 Switch Thrust to use the NVIDIA/thrust repo
- PR #6060 Add support for all types in `Series.describe` and `DataFrame.describe`
- PR #6051 Add builder API for cuIO `parquet_writer_options` and `parquet_reader_options`
- PR #6067 Added compute codes for aarch64 devices
- PR #5861 `fixed_point` Column Optimization (store `scale` in `data_type`)
- PR #6083 Small cleanup
- PR #6355 Make sure PTDS mode is compatible between libcudf and JNI
- PR #6120 Consolidate functionality in NestedHostColumnVector and HostColumnVector
- PR #6092 Add `name` and `dtype` field to `Index.copy`
- PR #5984 Support gather() on CUDF struct columns
- PR #6103 Small refactor of `print_differences`
- PR #6124 Fix gcc-9 compilation errors on tests
- PR #6122 Add builder API for cuIO `csv_writer_options` and `csv_reader_options`
- PR #6141 Fix typo in custreamz README that was a result of recent changes
- PR #6162 Reduce output parameters in cuio csv and json reader internals
- PR #6146 Added element/validity pair constructors for fixed_width and string wrappers
- PR #6143 General improvements for java arrow IPC.
- PR #6138 Add builder API for cuIO `orc_writer_options` and `orc_reader_options`
- PR #6152 Change dictionary indices to uint32
- PR #6099 Add fluent builder apis to `json_reader_options` and `avro_reader_options`
- PR #6163 Use `Column.full` instead of `scalar_broadcast_to` or `cupy.zeros`
- PR #6176 Fix cmake warnings for GoogleTest, GoogleBenchmark, and Arrow external projects
- PR #6149 Update to Arrow v1.0.1
- PR #6421 Use `pandas.testing` in `cudf`
- PR #6357 Use `pandas.testing` in `dask-cudf`
- PR #6201 Expose libcudf test utilities headers for external project use.
- PR #6174 Data profile support in random data generator; Expand cuIO benchmarks
- PR #6189 Avoid deprecated pyarrow.compat for parquet
- PR #6184 Add cuda 11 dev environment.yml
- PR #6186 Update JNI to look for cub in new location
- PR #6194 Remove unnecessary memory-resource parameter in `cudf::contains` API
- PR #6195 Update JNI to use parquet options builder
- PR #6190 Avoid reading full csv files for metadata in dask_cudf
- PR #6197 Remove librmm dependency for libcudf
- PR #6205 Add dictionary support to cudf::contains
- PR #6213 Reduce subscript usage in cuio in favor of pointer dereferencing
- PR #6230 Support any unsigned int type for dictionary indices
- PR #6202 Add additional parameter support to `DataFrame.drop`
- PR #6214 Small clean up to use more algorithms
- PR #6209 Remove CXX11 ABI handling from CMake
- PR #6223 Remove CXX11 ABI flag from JNI build
- PR #6114 Implement Fuzz tests for cuIO
- PR #6231 Adds `inplace`, `append`, `verify_integrity` fields to `DataFrame.set_index`
- PR #6215 Add cmake command-line setting for spdlog logging level
- PR #6242 Added cudf::detail::host_span and device_span
- PR #6240 Don't shallow copy index in as_index() unless necessary
- PR #6204 Add dockerfile and script to build cuDF jar
- PR #6248 Optimize groupby-agg in dask_cudf
- PR #6243 Move `equals()` logic to `Frame`
- PR #6245 Split up replace.cu into multiple source files
- PR #6218 Increase visibility/consistency for cuIO reader/writer private member variable names
- PR #6268 Add file tags to libcudf doxygen
- PR #6265 Update JNI to use ORC options builder
- PR #6273 Update JNI to use ORC options builder
- PR #6293 Replace shuffle warp reduce with cub calls
- PR #6287 Make java aggregate API follow C++ API
- PR #6303 Use cudf test dtypes so timedelta tests are deterministic
- PR #6329 Update and clean-up gpuCI scripts
- PR #6299 Add lead and lag to java
- PR #6327 Add dictionary specialization to `cudf::replace_nulls`
- PR #6306 Remove cpw macros from page encode kernels
- PR #6375 Parallelize Cython compilation in addition to Cythonization
- PR #6326 Simplify internal csv/json kernel parameters
- PR #6308 Add dictionary support to cudf::scatter with scalar
- PR #6367 Add JNI bindings for byte casting
- PR #6312 Conda recipe dependency cleanup
- PR #6346 Remove macros from CompactProtocolWriter
- PR #6347 Add dictionary support to cudf::copy_range
- PR #6352 Add specific Topic support for Kafka "list_topics()" metadata requests
- PR #6332 Add support to return csv as string when `path=None` in `to_csv`
- PR #6358 Add Parquet fuzz tests with varying function parameters
- PR #6369 Add dictionary support to `cudf::find_and_replace`
- PR #6373 Add dictionary support to `cudf::clamp`
- PR #6377 Update ci/local/README.md
- PR #6383 Removed `move.pxd`, use standard library `move`
- PR #6400 Removed unused variables
- PR #6409 Allow CuPy 8.x
- PR #6407 Add RMM_LOGGING_LEVEL flag to Java docker build
- PR #6425 Factor out csv parse_options creation to pure function
- PR #6438 Fetch nvcomp v1.1.0 for JNI build
- PR #6459 Add `map` method to series
- PR #6379 Add list hashing functionality to MD5
- PR #6498 Add helper method to ColumnBuilder with some nits
- PR #6336 Add `join` functionality in cudf concat
- PR #6653 Replaced SHFL_XOR calls with cub::WarpReduce
- PR #6751 Rework ColumnViewAccess and its usage
- PR #6698 Remove macros from ORC reader and writer
- PR #6782 Replace cuio macros with constexpr and inline functions
## Bug Fixes
- PR #6073 Fix issue related to `.loc` in case of `DatetimeIndex`
- PR #6081 Fix issue where fsspec thinks it has a protocol string
- PR #6100 Fix issue in `Series.factorize` to correctly pick `na_sentinel` value
- PR #6106 Fix datetime limit in csv due to 32-bit arithmetic
- PR #6113 Fix to_timestamp to initialize default year to 1970
- PR #6110 Handle `format` for other input types in `to_datetime`
- PR #6118 Fix Java build for ORC read args change and update package version
- PR #6121 Replace calls to get_default_resource with get_current_device_resource
- PR #6128 Add support for numpy RandomState handling in `sample`
- PR #6134 Fix CUDA C/C++ debug builds
- PR #6137 Fix issue where `np.nan` is being returned instead of `NaT` for datetime/duration types
- PR #6298 Fix gcc-9 compilation error in dictionary/remove_keys.cu
- PR #6172 Fix slice issue with empty column
- PR #6342 Fix array out-of-bound errors in Orc writer
- PR #6154 Raise warnings on row-wise ops only when non-numeric columns are found
- PR #6150 Fix issue related to inferring `datetime64` format with UTC timezone in string data
- PR #6179 `make_elements` copies to `iterator` without adjusting `size`
- PR #6387 Remove extra `std::move` call in java/src/main/native/src/map_lookup.cu
- PR #6182 Fix cmake build of arrow
- PR #6288 Fix gcc-9 compilation error with `ColumnVectorJni.cpp`
- PR #6173 Fix normalize_characters offset logic on sliced strings column
- PR #6159 Fix issue related to empty `Dataframe` with columns input to `DataFrame.append`
- PR #6199 Fix index preservation for dask_cudf parquet
- PR #6207 Remove shared libs from Java sources jar
- PR #6217 Fixed missing bounds checking when storing validity in parquet reader
- PR #6212 Update codeowners file
- PR #6389 Fix RMM logging level so that it can be turned off from the command line
- PR #6157 Fix issue in `Series.concat` when concatenating a non-empty and an empty series
- PR #6226 Add in some JNI checks for null handles
- PR #6183 Fix issues related to `Series.acos` for consistent output regardless of dtype
- PR #6234 Add float infinity parsing in csv reader
- PR #6251 Replace remaining calls to RMM `get_default_resource`
- PR #6257 Support truncated fractions in `cudf::strings::to_timestamp`
- PR #6259 Fix compilation error with GCC 8
- PR #6258 Pin libcudf conda recipe to boost 1.72.0
- PR #6264 Remove include statement for missing rmm/mr/device/default_memory_resource.hpp file
- PR #6296 Handle double quote and escape character in json
- PR #6294 Fix read parquet key error when reading empty pandas DataFrame with cudf
- PR #6285 Removed unsafe `reinterpret_cast` and implicit pointer-to-bool casts
- PR #6281 Fix unreachable code warning in datetime.cuh
- PR #6286 Fix `read_csv` `int32` overflow
- PR #6466 Fix ORC reader issue with decimal type
- PR #6310 Replace a misspelled reference to `master` branch with `main` branch in a comment in changelog.sh
- PR #6289 Revert #6206
- PR #6291 Fix issue related to row-wise operations in `cudf.DataFrame`
- PR #6304 Fix span_tests.cu includes
- PR #6331 Avoids materializing `RangeIndex` during frame concatenation (when not needed)
- PR #6278 Add filter tests for struct columns
- PR #6344 Fix rolling-window count for null input
- PR #6353 Rename `skip_rows` parameter to `skiprows` in `read_parquet`, `read_avro` and `read_orc`
- PR #6361 Detect overflow in hash join
- PR #6386 Removed c-style pointer casts and redundant `reinterpret_cast`s in cudf::io
- PR #6397 Fix `build.sh` when `PARALLEL_LEVEL` environment variable isn't set
- PR #6366 Fix Warp Reduce calls in cuio statistics calculation to account for NaNs
- PR #6345 Fix ambiguous constructor compile error with devtoolset
- PR #6335 Fix conda commands for outdated python version
- PR #6372 Fix issue related to reading a nullable boolean column in `read_parquet` when `engine=pyarrow`
- PR #6378 Fix index handling in `fillna` and incorrect pytests
- PR #6380 Avoid problematic column-index check in dask_cudf.read_parquet test
- PR #6403 Fix error handling in notebook tests
- PR #6408 Avoid empty offset list in hash_partition output
- PR #6402 Update JNI build to pull fixed nvcomp commit
- PR #6410 Fix uses of dangerous default values in Python code
- PR #6424 Check for null data in close for ColumnBuilder
- PR #6426 Fix `RuntimeError` when `np.bool_` is passed as `header` in `to_csv`
- PR #6443 Make java apis getList and getStruct public
- PR #6445 Add `dlpack` to run section of libcudf conda recipe to fix downstream build issues
- PR #6450 Make java Column Builder row agnostic
- PR #6309 Make all CI `.sh` scripts have a consistent set of permissions
- PR #6491 Remove repo URL from Java build-info
- PR #6462 Bug fixes for ColumnBuilder
- PR #6497 Fixes a data corruption issue reading list columns from Parquet files with multiple row groups.
# cuDF 0.15.0 (26 Aug 2020)
## New Features
- PR #5292 Add unsigned int type columns to libcudf
- PR #5287 Add `index.join` support
- PR #5222 Adding clip feature support to DataFrame and Series
- PR #5318 Support/leverage DataFrame.shuffle in dask_cudf
- PR #4546 Support pandas 1.0+
- PR #5331 Add `cudf::drop_nans`
- PR #5327 Add `cudf::cross_join` feature
- PR #5204 Concatenate strings columns using row separator as strings column
- PR #5342 Add support for `StringMethods.__getitem__`
- PR #5358 Add zero-copy `column_view` cast for compatible types
- PR #3504 Add External Kafka Datasource
- PR #5356 Use `size_type` instead of `scalar` in `cudf::repeat`.
- PR #5397 Add internal implementation of nested loop equijoins.
- PR #5303 Add slice_strings functionality using delimiter string
- PR #5394 Enable cast and binops with duration types (builds on PR 5359)
- PR #5301 Add Java bindings for `zfill`
- PR #5411 Enable metadata collection for chunked parquet writer
- PR #5359 Add duration types
- PR #5364 Validate array interface during buffer construction
- PR #5418 Add support for `DataFrame.info`
- PR #5425 Add Python `Groupby.rolling()`
- PR #5434 Add nvtext function generate_character_grams
- PR #5442 Add support for `cudf.isclose`
- PR #5444 Remove usage of deprecated RMM APIs and headers.
- PR #5463 Add `.str.byte_count` python api and cython(bindings)
- PR #5488 Add plumbings for `.str.replace_tokens`
- PR #5502 Add Unsigned int types support in dlpack
- PR #5497 Add `.str.isinteger` & `.str.isfloat`
- PR #5511 Port of clx subword tokenizer to cudf
- PR #5528 Add unsigned int reading and writing support to parquet
- PR #5510 Add support for `cudf.Index` to create Indexes
- PR #5618 Add Kafka as a cudf datasource
- PR #5668 Adding support for `cudf.testing`
- PR #5460 Add support to write to remote filesystems
- PR #5454 Add support for `DataFrame.append`, `Index.append`, `Index.difference` and `Index.empty`
- PR #5536 Parquet reader - add support for multiple sources
- PR #5654 Adding support for `cudf.DataFrame.sample` and `cudf.Series.sample`
- PR #5607 Add Java bindings for duration types
- PR #5612 Add `is_hex` strings API
- PR #5625 String conversion to and from duration types
- PR #5659 Added support for rapids-compose for Java bindings and other enhancements
- PR #5637 Parameterize Null comparator behaviour in Joins
- PR #5623 Add `is_ipv4` strings API
- PR #5723 Parquet reader - add support for nested LIST columns
- PR #5669 Add support for reading JSON files with missing or out-of-order fields
- PR #5674 Support JIT backend on PowerPC64
- PR #5629 Add `ListColumn` and `ListDtype`
- PR #5658 Add `filter_tokens` nvtext API
- PR #5666 Add `filter_characters_of_type` strings API
- PR #5778 Add support for `cudf::table` to `arrow::Table` and `arrow::Table` to `cudf::table`
- PR #5673 Always build and test with per-thread default stream enabled in the GPU CI build
- PR #5438 Add MD5 hash support
- PR #5704 Initial `fixed_point` Column Support
- PR #5716 Add `double_type_dispatcher` to libcudf
- PR #5739 Add `nvtext::detokenize` API
- PR #5645 Enforce pd.NA and Pandas nullable dtype parity
- PR #5729 Create nvtext normalize_characters API from the subword_tokenize internal function
- PR #5572 Add `cudf::encode` API.
- PR #5767 Add `nvtext::porter_stemmer_measure` and `nvtext::is_letter` APIs
- PR #5753 Add `cudf::lists::extract_list_element` API
- PR #5568 Add support for `Series.keys()` and `DataFrame.keys()`
- PR #5782 Add Kafka support to custreamz
- PR #5642 Add `GroupBy.groups()`
- PR #5811 Add `nvtext::edit_distance` API
- PR #5789 Add groupby support for duration types
- PR #5810 Make Cython subdirs packages and simplify package_data
- PR #6005 Add support for Ampere
- PR #5807 Initial support for struct columns
- PR #5817 Enable more `fixed_point` unit tests by introducing "scale-less" constructor
- PR #5822 Add `cudf_kafka` to `custreamz` run time conda dependency and fix bash syntax issue
- PR #5903 Add duration support for Parquet reader, writer
- PR #5845 Add support for `mask_to_bools`
- PR #5851 Add support for `Index.sort_values`
- PR #5904 Add slice/split support for LIST columns
- PR #5857 Add dtypes information page in python docs
- PR #5859 Add conversion from `fixed_point` to `bool`
- PR #5781 Add duration types support in cudf(python/cython)
- PR #5815 LIST Support for ColumnVector
- PR #5931 Support for `add_calendrical_months` API
- PR #5992 Add support for `.dt.strftime`
- PR #6075 Parquet writer - add support for nested LIST columns
## Improvements
- PR #5492 compile_udf: compile straight to PTX instead of using @jit
- PR #5605 Automatically flush RMM allocate/free logs in JNI
- PR #5632 Switch JNI code to use `pool_memory_resource` instead of CNMeM
- PR #5486 Link Boost libraries statically in the Java build
- PR #5479 Link Arrow libraries statically
- PR #5414 Use new release of Thrust/CUB in the JNI build
- PR #5403 Update required CMake version to 3.14 in contribution guide
- PR #5245 Add column reduction benchmark
- PR #5315 Use CMake `FetchContent` to obtain `cub` and `thrust`
- PR #5398 Use CMake `FetchContent` to obtain `jitify` and `libcudacxx`
- PR #5268 Rely on NumPy arrays for out-of-band pickling
- PR #5288 Drop `auto_pickle` decorator
- PR #5231 Type `Buffer` as `uint8`
- PR #5305 Add support for `numpy`/`cupy` array in `DataFrame` construction
- PR #5308 Coerce frames to `Buffer`s in deserialization
- PR #5309 Handle host frames in serialization
- PR #5312 Test serializing `Series` after `slice`
- PR #5248 Support interleave_columns for string types
- PR #5332 Remove outdated dask-xgboost docs
- PR #5349 Improve libcudf documentation CSS style
- PR #5317 Optimize fixed_point rounding shift for integers
- PR #5386 Remove `cub` from `include_dirs` in `setup.py`
- PR #5373 Remove legacy nvstrings/nvcategory/nvtext
- PR #5362 Remove dependency on `rmm._DevicePointer`
- PR #5302 Add missing comparison operators to `fixed_point` type
- PR #5824 Mark host frames as not needing to be writeable
- PR #5354 Split Dask deserialization methods by dask/cuda
- PR #5363 Handle `0-dim` inputs while broadcasting to a column
- PR #5396 Remove legacy tests env variable from build.sh
- PR #5374 Port nvtext character_tokenize API to libcudf
- PR #5389 Expose typed accessors for Java HostMemoryBuffer
- PR #5379 Avoid chaining `Buffer`s
- PR #5387 Port nvtext replace_tokens API to libcudf
- PR #5381 Change numpy usages to cupy in `10min.ipynb`
- PR #5408 Update pyarrow and arrow-cpp to 0.17.1
- PR #5366 Add benchmarks for cuIO writers
- PR #5913 Call cudaMemcpyAsync/cudaMemsetAsync in JNI
- PR #5405 Add Error message to `StringColumn.unary_operator`
- PR #5424 Add python plumbing for `.str.character_tokenize`
- PR #5420 Aligning signature of `Series.value_counts` to Pandas
- PR #5535 Update document for XGBoost usage with dask-cuda
- PR #5431 Adding support for unsigned int
- PR #5426 Refactor strings code to minimize calls to regex
- PR #5433 Add support for column inputs in `strings::starts_with` and `strings::ends_with`
- PR #5427 Add Java bindings for unsigned data types
- PR #5429 Improve text wrapping in libcudf documentation
- PR #5443 Remove unused `is_simple` trait
- PR #5441 Update Java HostMemoryBuffer to only load native libs when necessary
- PR #5452 Add support for strings conversion using negative timestamps
- PR #5437 Improve libcudf join documentation
- PR #5458 Install meta packages for dependencies
- PR #5467 Move doc customization scripts to Jenkins
- PR #5468 Add cudf::unique_count(table_view)
- PR #5482 Use rmm::device_uvector in place of rmm::device_vector in copy_if
- PR #5483 Add NVTX range calls to dictionary APIs
- PR #5477 Add `is_index_type` trait
- PR #5487 Use sorted lists instead of sets for pytest parameterization
- PR #5491 Allow building libcudf in a custom directory
- PR #5501 Support only unsigned types for categorical column codes
- PR #5570 Add Index APIs such as `Int64Index`, `UInt64Index` and others
- PR #5503 Change `unique_count` to `distinct_count`
- PR #5514 `convert_datetime.cu` Small Cleanup
- PR #5496 Rename .cu tests (zero cuda kernels) to .cpp files
- PR #5518 Split iterator and gather tests to speed up building tests
- PR #5526 Change `type_id` to enum class
- PR #5559 Java APIs for missing date/time operators
- PR #5582 Add support for axis and other parameters to `DataFrame.sort_index` and fix a number of other issues
- PR #5562 Add missing join type for java
- PR #5584 Refactor `CompactProtocolReader::InitSchema`
- PR #5591 Add `__arrow_array__` protocol and raise a descriptive error message
- PR #5635 Add cuIO reader benchmarks for CSV, ORC and Parquet
- PR #5601 Instantiate Table instances in `Frame._concat` to avoid `DF.insert()` overhead
- PR #5602 Add support for concatenation of `Series` & `DataFrame` in `cudf.concat` when `axis=0`
- PR #5603 Refactor JIT `parser.cpp`
- PR #5643 Update `isort` to 5.0.4
- PR #5648 OO interface for hash join with explicit `build/probe` semantic
- PR #5662 Make Java ColumnVector(long nativePointer) constructor public
- PR #5681 Pin black, flake8 and isort
- PR #5679 Use `pickle5` to test older Python versions
- PR #5684 Use `pickle5` in `Serializable` (when available)
- PR #5419 Support rolling, groupby_rolling for durations
- PR #5687 Change strings::split_record to return a lists column
- PR #5708 Add support for `dummy_na` in `get_dummies`
- PR #5709 Update java build to help cu-spacial with java bindings
- PR #5713 Remove old NVTX utilities
- PR #5726 Replace use of `assert_frame_equal` in tests with `assert_eq`
- PR #5720 Replace owning raw pointers with std::unique_ptr
- PR #5702 Add inherited methods to python docs and other docs fixes
- PR #5733 Add support for `size` property in `DataFrame`/ `Series` / `Index`/ `MultiIndex`
- PR #5735 Force timestamp creation only with duration
- PR #5743 Reduce number of test cases in concatenate benchmark
- PR #5748 Disable `tolist` API in `Series` & `Index` and add `tolist` dispatch in `dask-cudf`
- PR #5744 Reduce number of test cases in reduction benchmark
- PR #5756 Switch JNI code to use the RMM owning wrapper
- PR #5725 Integrate Gbenchmarks into CI
- PR #5752 Add cuDF internals documentation (ColumnAccessor)
- PR #5759 Fix documentation describing JIT cache default location
- PR #5780 Add Java bindings for pad
- PR #5775 Update dask_cudf.read_parquet to align with upstream improvements
- PR #5785 Enable computing views of ListColumns
- PR #5791 Get nullable_pd_dtype from kwargs if provided in assert_eq
- PR #5786 JNI Header Cleanup for cuSpatial
- PR #5800 Expose arrow datasource instead of directly taking a RandomAccessFile
- PR #5795 Clarify documentation on Boost dependency
- PR #5803 Add in Java support for the repeat command
- PR #5806 Expose the error message from native exception when throwing an OOM exception
- PR #5825 Enable ORC statistics generation by default
- PR #5771 Enable gather/slicing/joins with ListColumns in Python
- PR #5834 Add support for dictionary column in concatenate
- PR #5832 Make dictionary_wrapper constructor from a value explicit
- PR #5833 Pin `dask` and `distributed` version to `2.22.0`
- PR #5856 Bump Pandas support to >=1.0,<1.2
- PR #5855 Java interface to limit RMM maximum pool size
- PR #5853 Disable `fixed_point` for use in `copy_if`
- PR #5854 Raise informative error in `DataFrame.iterrows` and `DataFrame.itertuples`
- PR #5864 Replace cnmem with pool_memory_resource in test/benchmark fixtures
- PR #5863 Explicitly require `ucx-py` on CI
- PR #5879 Added support for sub-types and object wrappers in concat()
- PR #5884 Use S3 bucket directly for benchmark plugin
- PR #5881 Add in JVM extractListElement and stringSplitRecord
- PR #5885 Add in java support for merge sort
- PR #5894 Small code improvement / cleanup
- PR #5899 Add in gather support for Java
- PR #5906 Add macros for showing line of failures in unit tests
- PR #5933 Add in APIs to read/write arrow IPC formatted data from java
- PR #3918 Update cuDF internals doc
- PR #5970 Map data to pandas through arrow, always
- PR #6012 Remove `cudf._cuda` and replace usages with `rmm._cuda`
- PR #6045 Parametrize parquet_reader_list tests
- PR #6053 Import traits.hpp for cudftestutils consumers
## Bug Fixes
- PR #6034 Specify `--basetemp` for `py.test` run
- PR #5793 Fix leak in mutable_table_device_view by deleting _descendant_storage in table_device_view_base::destroy
- PR #5525 Make sure to allocate bitmasks of string columns only once
- PR #5336 Initialize conversion tables on a per-context basis
- PR #5283 Fix strings::ipv4_to_integers overflow to negative
- PR #5269 Explicitly require NumPy
- PR #5271 Fix issue when different dtype values are passed to `.cat.add_categories`
- PR #5333 Fix `DataFrame.loc` issue with list like argument
- PR #5299 Update package version for Java bindings
- PR #5300 Add support to ignore `None` in `cudf.concat` input
- PR #5334 Fix pickling sizeof test
- PR #5337 Fix broken alias from DataFrame.{at,iat} to {loc, iloc}
- PR #5347 Fix APPLY_BOOLEAN_MASK_BENCH segfault
- PR #5368 Fix loc indexing issue with `datetime` type index
- PR #5367 Fix API for `cudf::repeat` in `cudf::cross_join`
- PR #5377 Handle array of cupy scalars in to_column
- PR #5326 Fix `DataFrame.__init__` for list of scalar inputs and related dask issue
- PR #5383 Fix cython `type_id` enum mismatch
- PR #5982 Fix gcc-9 compile errors under CUDA 11
- PR #5382 Fix CategoricalDtype equality comparisons
- PR #5989 Fix gcc-9 warnings on narrowing conversion
- PR #5385 Fix index issues in `DataFrame.from_gpu_matrix`
- PR #5390 Fix Java data type IDs and string interleave test
- PR #5392 Fix documentation links
- PR #5978 Fix option to turn off NVTX
- PR #5410 Fix compile warning by disallowing bool column type for slice_strings
- PR #5404 Fix issue with column creation when chunked arrays are passed
- PR #5409 Use the correct memory resource when creating empty null masks
- PR #5399 Fix cpp compiler warnings of unreachable code
- PR #5439 Fix nvtext ngrams_tokenize performance for multi-byte UTF8
- PR #5446 Fix compile error caused by out-of-date PR merge (4990)
- PR #5983 Fix JNI gcc-9 compile error under CUDA 11
- PR #5423 Fix any() reduction ignore nulls
- PR #5459 Fix str.translate to convert table characters to UTF-8
- PR #5480 Fix merge sort docs
- PR #5465 Fix benchmark out of memory errors due to multiple initialization
- PR #5473 Fix RLEv2 patched base in ORC reader
- PR #5472 Fix str concat issue with indexed series
- PR #5478 Fix `loc` and `iloc` doc
- PR #5484 Ensure flat index after groupby if nlevels == 1
- PR #5489 Fix drop_nulls/boolean_mask corruption for large columns
- PR #5504 Remove some java assertions that are not needed
- PR #5516 Update gpuCI image in local build script
- PR #5529 Fix issue with negative timestamp in orc writer
- PR #5523 Handle `dtype` of `Buffer` objects when not passed explicitly
- PR #5534 Fix the java build around type_id
- PR #5564 Fix CudfEngine.read_metadata API in dask_cudf
- PR #5537 Fix issue related to using `set_index` on a string series
- PR #5561 Fix `copy_bitmask` issue with offset
- PR #5609 Fix loc and iloc issue with column like input
- PR #5578 Fix getattr logic in GroupBy
- PR #5490 Fix python column view
- PR #5613 Fix assigning an equal length object into a masked out Series
- PR #5608 Fix issue related to string types being represented as binary types
- PR #5619 Fix issue related to typecasting when using a `CategoricalDtype`
- PR #5649 Fix issue when empty Dataframe with index are passed to `cudf.concat`
- PR #5644 Fix issue related to Dataframe init when passing in `columns`
- PR #5340 Disable iteration in cudf objects and add support for `DataFrame` initialization with list of `Series`
- PR #5663 Move Duration types under Timestamps in doxygen Modules page
- PR #5664 Update conda upload versions for new supported CUDA/Python
- PR #5656 Fix issue with incorrect docker image being used in local build script
- PR #5671 Fix chunksize issue with `DataFrame.to_csv`
- PR #5672 Fix crash in parquet writer while writing large string data
- PR #5675 Allow lists_column_wrappers to be constructed from incomplete hierarchies.
- PR #5691 Raise error on incompatible mixed-type input for a column
- PR #5692 Fix compilation issue with gcc 7.4.0 and CUDA 10.1
- PR #5693 Add fix missing from PR 5656 to update local docker image to py3.7
- PR #5703 Small fix for dataframe constructor with cuda array interface objects that don't have `descr` field
- PR #5727 Fix `Index.__repr__` to allow representation of null values
- PR #5719 Fix Frame._concat() with categorical columns
- PR #5736 Disable unsigned type in ORC writer benchmarks
- PR #5745 Update JNI cast for inability to cast timestamp and integer types
- PR #5750 Add RMM_ROOT/include to the spdlog search path in JNI build
- PR #5763 Update Java slf4j version to match Spark 3.0
- PR #5816 Always preserve list column hierarchies across operations.
- PR #5766 Fix issue related to `iloc` and slicing a `DataFrame`
- PR #5827 Revert fallback for `tolist` being absent
- PR #5774 Add fallback for when `tolist` is absent
- PR #5319 Disallow SUM and specialize MEAN of timestamp types
- PR #5797 Fix a missing data issue in some Parquet files
- PR #5787 Fix column create from dictionary column view
- PR #5764 Remove repetition of install instructions
- PR #5926 Fix SeriesGroupBy.nunique() to return a Series
- PR #5813 Fix normalizer exception with all-null strings column
- PR #5820 Fix ListColumn.to_arrow for all null case
- PR #5837 Bash syntax error in prebuild.sh preventing `cudf_kafka` and `libcudf_kafka` from being uploaded to Anaconda
- PR #5841 Added custreamz functions that were missing in interface layer
- PR #5844 Fix `.str.cat` when objects with different index are passed
- PR #5849 Modify custreamz api to integrate seamlessly with python streamz
- PR #5866 cudf_kafka python version inconsistencies in Anaconda packages
- PR #5872 libcudf_kafka r_path is causing docker build failures on centos7
- PR #5869 Fix bug in parquet writer in writing string column with offset
- PR #5910 Propagate `CUDA` insufficient driver error to the user
- PR #5914 Link CUDA against libcudf_kafka
- PR #5895 Do not break kafka client consumption loop on local client timeout
- PR #5915 Fix reference count on Java DeviceMemoryBuffer after contiguousSplit
- PR #5941 Fix issue related to `string` to `datetime64` column typecast
- PR #5927 Fix return type of `MultiIndex.argsort`
- PR #5942 Fix JIT cache multiprocess test failure
- PR #5929 Revised assertEquals for List Columns in java tests
- PR #5947 Fix null count for child device column vector
- PR #5951 Fix mkdir error in benchmark build
- PR #5949 Find Arrow include directory for JNI builds
- PR #5964 Fix API doc page title tag
- PR #5981 Handle `nat` in `fillna` for datetime and timedelta types
- PR #6016 Fix benchmark fixture segfault
- PR #6003 Fix concurrent JSON reads crash
- PR #6032 Change black version to 19.10b0 in .pre-commit-config.yaml
- PR #6041 Fix Java memory resource handler to rethrow original exception object
- PR #6057 Fix issue in parquet reader with reading columns out of file-order
- PR #6098 Patch Thrust to workaround CUDA_CUB_RET_IF_FAIL macro clearing CUDA errors
# cuDF 0.14.0 (03 Jun 2020)
## New Features
- PR #5042 Use RMM for Numba
- PR #4472 Add new `partition` API to replace `scatter_to_tables`.
- PR #4626 LogBase binops
- PR #4750 Normalize NANs and Zeroes (JNI Bindings)
- PR #4689 Compute last day of the month for a given date
- PR #4771 Added in an option to statically link against cudart
- PR #4788 Add cudf::day_of_year API
- PR #4789 Disallow timestamp sum and diffs via binary ops
- PR #4815 Add JNI total memory allocated API
- PR #4906 Add Java bindings for interleave_columns
- PR #4900 Add `get_element` to obtain scalar from a column given an index
- PR #4938 Add Java bindings for strip
- PR #4923 Add Java and JNI bindings for string split
- PR #4972 Add list_view (cudf::LIST) type
- PR #4990 Add lists_column_view, list_column_wrapper, lists support for concatenate
- PR #5073 gather support for cudf::LIST columns
- PR #5004 Added a null considering min/max binary op
- PR #4992 Add Java bindings for converting nans to nulls
- PR #4975 Add Java bindings for first and last aggregate expressions based on nth
- PR #5036 Add positive remainder binary op functionality
- PR #5055 Add atan2 binary op
- PR #5099 Add git commit hook for clang-format
- PR #5072 Adding cython binding to `get_element`
- PR #5092 Add `cudf::replace_nans`
- PR #4881 Support row_number in rolling_window
- PR #5068 Add Java bindings for arctan2
- PR #5132 Support out-of-band buffers in Python pickling
- PR #5139 Add ``Serializable`` ABC for Python
- PR #5149 Add Java bindings for PMOD
- PR #5153 Add Java bindings for extract
- PR #5196 Add Java bindings for NULL_EQUALS, NULL_MAX and NULL_MIN
- PR #5192 Add support for `cudf.to_datetime`
- PR #5203 Add Java bindings for is_integer and is_float
- PR #5205 Add CI test to check for existence of libcudf and libnvstrings headers in meta.yml
- PR #5239 Support for custom cuIO datasource classes
- PR #5293 Add Java bindings for replace_with_backrefs
## Improvements
- PR #5235 Make DataFrame.clean_renderable_dataframe() and DataFrame.get_renderable_dataframe non-public methods
- PR #4995 Add CMake option for per-thread default stream
- PR #5033 Fix Numba deprecations warnings with Numba 0.49+
- PR #4950 Fix import errors with Numba 0.49+
- PR #4825 Update the iloc exp in dataframe.py
- PR #4450 Parquet writer: add parameter to retrieve the raw file metadata
- PR #4531 Add doc note on conda `channel_priority`
- PR #4479 Adding cuda 10.2 support via conda environment file addition
- PR #4486 Remove explicit template parameter from detail::scatter.
- PR #4471 Consolidate partitioning functionality into a single header.
- PR #4483 Add support fill() on dictionary columns
- PR #4498 Adds in support for chunked writers to java
- PR #4073 Enable contiguous split java test
- PR #4606 Fix `scan` unit test and upgrade to more appropriate algorithms
- PR #4527 Add JNI and java bindings for `matches_re`
- PR #4532 Parquet reader: add support for multiple pandas index columns
- PR #4599 Add Java and JNI bindings for string replace
- PR #4655 Raise error for list like dtypes in cudf
- PR #4548 Remove string_view is_null method
- PR #4645 Add Alias for `kurtosis` as `kurt`
- PR #4703 Optimize strings concatenate for many columns
- PR #4769 Remove legacy code from libcudf
- PR #4668 Add Java bindings for log2/log10 unary ops and log_base binary op
- PR #4616 Enable different RMM allocation modes in unit tests
- PR #4520 Fix several single char -> single char case mapping values. Add support for single -> multi char mappings.
- PR #4700 Expose events and more stream functionality in java
- PR #4699 Make Java's MemoryBuffer public and add MemoryBuffer.slice
- PR #4691 Fix compiler argument syntax for ccache
- PR #4792 Port `gather`, `scatter`, and `type_dispatcher` benchmarks to libcudf++
- PR #3581 Remove `bool8`
- PR #4692 Add GPU and CUDA validations
- PR #4705 Add quantile cython bindings
- PR #4627 Remove legacy Cython
- PR #4688 Add Java count aggregation to include null values
- PR #4331 Improved test for double that considers an epsilon
- PR #4731 Avoid redundant host->device copies when reading the entire CSV/JSON file
- PR #4739 Add missing aggregations for cudf::experimental::reduce
- PR #4738 Remove stop-gaps in StringMethods and enable related tests
- PR #4745 Fix `fsspec` related issue and upgrade `fsspec` version
- PR #4779 Allow reading arbitrary stripes/rowgroup lists in CPP columnar readers
- PR #4766 Update to use header-only NVTX v3 and remove need to link against nvtx.
- PR #4716 Remove direct calls to RMM_ALLOC/RMM_FREE
- PR #4765 Add in java support for sequence
- PR #4772 Cleanup `dask_cudf` `to_parquet` and enable `"_metadata"` creation
- PR #4733 Fix `isin` docs for `DataFrame`, `Series`, `Index`, and add `DataFrame.isin` support
- PR #4767 Remove linking against `gtest_main` and `gmock_main` in unit tests
- PR #4660 Port `cudf::partition` api to python/cython
- PR #4799 Remove null_count() and has_nulls() from column_device_view
- PR #4778 Remove `scatter_to_tables` from libcudf, cython and python
- PR #4783 Add support for child columns to mutable_column_device_view
- PR #4802 Refactor `cudf::transpose` to increase performance.
- PR #4776 Improve doxygen comments for libcudf string/timestamp conversion formats
- PR #4793 Add `cudf._cuda` to setup.py
- PR #4790 Replace the use of deprecated rmm APIs in the test environment
- PR #4809 Improve libcudf doc rendering and add a new main page
- PR #4811 Add precision to subsecond specifier in timestamp/string conversion format
- PR #4543 Add `inplace` parameter support for `Series.replace` & `DataFrame.replace`
- PR #4816 Remove java API use of deprecated RMM APIs
- PR #4817 Fix `fixed_point` documentation
- PR #4844 Change Doxygen color to RAPIDS purple and documentation improvement
- PR #4840 Add docs for `T`, `empty` & `values`
- PR #4841 Remove unused `single_lane_block_popc_reduce` function
- PR #4842 Added Java bindings for titlizing a String column
- PR #4847 Replace legacy NVTX calls with "standalone" NVTX bindings calls
- PR #4851 Performance improvements relating to `concat`
- PR #4852 Add NVTX range calls to strings and nvtext APIs
- PR #4849 Update Java bindings to use new NVTX API
- PR #4845 Add CUDF_FUNC_RANGE to top-level cuIO function APIs
- PR #4848 Side step `unique_count` calculation in `scatter_by_map`
- PR #4863 Create is_integer/is_float functions for checking characters before calling to_integers/to_floats
- PR #4864 Add support for `__array__` method in cuDF
- PR #4853 Added CUDA_TRY to multiple places in libcudf code
- PR #4870 Add chunked parquet file writing from python
- PR #4865 Add docs and clarify limitations of `applymap`
- PR #4867 Parquet reader: coalesce adjacent column chunk reads
- PR #4871 Add in the build information when building the java jar file
- PR #4869 Expose contiguous table when deserializing from Java
- PR #4878 Remove obsolete string_from_host utility
- PR #4873 Prevent mutable_view() from invoking null count
- PR #4806 Modify doc and correct cupy array conversions in `10min-cudf-cupy.ipynb`
- PR #4877 Fix `DataFrame.mask` and align `mask` & `where` behavior with pandas
- PR #4884 Add more NVTX annotations in cuDF Python
- PR #4902 Use ContextDecorator instead of contextmanager for nvtx.annotate
- PR #4894 Add annotations for the `.columns` property and setter
- PR #4901 Improve unit tests for casting Java numeric types to string
- PR #4888 Handle dropping of nan's & nulls using `skipna` parameter in Statistical reduction ops
- PR #4903 Improve internal documentation of cudf-io compression/decompression kernels
- PR #4905 Get decorated function name as message when annotating
- PR #4907 Reuse EventAttributes across NVTX annotations
- PR #4912 Drop old `valid` check in `element_indexing`
- PR #4924 Properly handle npartition argument in rearrange_by_hash
- PR #4918 Adding support for `cupy.ndarray` in `series.loc`
- PR #4909 Added ability to transform a column using cuda method in Java bindings
- PR #3259 Add .clang-format file & format all files
- PR #4943 Fix-up error handling in GPU detection
- PR #4917 Add support for casting unsupported `dtypes` of same kind
- PR #4928 Misc performance improvements for `scatter_by_map`
- PR #4927 Use stack for memory in `deviceGetName`
- PR #4933 Enable nop annotate
- PR #4929 Java methods ensure calling thread's CUDA device matches RMM device
- PR #4956 Dropping `find_first_value` and `find_last_value`
- PR #4962 Add missing parameters to `DataFrame.replace` & `Series.replace`
- PR #4960 Return the result of `to_json`
- PR #4963 Use `cudaDeviceAttr` in `getDeviceAttribute`
- PR #4953 Add documentation for supported NVIDIA GPUs and CUDA versions for cuDF
- PR #4967 Add more comments to top-level gpuinflate and debrotli kernels
- PR #4968 Add CODE_OF_CONDUCT.md
- PR #4980 Change Java HostMemoryBuffer default to prefer pinned memory
- PR #4994 clang-format "cpp/tests" directory
- PR #4993 Remove Java memory prediction code
- PR #4985 Add null_count to Python Column ctors and use already computed null_count when possible
- PR #4998 Clean up dispatch of aggregation methods in result_cache
- PR #5000 Performance improvements in `isin` and dask_cudf backend
- PR #5002 Fix Column.__reduce__ to accept `null_count`
- PR #5006 Add Java bindings for strip, lstrip and rstrip
- PR #5047 Add Cython binding for libcudf++ CSV reader
- PR #5027 Move nvstrings standalone docs pages to libcudf doxygen pages
- PR #4947 Add support for `CategoricalColumn` to be type-casted with different categories
- PR #4822 Add constructor to `pq_chunked_state` to enable using RAII idiom
- PR #5024 CSV reader input stage optimizations
- PR #5061 Add support for writing parquet to python file-like objects
- PR #5034 Use loc to apply boolmask to frame efficiently when constructing query result
- PR #5039 Make `annotate` picklable
- PR #5045 Remove call to `unique()` in concat when `axis=1`
- PR #5023 Object oriented join and column agnostic typecasting
- PR #5049 Add grouping of libcudf apis into doxygen modules
- PR #5069 Remove duplicate documentation from detail headers
- PR #5075 Add simple row-group aggregation mechanism in dask_cudf read_parquet
- PR #5084 Improve downcasting in `Series.label_encoding()` to reduce memory usage
- PR #5085 Print more precise numerical strings in unit tests
- PR #5028 Add Docker 19 support to local gpuci build
- PR #5093 Add `.cat.as_known` related test in `dask_cudf`
- PR #5100 Add documentation on libcudf doxygen guidelines
- PR #5106 Add detail API for `cudf::concatenate` with tables
- PR #5104 Add missing `.inl` files to clang-format and git commit hook
- PR #5112 Adding `htoi` and `ip2int` support to `StringMethods`
- PR #5101 Add POSITION_INDEPENDENT_CODE flag to static cudftestutil library
- PR #5109 Update CONTRIBUTING.md for `clang-format` pre-commit hook
- PR #5054 Change String typecasting to be inline with Pandas
- PR #5123 Display more useful info on `clang-format` CI Failure
- PR #5058 Adding cython binding for CSV writer
- PR #5156 Raise error when applying boolean mask containing null values.
- PR #5137 Add java bindings for getSizeInBytes in DType
- PR #5194 Update Series.fillna to reflect dtype behavior
- PR #5159 Add `make_meta_object` in `dask_cudf` backend and add `str.split` test
- PR #5147 Use logging_resource_adaptor from RMM in the JNI code
- PR #5184 Fix style checks
- PR #5198 Add detail headers for strings converter functions
- PR #5199 Add index support in `DataFrame.query`
- PR #5227 Refactor `detail::gather` API to make use of scoped enumerators
- PR #5218 Reduce memory usage when categorifying column with null values.
- PR #5209 Add `nan_as_null` support to `cudf.from_pandas`
- PR #5207 Break up backref_re.cu into multiple source files to improve compile time
- PR #5155 Fix cudf documentation misspellings
- PR #5208 Port search and join benchmark to libcudf++
- PR #5214 Move docs build script into repository
- PR #5219 Add per context cache for JIT kernels
- PR #5250 Improve `to_csv()` support for writing to buffers
- PR #5233 Remove experimental namespace used during libcudf++ refactor
- PR #5213 Documentation enhancements to `cudf` python APIs
- PR #5251 Fix more misspellings in cpp comments and strings
- PR #5261 Add short git commit to conda package name
- PR #5254 Deprecate nvstrings, nvcategory and nvtext
- PR #5270 Add support to check for "NaT" and "None" strings while typecasting to `datetime64`
- PR #5298 Remove unused native deps from java library
- PR #5216 Make documentation uniform for params
## Bug Fixes
- PR #5221 Fix the use of user-provided resource on temporary values
- PR #5181 Allocate null count using the default resource in `copy_if`
- PR #5141 Use user-provided resource correctly in `unary_operation()` and `shift()`
- PR #5064 Fix `hash()` and `construct_join_output_df()` to use user-provided memory resource correctly
- PR #4386 Update Java package to 0.14
- PR #4466 Fix merge key column sorting
- PR #4402 Fix `cudf::strings::join_strings` logic with all-null strings and null narep
- PR #4610 Fix validity bug in string scalar factory
- PR #4570 Fixing loc ordering issue in dataframe
- PR #4612 Fix invalid index handling in `cudf::dictionary::add_keys` call to gather
- PR #4614 Fix cuda-memcheck errors found in `column_tests.cu` and `copying/utility_tests.cu`
- PR #4639 Fix java column of empty strings issue
- PR #4613 Fix issue related to downcasting in `.loc`
- PR #4615 Fix potential OOB write in ORC writer compression stage
- PR #4587 Fix non-regex libcudf contains methods to return true when target is an empty string
- PR #4617 Fix memory leak in aggregation object destructor
- PR #4633 String concatenation fix in `DataFrame.rename`
- PR #4609 Fix to handle `Series.factorize` when index is set
- PR #4659 Fix strings::replace_re handling empty regex pattern
- PR #4652 Fix misaligned error when computing regex device structs
- PR #4651 Fix hashing benchmark missing includes
- PR #4672 Fix docs for `value_counts` and update test cases
- PR #4672 Fix `__setitem__` handling list of column names
- PR #4673 Fix regex infinite loop while parsing invalid quantifier pattern
- PR #4679 Fix comments for make_dictionary_column factory functions
- PR #4711 Fix column leaks in Java unit test
- PR #4721 Fix string binop to update nulls appropriately
- PR #4722 Fix strings::pad when using pad::both with odd width
- PR #4743 Fix loc issue with Multiindex on DataFrame and Series
- PR #4725 Fix issue with Java not setting GPU on background thread
- PR #4701 Fix issue related to mixed input types in `as_column`
- PR #4748 Fix strings::all_characters_of_type to allow verify-types mask
- PR #4747 Fix random failures of decompression gtests
- PR #4749 Setting `nan_as_null=True` while creating a column in DataFrame creation
- PR #4761 Fix issues with `nan_as_null` in certain cases
- PR #4650 Fix type mismatch & result format issue in `searchsorted`
- PR #4755 Fix Java build to deal with new quantiles API
- PR #4720 Fix issue related to `dtype` param not being adhered in case of cuda arrays
- PR #4756 Fix regex error checking for valid quantifier condition
- PR #4777 Fix data pointer for column slices of zero length
- PR #4770 Fix readonly flag in `Column.__cuda_array_interface__`
- PR #4800 Fix dataframe slicing with strides
- PR #4796 Fix groupby apply for operations that fail on empty groups
- PR #4801 gitignore `_cuda/*.cpp` files
- PR #4805 Fix hash_object_dispatch definitions in dask_cudf
- PR #4813 Fix `GenericIndex` printing
- PR #4804 Fix issue related to `repartition` during hash based repartition
- PR #4814 Raise error if `to_csv` does not get `filename/path`
- PR #4821 Port apply_boolean_mask_benchmark to new cudf::column types
- PR #4826 Move memory resource from RmmTestEnvironment to the custom gtest main() scope
- PR #4839 Update Java bindings for timestamp cast formatting changes
- PR #4797 Fix string timestamp to datetime conversion with `ms` and `ns`
- PR #4854 Fix several cases of incorrect downcasting of operands in binops
- PR #4834 Fix bug in transform in handling single line UDFs
- PR #4857 Change JIT cache default directory to $HOME/.cudf
- PR #4807 Fix `categories` duplication in `dask_cudf`
- PR #4846 Fix CSV parsing with byte_range parameter and string columns
- PR #4883 Fix series get/set to match pandas
- PR #4861 Fix to_integers illegal-memory-access with all-empty strings column
- PR #4860 Fix issues in HostMemoryBufferTest, and testNormalizeNANsAndZeros
- PR #4879 Fix output for `cudf.concat` with `axis=1` for pandas parity
- PR #4838 Fix to support empty inputs to `replace` method
- PR #4859 JSON reader: fix data type inference for string columns
- PR #4868 Temporary fix to skip validation on Dask related runs
- PR #4872 Fix broken column wrapper constructors in merge benchmark
- PR #4875 Fix cudf::strings::from_integer logic converting min integer to string
- PR #4876 Mark Java cleaner objects as being cleaned even if exception is thrown
- PR #4780 Handle nulls in Statistical column operations
- PR #4886 Minimize regex-find calls in multi-replace cudf::strings::replace_re function
- PR #4887 Remove `developer.rst` and any links
- PR #4915 Fix to `reset_index` inplace in MultiIndex and other places
- PR #4899 Fix series inplace handling
- PR #4940 Fix boolean mask issue with large sized Dataframe
- PR #4889 Fix multi-index merging
- PR #4922 Fix `cudf::strings::split` logic for many columns
- PR #4949 Fix scatter, gather benchmark constructor call
- PR #4958 Fix strings::replace perf for long strings
- PR #4965 Raise Error when there are duplicate columns sent to `cudf.concat`
- PR #4983 Fix from_cudf in dask_cudf
- PR #4996 Parquet writer: fix potentially zero-sized string dictionary
- PR #5009 Fix pickling for string and categorical columns
- PR #4984 Fix groupby nth aggregation negative n and exclude nulls
- PR #5011 Fix DataFrame loc issue with boolean masking
- PR #4977 Fix compilation of cuDF benchmarks with build.sh
- PR #5018 Fix crash when JIT cache dir inaccessible. Fix inter version cache clash for custom cache path.
- PR #5005 Fix CSV reader error when only one of the row selection parameters is set
- PR #5022 Add timestamp header to transform
- PR #5021 Fix bug with unsigned right shift and scalar lhs
- PR #5020 Fix `conda install pre_commit` not found when setting up dev environment
- PR #5030 Fix Groupby sort=True
- PR #5029 Change temporary dir to working dir for cudf io tests
- PR #5040 Fix `make_scalar_iterator()` and `make_pair_iterator(scalar)` to not copy values to host
- PR #5041 Fix invalid java test for shift right unsigned
- PR #5043 Remove invalid examples page libcudf doxygen
- PR #5060 Fix unsigned char limits issue in JIT by updating Jitify
- PR #5070 Fix libcudf++ csv reader support for hex dtypes, doublequotes and empty columns
- PR #5057 Fix metadata_out parameter not reaching parquet `write_all`
- PR #5076 Fix JNI code for null_policy enum change
- PR #5031 grouped_time_range_rolling_window assumes ASC sort order
- PR #5032 grouped_time_range_rolling_window should permit invocation without specifying grouping_keys
- PR #5103 Fix `read_csv` issue with names and header
- PR #5090 Fix losing nulls while creating DataFrame from dictionary
- PR #5089 Return false for sign-only string in libcudf is_float and is_integer
- PR #5124 `DataFrame.rename` support for renaming indexes w/ default for `index`
- PR #5108 Fix float-to-string convert for -0.0
- PR #5111 Fix header not being included in legacy jit transform.
- PR #5115 Fix hex-to-integer logic when string has prefix '0x'
- PR #5118 Fix naming for java string length operators
- PR #5129 Fix missed reference in tests from 5118
- PR #5122 Fix `clang-format` `custrings` bug
- PR #5138 Install `contextvars` backport on Python 3.6
- PR #5145 Fix an issue with calling an aggregation operation on `SeriesGroupBy`
- PR #5148 Fix JNI build for GCC 8
- PR #5162 Fix issues related to empty `Dataframe` in `as_gpu_matrix` & `astype`
- PR #5167 Fix regex extract match to return empty string
- PR #5163 Fix parquet INT96 timestamps before the epoch
- PR #5165 Fix potentially missing last row in libcudf++ csv reader
- PR #5185 Fix flake8 configuration and issues from new flake8 version
- PR #5193 Fix OOB read in csv reader
- PR #5191 Fix the use of the device memory resource
- PR #5212 Fix memory leak in `dlpack.pyx:from_dlpack()`
- PR #5224 Add new headers from 5198 to libcudf/meta.yaml
- PR #5228 Fix datetime64 scalar dtype handling for unsupported time units
- PR #5256 ORC reader: fix loading individual timestamp columns
- PR #5285 Fix DEBUG compilation failure due to `fixed_point.hpp`
# cuDF 0.13.0 (31 Mar 2020)
## New Features
- PR #4360 Added Java bindings for bitwise shift operators
- PR #3577 Add initial dictionary support to column classes
- PR #3777 Add support for dictionary column in gather
- PR #3693 add string support, skipna to scan operation
- PR #3662 Define and implement `shift`.
- PR #3861 Added Series.sum feature for String
- PR #4069 Added cast of numeric columns from/to String
- PR #3681 Add cudf::experimental::boolean_mask_scatter
- PR #4040 Add support for n-way merge of sorted tables
- PR #4053 Multi-column quantiles.
- PR #4100 Add set_keys function for dictionary columns
- PR #3894 Add remove_keys functions for dictionary columns
- PR #4107 Add groupby nunique aggregation
- PR #4235 Port nvtx.pyx to use non-legacy libcudf APIs
- PR #4153 Support Dask serialization protocol on cuDF objects
- PR #4127 Add python API for n-way sorted merge (merge_sorted)
- PR #4164 Add Buffer "constructor-kwargs" header
- PR #4172 Add groupby nth aggregation
- PR #4159 Add COUNT aggregation that includes null values
- PR #4190 Add libcudf++ transpose Cython implementation
- PR #4063 Define and implement string capitalize and title API
- PR #4217 Add libcudf++ quantiles Cython implementation
- PR #4216 Add cudf.Scalar Python type
- PR #3782 Add `fixed_point` class to support DecimalType
- PR #4272 Add stable sorted order
- PR #4129 Add libcudf++ interleave_columns and tile Cython implementation
- PR #4262 Port unaryops.pyx to use libcudf++ APIs
- PR #4276 Port avro.pyx to libcudf++
- PR #4259 Ability to create Java host buffers from memory-mapped files
- PR #4240 Add groupby::groups()
- PR #4294 Add Series rank and Dataframe rank
- PR #4304 Add new NVTX infrastructure and add ranges to all top-level compute APIs.
- PR #4319 Add repartition_by_hash API to dask_cudf
- PR #4315 ShiftLeft, ShiftRight, ShiftRightUnsigned binops
- PR #4321 Expose Python Semi and Anti Joins
- PR #4291 Add Java callback support for RMM events
- PR #4298 Port orc.pyx to libcudf++
- PR #4344 Port concat.pyx to libcudf++
- PR #4329 Add support for dictionary columns in scatter
- PR #4352 Add factory function make_column_from_scalar
- PR #4381 Add Java support for copying buffers with asynchronous streams
- PR #4288 Add libcudf++ shift Cython implementation
- PR #4338 Add cudf::sequence() for generating an incrementing list of numeric values
- PR #4456 Add argmin/max and string min/max to sort groupby
- PR #4564 Added Java bindings for clamp operator.
- PR #4602 Add Cython bindings for functions in `datetime.hpp`
- PR #4670 Add java and JNI bindings for contains_re
- PR #4363 Grouped Rolling Window support
- PR #4798 Add UDF support to grouped rolling window
- PR #3917 Add dictionary add_keys function
- PR #3842 ORC writer: add support for column statistics
- PR #4088 Added asString() on ColumnVector in Java that takes a format string
- PR #4484 Port CSV writer to libcudf++
## Improvements
- PR #4641 Add replace example in dataframe.py and update 10min.ipynb
- PR #4140 Add cudf series examples and corr() method for dataframe in dataframe.py
- PR #4187 Exposed getNativeView method in Java bindings
- PR #3525 build.sh option to disable nvtx
- PR #3748 Optimize hash_partition using shared memory
- PR #3808 Optimize hash_partition using shared memory and cub block scan
- PR #3698 Add count_(un)set_bits functions taking multiple ranges and updated slice to compute null counts at once.
- PR #3909 Move java backend to libcudf++
- PR #3971 Adding `as_table` to convert Column to Table in python
- PR #3910 Adding sinh, cosh, tanh, asinh, acosh, atanh cube root and rint unary support.
- PR #3972 Add Java bindings for left_semi_join and left_anti_join
- PR #3975 Simplify and generalize data handling in `Buffer`
- PR #3985 Update RMM include files and remove extraneously included header files.
- PR #3601 Port UDF functionality for rolling windows to libcudf++
- PR #3911 Adding null boolean handling for copy_if_else
- PR #4003 Drop old `to_device` utility wrapper function
- PR #4002 Adding to_frame and fix for categorical column issue
- PR #4009 Build script update to enable cudf build without installing
- PR #3897 Port cuIO JSON reader to cudf::column types
- PR #4008 Eliminate extra copy in column constructor
- PR #4013 Add cython definition for io readers cudf/io/io_types.hpp
- PR #4028 Port json.pyx to use new libcudf APIs
- PR #4014 ORC/Parquet: add count parameter to stripe/rowgroup-based reader API
- PR #3880 Add aggregation infrastructure support for cudf::reduce
- PR #4059 Add aggregation infrastructure support for cudf::scan
- PR #4021 Change quantiles signature for clarity.
- PR #4057 Handle offsets in cython Column class
- PR #4045 Reorganize `libxx` directory
- PR #4029 Port stream_compaction.pyx to use libcudf++ APIs
- PR #4031 Docs build scripts and instructions update
- PR #4062 Improve how java classifiers are produced
- PR #4038 JNI and Java support for is_nan and is_not_nan
- PR #3786 Adding string support to rolling_windows
- PR #4067 Removed unused `CATEGORY` type ID.
- PR #3891 Port NVStrings (r)split_record to contiguous_(r)split_record
- PR #4070 Port NVText normalize_spaces to use libcudf strings column
- PR #4072 Allow round_robin_partition to produce a single partition
- PR #4064 Add cudaGetDeviceCount to JNI layer
- PR #4075 Port nvtext ngrams-tokenize to libcudf++
- PR #4087 Add support for writing large Parquet files in a chunked manner.
- PR #3716 Update cudf.to_parquet to use new GPU accelerated Parquet writer
- PR #4083 Use two partitions in test_groupby_multiindex_reset_index
- PR #4071 Add Java bindings for round robin partition
- PR #4079 Simply use `mask.size` to create the array view
- PR #4092 Keep mask on GPU for bit unpacking
- PR #4081 Copy from `Buffer`'s pointer directly to host
- PR #4105 Change threshold of using optimized hash partition code
- PR #4101 Redux serialize `Buffer` directly with `__cuda_array_interface__`
- PR #4098 Remove legacy calls from libcudf strings column code
- PR #4044 Port join.pyx to use libcudf++ APIs
- PR #4111 Use `Buffer`'s to serialize `StringColumn`
- PR #4567 Optimize `__reduce__` in `StringColumn`
- PR #4590 Register a few more types for Dask serialization
- PR #4113 Get `len` of `StringColumn`s without `nvstrings`
- PR #4147 Remove workaround for UNKNOWN_NULL_COUNT in contiguous_split.
- PR #4130 Renames in-place `cudf::experimental::fill` to `cudf::experimental::fill_in_place`
- PR #4136 Add `Index.names` property
- PR #4139 Port rolling.pyx to new libcudf APIs
- PR #4143 Renames in-place `cudf::experimental::copy_range` to `cudf::experimental::copy_range_in_place`
- PR #4144 Release GIL when calling libcudf++ functions
- PR #4082 Rework MultiColumns in cuDF
- PR #4149 Use "type-serialized" for pickled types like Dask
- PR #4174 Port hash groupby to libcudf++
- PR #4171 Split java host and device vectors to make a vector truly immutable
- PR #4167 Port `search` to libcudf++ (support multi-column searchsorted)
- PR #4163 Assert Dask CUDA serializers have `Buffer` frames
- PR #4165 List serializable classes once
- PR #4168 IO readers: do not create null mask for non-nullable columns
- PR #4177 Use `uint8` type for host array copy of `Buffer`
- PR #4183 Update Google Test Execution
- PR #4182 Rename cuDF serialize functions to be more generic
- PR #4176 Add option to parallelize setup.py's cythonize
- PR #4191 Porting sort.pyx to use new libcudf APIs
- PR #4196 reduce CHANGELOG.md merge conflicts
- PR #4197 Added notebook testing to gpuCI gpu build
- PR #4220 Port strings wrap functionality.
- PR #4204 Port nvtext create-ngrams function
- PR #4219 Port dlpack.pyx to use new libcudf APIs
- PR #4225 Remove stale notebooks
- PR #4233 Porting replace.pyx to use new libcudf APIs
- PR #4223 Fix a few of the Cython warnings
- PR #4224 Optimize concatenate for many columns
- PR #4234 Add BUILD_LEGACY_TESTS cmake option
- PR #4231 Support for custom cuIO data_sink classes.
- PR #4251 Add class to docs in `dask-cudf` `derived_from`
- PR #4261 libxx Cython reorganization
- PR #4274 Support negative position values in slice_strings
- PR #4282 Porting nvstrings conversion functions from new libcudf++ to Python/Cython
- PR #4290 Port Parquet to use new libcudf APIs
- PR #4299 Convert cudf::shift to column-based api
- PR #4301 Add support for writing large ORC files in a chunked manner
- PR #4306 Use libcudf++ `unary.pyx` cast instead of legacy cast
- PR #4295 Port reduce.pyx to libcudf++ API
- PR #4305 Move gpuarrow.pyx and related libarrow_cuda files into `_libxx`
- PR #4244 Port nvstrings Substring Gather/Scatter functions to cuDF Python/Cython
- PR #4280 Port nvstrings Numeric Handling functions to cuDF Python/Cython
- PR #4278 Port filling.pyx to libcudf++ API
- PR #4328 Add memory threshold callbacks for Java RMM event handler
- PR #4336 Move a bunch of internal nvstrings code to use native StringColumns
- PR #4166 Port `is_sorted.pyx` to use libcudf++ APIs
- PR #4351 Remove a bunch of internal usage of Numba; set rmm as cupy allocator
- PR #4333 nvstrings case/capitalization cython bindings
- PR #4345 Removed an undesirable backwards include from /include to /src in cuIO writers.hpp
- PR #4367 Port copying.pyx to use new libcudf
- PR #4362 Move pq_chunked_state struct into its own header to match how the ORC writer does it.
- PR #4339 Port libcudf strings `wrap` api to cython/python
- PR #4236 Update dask_cudf.io.to_parquet to use cudf to_parquet
- PR #4311 Port nvstrings String Manipulations functions to cuDF Python/Cython
- PR #4373 Port nvstrings Regular Expressions functions to cuDF Python/Cython
- PR #4308 Replace dask_cudf sort_values and improve set_index
- PR #4407 Enable `.str.slice` & `.str.get` and `.str.zfill` unit-tests
- PR #4412 Require Dask + Distributed 2.12.0+
- PR #4377 Support loading avro files that contain nested arrays
- PR #4436 Enable `.str.cat` and fix `.str.split` on python side
- PR #4405 Port nvstrings (Sub)string Comparisons functions to cuDF Python/Cython
- PR #4316 Add Java and JNI bindings for substring expression
- PR #4314 Add Java and JNI bindings for string contains
- PR #4461 Port nvstrings Miscellaneous functions to cuDF Python/Cython
- PR #4495 Port nvtext to cuDF Python/Cython
- PR #4503 Port binaryop.pyx to libcudf++ API
- PR #4499 Adding changes to handle `keep_index` and `RangeIndex`
- PR #4533 Import `tlz` for optional `cytoolz` support
- PR #4493 Skip legacy testing in CI
- PR #4346 Port groupby Cython/Python to use libcudf++ API
- PR #4524 Updating `__setitem__` for DataFrame to use scalar scatter
- PR #4611 Fix to use direct slicing in iloc for multiindex rather than using gather under `_get_row_major`
- PR #4534 Disable deprecation warnings as errors.
- PR #4542 Remove RMM init/finalize in cudf test fixture.
- PR #4506 Check for multi-dimensional data in column/Series creation
- PR #4549 Add option to disable deprecation warnings.
- PR #4516 Add negative value support for `.str.get`
- PR #4563 Remove copying to host for metadata generation in `generate_pandas_metadata`
- PR #4554 Removed raw RMM allocation from `column_device_view`
- PR #4619 Remove usage of `nvstrings` in `data_array_view`
- PR #4654 Upgrade version of `numba` required to `>=0.48.0`
- PR #4035 Port NVText tokenize function to libcudf++
- PR #4042 Port cudf/io/functions.hpp to Cython for use in IO bindings
- PR #4058 Port hash.pyx to use libcudf++ APIs
- PR #4133 Mask cleanup and fixes: use `int32` dtype, ensure 64 byte padding, handle offsets
## Bug Fixes
- PR #3888 Drop `ptr=None` from `DeviceBuffer` call
- PR #3976 Fix string serialization and memory_usage method to be consistent
- PR #3902 Fix conversion of large size GPU array to dataframe
- PR #3953 Fix overflow in column_buffer when computing the device buffer size
- PR #3959 Add missing hash-dispatch function for cudf.Series
- PR #3970 Fix for Series Pickle
- PR #3964 Restore legacy NVStrings and NVCategory dependencies in Java jar
- PR #3982 Fix java unary op enum and add missing ops
- PR #3999 Fix issue serializing empty string columns (java)
- PR #3979 Add `name` to Series serialize and deserialize
- PR #4005 Fix null mask allocation bug in gather_bitmask
- PR #4000 Fix dask_cudf sort_values performance for single partitions
- PR #4007 Fix for copy_bitmask issue with uninitialized device_buffer
- PR #4037 Fix JNI quantile compile issue
- PR #4054 Fixed JNI to deal with reduction API changes
- PR #4052 Fix for round-robin when num_partitions divides nrows.
- PR #4061 Add NDEBUG guard on `constexpr_assert`.
- PR #4049 Fix `cudf::split` issue returning one less than expected column vectors
- PR #4065 Parquet writer: fix for out-of-range dictionary indices
- PR #4066 Fixed mismatch with dtype enums
- PR #4078 Fix joins when the column_in_common input parameter is empty
- PR #4080 Fix multi-index dask test with sort issue
- PR #4084 Update Java for removal of CATEGORY type
- PR #4086 ORC reader: fix potentially incorrect timestamp decoding in the last rowgroup
- PR #4089 Fix dask groupby multiindex test case issues in join
- PR #4097 Fix strings concatenate logic with column offsets
- PR #4076 All null string entries should have null data buffer
- PR #4109 Use rmm::device_vector instead of thrust::device_vector
- PR #4113 Use `.nvstrings` in `StringColumn.sum(...)`
- PR #4116 Fix a bug in contiguous_split() where tables with mixed column types could corrupt string output
- PR #4125 Fix type enum to account for added Dictionary type in `types.hpp`
- PR #4132 Fix `hash_partition` null mask allocation
- PR #4137 Update Java for mutating fill and rolling window changes
- PR #4184 Add missing except+ to Cython bindings
- PR #4141 Fix NVStrings test_convert failure in 10.2 build
- PR #4156 Make fill/copy_range no-op on empty columns
- PR #4158 Fix merge issue with empty table return if one of the two tables are empty
- PR #4162 Properly handle no index metadata generation for to_parquet
- PR #4175 Fix `__sizeof__` calculation in `StringColumn`
- PR #4155 Update groupby group_offsets size and fix unnecessary device dispatch.
- PR #4186 Fix from_timestamps 12-hour specifiers support
- PR #4198 Fix constructing `RangeIndex` from `range`
- PR #4192 Parquet writer: fix OOB read when computing string hash
- PR #4201 Fix java window tests
- PR #4199 Fix potential race condition in memcpy_block
- PR #4221 Fix series dict alignment to not drop index name
- PR #4218 Fix `get_aggregation` definition with `except *`
- PR #4215 Fix performance regression in strings::detail::concatenate
- PR #4214 Alter ValueError exception for GPU accelerated Parquet writer to properly report `categorical` columns are not supported.
- PR #4232 Fix handling empty tuples of children in string columns
- PR #4222 Fix no-return compile error in binop-null-test
- PR #4242 Fix for rolling tests CI failure
- PR #4245 Fix race condition in parquet reader
- PR #4253 Fix dictionary decode and set_keys with column offset
- PR #4258 Fix dask-cudf losing index name in `reset_index`
- PR #4268 Fix java build for hash aggregate
- PR #4275 Fix bug in searching nullable values in non-nullable search space in `upper_bound`
- PR #4273 Fix losing `StringIndex` name in dask `_meta_nonempty`
- PR #4279 Fix converting `np.float64` to Scalar
- PR #4285 Add init files for cython pkgs and fix `setup.py`
- PR #4287 Parquet reader: fix empty string potentially read as null
- PR #4310 Fix empty values case in groupby
- PR #4297 Fix specification of package_data in setup.py
- PR #4302 Fix `_is_local_filesystem` check
- PR #4303 Parquet reader: fix empty columns missing from table
- PR #4317 Fix fill() when using string_scalar with an empty string
- PR #4324 Fix slice_strings for out-of-range start position value
- PR #4115 Serialize an empty column table with non-zero rows
- PR #4327 Preemptive dispatch fix for changes in dask#5973
- PR #4379 Correct regex reclass count variable to number of pairs instead of the number of literals
- PR #4364 Fix libcudf zfill strings to ignore '+/-' chars
- PR #4358 Fix strings::concat where narep is an empty string
- PR #4369 Fix race condition in gpuinflate
- PR #4390 Disable ScatterValid and ScatterNull legacy tests
- PR #4399 Make scalar destructor virtual.
- PR #4398 Fixes the failure in groupby in MIN/MAX on strings when some groups are empty
- PR #4406 Fix sorted merge issue with null values and ascending=False
- PR #4445 Fix string issue for parquet reader and support `keep_index` for `scatter_to_tables`
- PR #4423 Tighten up Dask serialization checks
- PR #4537 Use `elif` in Dask deserialize check
- PR #4682 Include frame lengths in Dask serialized header
- PR #4438 Fix repl-template error for replace_with_backrefs
- PR #4434 Fix join_strings logic with all-null strings and non-null narep
- PR #4465 Fix use_pandas_index having no effect in libcudf++ parquet reader
- PR #4464 Update Cmake to always link in libnvToolsExt
- PR #4467 Fix dropna issue for a DataFrame having np.nan
- PR #4480 Fix string_scalar.value to return an empty string_view for empty string-scalar
- PR #4474 Fix to not materialize RangeIndex in copy_categories
- PR #4496 Skip tests which require 2+ GPUs
- PR #4494 Update Java memory event handler for new RMM resource API
- PR #4505 Fix 0 length buffers during serialization
- PR #4482 Fix `.str.rsplit`, `.str.split`, `.str.find`, `.str.rfind`, `.str.index`, `.str.rindex` and enable related tests
- PR #4513 Backport scalar virtual destructor fix
- PR #4519 Remove `n` validation for `nlargest` & `nsmallest` and add negative support for `n`
- PR #4596 Fix `_popn` performance issue
- PR #4526 Fix index slicing issue for index in case of an empty dataframe
- PR #4538 Fix cudf::strings::slice_strings(step=-1) for empty strings
- PR #4557 Disable compile-errors on deprecation warnings, for JNI
- PR #4669 Fix `dask_cudf` categorical nonempty meta handling
- PR #4576 Fix typo in `serialize.py`
- PR #4571 Load JNI native dependencies for Scalar class
- PR #4598 Fix to handle `pd.DataFrame` in `DataFrame.__init__`
- PR #4594 Fix exec dangling pointer issue in legacy groupby
- PR #4591 Fix issue when reading consecutive rowgroups
- PR #4600 Fix missing include in benchmark_fixture.hpp
- PR #4588 Fix ordering issue in `MultiIndex`
- PR #4632 Fix handling of empty inputs to concatenate
- PR #4630 Remove dangling reference to RMM exec policy in drop duplicates tests.
- PR #4625 Fix hash-based repartition bug in dask_cudf
- PR #4662 Fix to handle `keep_index` in `partition_by_hash`
- PR #4683 Fix Slicing issue with categorical column in DataFrame
- PR #4676 Fix bug in `_shuffle_group` for repartition
- PR #4681 Fix `test_repr` tests that were generating a `RangeIndex` for column names
- PR #4729 Fix `fsspec` versioning to prevent dask test failures
- PR #4145 Support empty index case in DataFrame._from_table
- PR #4108 Fix dtype bugs in dask_cudf metadata (metadata_nonempty overhaul)
- PR #4138 Really fix strings concatenate logic with column offsets
- PR #4119 Fix binary ops slowdown using jitify -remove-unused-globals
# cuDF 0.12.0 (04 Feb 2020)
## New Features
- PR #3759 Updated 10 Minutes with clarification on how `dask_cudf` uses `cudf` API
- PR #3224 Define and implement new join APIs.
- PR #3284 Add gpu-accelerated parquet writer
- PR #3254 Python redesign for libcudf++
- PR #3336 Add `from_dlpack` and `to_dlpack`
- PR #3555 Add column names support to libcudf++ io readers and writers
- PR #3527 Add string functionality for merge API
- PR #3610 Add memory_usage to DataFrame and Series APIs
- PR #3557 Add contiguous_split() function.
- PR #3619 Support CuPy 7
- PR #3604 Add nvtext ngrams-tokenize function
- PR #3403 Define and implement new stack + tile APIs
- PR #3627 Adding cudf::sort and cudf::sort_by_key
- PR #3597 Implement new sort based groupby
- PR #3776 Add column equivalence comparator (using epsilon for float equality)
- PR #3667 Define and implement round-robin partition API.
- PR #3690 Add bools_to_mask
- PR #3761 Introduce a Frame class and make Index, DataFrame and Series subclasses
- PR #3538 Define and implement left semi join and left anti join
- PR #3683 Added support for multiple delimiters in `nvtext.token_count()`
- PR #3792 Adding is_nan and is_notnan
- PR #3594 Adding clamp support to libcudf++
## Improvements
- PR #3124 Add support for grand-children in cudf column classes
- PR #3292 Port NVStrings regex contains function
- PR #3409 Port NVStrings regex replace function
- PR #3417 Port NVStrings regex findall function
- PR #3351 Add warning when filepath resolves to multiple files in cudf readers
- PR #3370 Port NVStrings strip functions
- PR #3453 Port NVStrings IPv4 convert functions to cudf strings column
- PR #3441 Port NVStrings url encode/decode to cudf strings column
- PR #3364 Port NVStrings split functions
- PR #3463 Port NVStrings partition/rpartition to cudf strings column
- PR #3502 ORC reader: add option to read DECIMALs as INT64
- PR #3461 Add a new overload to allocate_like() that takes explicit type and size params.
- PR #3590 Specialize hash functions for floating point
- PR #3569 Use `np.asarray` in `StringColumn.deserialize`
- PR #3553 Support Python NoneType in numeric binops
- PR #3511 Support DataFrame / Series mixed arithmetic
- PR #3567 Include `strides` in `__cuda_array_interface__`
- PR #3608 Update OPS codeowner group name
- PR #3431 Port NVStrings translate to cudf strings column
- PR #3507 Define and implement new binary operation APIs
- PR #3620 Add stream parameter to unary ops detail API
- PR #3593 Adding begin/end for mutable_column_device_view
- PR #3587 Merge CHECK_STREAM & CUDA_CHECK_LAST to CHECK_CUDA
- PR #3733 Rework `hash_partition` API
- PR #3655 Use move with make_pair to avoid copy construction
- PR #3402 Define and implement new quantiles APIs
- PR #3612 Add ability to customize the JIT kernel cache path
- PR #3647 Remove PatchedNumbaDeviceArray with CuPy 6.6.0
- PR #3641 Remove duplicate definitions of CUDA_DEVICE_CALLABLE
- PR #3640 Enable memory_usage in dask_cudf (also adds pd.Index from_pandas)
- PR #3654 Update Jitify submodule ref to include gcc-8 fix
- PR #3639 Define and implement `nans_to_nulls`
- PR #3561 Rework contains implementation in search
- PR #3616 Add aggregation infrastructure for argmax/argmin.
- PR #3673 Parquet reader: improve rounding of timestamp conversion to seconds
- PR #3699 Stringify libcudacxx headers for binary op JIT
- PR #3697 Improve column insert performance for wide frames
- PR #3653 Make `gather_bitmask_kernel` more reusable.
- PR #3710 Remove multiple CMake configuration steps from root build script
- PR #3657 Define and implement compiled binops for string column comparisons
- PR #3520 Change read_parquet defaults and add warnings
- PR #3780 Java APIs for selecting a GPU
- PR #3796 Improve round-robin for the case when the number of partitions is greater than the number of rows.
- PR #3805 Avoid CuPy 7.1.0 for now
- PR #3758 detail::scatter variant with map iterator support
- PR #3882 Fail loudly when creating a StringColumn from nvstrings with > MAX_VAL(int32) bytes
- PR #3823 Add header file for detail search functions
- PR #2438 Build GBench Benchmarks in CI
- PR #3713 Adding aggregation support to rolling_window
- PR #3875 Add abstract sink for IO writers, used by ORC and Parquet writers for now
- PR #3916 Refactor gather bindings
## Bug Fixes
- PR #3618 Update 10 minutes to cudf and cupy to hide warning that were being shown in the docs
- PR #3550 Update Java package to 0.12
- PR #3549 Fix index name issue with iloc with RangeIndex
- PR #3562 Fix 4GB limit for gzipped-compressed csv files
- PR #2981 enable build.sh to build all targets without installation
- PR #3563 Use `__cuda_array_interface__` for serialization
- PR #3564 Fix cuda memory access error in gather_bitmask_kernel
- PR #3548 Replaced CUDA_RT_CALL with CUDA_TRY
- PR #3486 Pandas > 0.25 compatibility
- PR #3622 Fix new warnings and errors when building with gcc-8
- PR #3588 Remove avro reader column order reversal
- PR #3629 Fix hash map test failure
- PR #3637 Fix sorted set_index operations in dask_cudf
- PR #3663 Fix libcudf++ ORC reader microseconds and milliseconds conversion
- PR #3668 Fixing CHECK_CUDA debug build issue
- PR #3684 Fix ends_with logic for matching string case
- PR #3691 Fix create_offsets to handle offset correctly
- PR #3687 Fixed bug while passing input GPU memory pointer in `nvtext.scatter_count()`
- PR #3701 Fix hash_partition hashing all columns instead of columns_to_hash
- PR #3694 Allow for null columns parameter in `csv_writer`
- PR #3706 Removed extra type-dispatcher call from merge
- PR #3704 Changed the default delimiter to `whitespace` for nvtext methods.
- PR #3741 Construct DataFrame from dict-of-Series with alignment
- PR #3724 Update rmm version to match release
- PR #3743 Fix for `None` data in `__array_interface__`
- PR #3731 Fix performance of zero sized dataframe slice
- PR #3709 Fix inner_join incorrect result issue
- PR #3734 Update numba to 0.46 in conda files
- PR #3738 Update libxx cython types.hpp path
- PR #3672 Fix to_host issue with column_view having offset
- PR #3730 CSV reader: Set invalid float values to NaN/null
- PR #3670 Floor when casting between timestamps of different precisions
- PR #3728 Fix apply_boolean_mask issue with non-null string column
- PR #3769 Don't look for a `name` attribute in column
- PR #3783 Bind cuDF operators to Dask Dataframe
- PR #3775 Fix segfault when reading compressed CSV files larger than 4GB
- PR #3799 Align indices of Series inputs when adding as columns to DataFrame
- PR #3803 Keep name when unpickling Index objects
- PR #3804 Fix cuda crash in AVRO reader
- PR #3766 Remove references to cudf::type_id::CATEGORY from IO code
- PR #3817 Don't always deepcopy an index
- PR #3821 Fix OOB read in gpuinflate prefetcher
- PR #3829 Parquet writer: fix empty dataframe causing cuda launch errors
- PR #3835 Fix memory leak in Cython when dealing with nulls in string columns
- PR #3866 Remove unnecessary if check in NVStrings.create_offsets
- PR #3858 Fixes the broken debug build after #3728
- PR #3850 Fix merge typecast scope issue and resulting memory leak
- PR #3855 Fix MultiColumn recreation with reset_index
- PR #3869 Fixed size calculation in NVStrings::byte_count()
- PR #3868 Fix apply_grouped moving average example
- PR #3900 Properly link `NVStrings` and `NVCategory` into tests
- PR #3871 Fix `split_out` error
- PR #3886 Fix string column materialization from column view
- PR #3893 Parquet reader: fix segfault reading empty parquet file
- PR #3931 Dask-cudf groupby `.agg` multicolumn handling fix
- PR #4017 Fix memory leaks in `GDF_STRING` cython handling and `nans_to_nulls` cython
# cuDF 0.11.0 (11 Dec 2019)
## New Features
- PR #2905 Added `Series.median()` and null support for `Series.quantile()`
- PR #2930 JSON Reader: Support ARROW_RANDOM_FILE input
- PR #2956 Add `cudf::stack` and `cudf::tile`
- PR #2980 Added nvtext is_vowel/is_consonant functions
- PR #2987 Add `inplace` arg to `DataFrame.reset_index` and `Series`
- PR #3011 Added libcudf++ transition guide
- PR #3129 Add strings column factory from `std::vector`s
- PR #3054 Add parquet reader support for decimal data types
- PR #3022 adds DataFrame.astype for cuDF dataframes
- PR #2962 Add isnull(), notnull() and related functions
- PR #3025 Move search files to legacy
- PR #3068 Add `scalar` class
- PR #3094 Adding `any` and `all` support from libcudf
- PR #3130 Define and implement new `column_wrapper`
- PR #3143 Define and implement new copying APIs `slice` and `split`
- PR #3161 Move merge files to legacy
- PR #3079 Added support to write ORC files given a local path
- PR #3192 Add dtype param to cast `DataFrame` on init
- PR #3213 Port cuIO to libcudf++
- PR #3222 Add nvtext character tokenizer
- PR #3223 Java expose underlying buffers
- PR #3300 Add `DataFrame.insert`
- PR #3263 Define and implement new `valid_if`
- PR #3278 Add `to_host` utility to copy `column_view` to host
- PR #3087 Add new cudf::experimental bool8 wrapper
- PR #3219 Construct column from column_view
- PR #3250 Define and implement new merge APIs
- PR #3144 Define and implement new hashing APIs `hash` and `hash_partition`
- PR #3229 Define and implement new search APIs
- PR #3308 java add API for memory usage callbacks
- PR #2691 Row-wise reduction and scan operations via CuPy
- PR #3291 Add normalize_nans_and_zeros
- PR #3187 Define and implement new replace APIs
- PR #3356 Add vertical concatenation for table/columns
- PR #3344 java split API
- PR #2791 Add `groupby.std()`
- PR #3368 Enable dropna argument in dask_cudf groupby
- PR #3298 add null replacement iterator for column_device_view
- PR #3297 Define and implement new groupby API.
- PR #3396 Update device_atomics with new bool8 and timestamp specializations
- PR #3411 Java host memory management API
- PR #3393 Implement df.cov and enable covariance/correlation in dask_cudf
- PR #3401 Add dask_cudf ORC writer (to_orc)
- PR #3331 Add copy_if_else
- PR #3427 Define and Implement new multi-search API
- PR #3442 Add Bool-index + Multi column + DataFrame support for set-item
- PR #3172 Define and implement new fill/repeat/copy_range APIs
- PR #3490 Add pair iterators for columns
- PR #3497 Add DataFrame.drop(..., inplace=False) argument
- PR #3469 Add string functionality for replace API
- PR #3273 Define and implement new reduction APIs
## Improvements
- PR #2904 Move gpu decompressors to cudf::io namespace
- PR #2977 Moved old C++ test utilities to legacy directory.
- PR #2965 Fix slow orc reader perf with large uncompressed blocks
- PR #2995 Move JIT type utilities to legacy directory
- PR #2927 Add ``Table`` and ``TableView`` extension classes that wrap legacy cudf::table
- PR #3005 Renames `cudf::exp` namespace to `cudf::experimental`
- PR #3008 Make safe versions of `is_null` and `is_valid` in `column_device_view`
- PR #3026 Move fill and repeat files to legacy
- PR #3027 Move copying.hpp and related source to legacy folder
- PR #3014 Snappy decompression optimizations
- PR #3032 Use `asarray` to coerce indices to a NumPy array
- PR #2996 IO Readers: Replace `cuio::device_buffer` with `rmm::device_buffer`
- PR #3051 Specialized hash function for strings column
- PR #3065 Select and Concat for cudf::experimental::table
- PR #3080 Move `valid_if.cuh` to `legacy/`
- PR #3052 Moved replace.hpp functionality to legacy
- PR #3091 Move join files to legacy
- PR #3092 Implicitly init RMM if Java allocates before init
- PR #3029 Update gdf_ numeric types with stdint and move to cudf namespace
- PR #2955 Add cmake option to only build for present GPU architecture
- PR #3070 Move functions.h and related source to legacy
- PR #2951 Allow set_index to handle a list of column names
- PR #3093 Move groupby files to legacy
- PR #2988 Removing GIS functionality (now part of cuSpatial library)
- PR #3067 Java method to return size of device memory buffer
- PR #3083 Improved some binary operation tests to include null testing.
- PR #3084 Update to arrow-cpp and pyarrow 0.15.0
- PR #3071 Move cuIO to legacy
- PR #3126 Round 2 of snappy decompression optimizations
- PR #3046 Define and implement new copying APIs `empty_like` and `allocate_like`
- PR #3128 Support MultiIndex in DataFrame.join
- PR #2971 Added initial gather and scatter methods for strings_column_view
- PR #3133 Port NVStrings to cudf column: count_characters and count_bytes
- PR #2991 Added strings column functions concatenate and join_strings
- PR #3028 Define and implement new `gather` APIs.
- PR #3135 Add nvtx utilities to cudf::nvtx namespace
- PR #3021 Java host side concat of serialized buffers
- PR #3138 Move unary files to legacy
- PR #3170 Port NVStrings substring functions to cudf strings column
- PR #3159 Port NVStrings is-chars-types function to cudf strings column
- PR #3154 Make `table_view_base.column()` const and add `mutable_table_view.column()`
- PR #3175 Set cmake cuda version variables
- PR #3171 Move deprecated error macros to legacy
- PR #3191 Port NVStrings integer convert ops to cudf column
- PR #3189 Port NVStrings find ops to cudf column
- PR #3352 Port NVStrings convert float functions to cudf strings column
- PR #3193 Add cuPy as a formal dependency
- PR #3195 Support for zero columned `table_view`
- PR #3165 Java device memory size for string category
- PR #3205 Move transform files to legacy
- PR #3202 Rename and move error.hpp to public headers
- PR #2878 Use upstream merge code in dask_cudf
- PR #3217 Port NVStrings upper and lower case conversion functions
- PR #3350 Port NVStrings booleans convert functions
- PR #3231 Add `column::release()` to give up ownership of contents.
- PR #3157 Use enum class rather than enum for mask_allocation_policy
- PR #3232 Port NVStrings datetime conversion to cudf strings column
- PR #3136 Define and implement new transpose API
- PR #3237 Define and implement new transform APIs
- PR #3245 Move binaryop files to legacy
- PR #3241 Move stream_compaction files to legacy
- PR #3166 Move reductions to legacy
- PR #3261 Small cleanup: remove `== true`
- PR #3271 Update rmm API based on `rmm.reinitialize(...)` change
- PR #3266 Remove optional checks for CuPy
- PR #3268 Adding null ordering per column feature when sorting
- PR #3239 Adding floating point specialization to comparators for NaNs
- PR #3270 Move predicates files to legacy
- PR #3281 Add to_host specialization for strings in column test utilities
- PR #3282 Add `num_bitmask_words`
- PR #3252 Add new factory methods to include passing an existing null mask
- PR #3288 Make `bit.cuh` utilities usable from host code.
- PR #3287 Move rolling windows files to legacy
- PR #3182 Define and implement new unary APIs `is_null` and `is_not_null`
- PR #3314 Drop `cython` from run requirements
- PR #3301 Add tests for empty column wrapper.
- PR #3294 Update to arrow-cpp and pyarrow 0.15.1
- PR #3310 Add `row_hasher` and `element_hasher` utilities
- PR #3272 Support non-default streams when creating/destroying hash maps
- PR #3286 Clean up the starter code on README
- PR #3332 Port NVStrings replace to cudf strings column
- PR #3354 Define and implement new `scatter` APIs
- PR #3322 Port NVStrings pad operations to cudf strings column
- PR #3345 Add cache member for number of characters in string_view class
- PR #3299 Define and implement new `is_sorted` APIs
- PR #3328 Partition by stripes in dask_cudf ORC reader
- PR #3243 Use upstream join code in dask_cudf
- PR #3371 Add `select` method to `table_view`
- PR #3309 Add java and JNI bindings for search bounds
- PR #3305 Define and implement new rolling window APIs
- PR #3380 Concatenate columns of strings
- PR #3382 Add fill function for strings column
- PR #3391 Move device_atomics_tests.cu files to legacy
- PR #3303 Define and implement new stream compaction APIs `copy_if`, `drop_nulls`, `apply_boolean_mask`, `drop_duplicate` and `unique_count`
- PR #3387 Strings column gather function
- PR #3440 Strings column scatter function
- PR #3389 Move quantiles.hpp + group_quantiles.hpp files to legacy
- PR #3397 Port unary cast to libcudf++
- PR #3398 Move reshape.hpp files to legacy
- PR #3395 Port NVStrings regex extract to cudf strings column
- PR #3423 Port NVStrings htoi to cudf strings column
- PR #3425 Strings column copy_if_else implementation
- PR #3422 Move utilities to legacy
- PR #3201 Define and implement new datetime_ops APIs
- PR #3421 Port NVStrings find_multiple to cudf strings column
- PR #3448 Port scatter_to_tables to libcudf++
- PR #3458 Update strings sections in the transition guide
- PR #3462 Add `make_empty_column` and update `empty_like`.
- PR #3465 Port `aggregation` traits and utilities.
- PR #3214 Define and implement new unary operations APIs
- PR #3475 Add `bitmask_to_host` column utility
- PR #3487 Add is_boolean trait and random timestamp generator for testing
- PR #3492 Small cleanup (remove std::abs) and comment
- PR #3407 Allow multiple row-groups per task in dask_cudf read_parquet
- PR #3512 Remove unused CUDA conda labels
- PR #3500 cudf::fill()/cudf::repeat() support for strings columns.
- PR #3438 Update scalar and scalar_device_view to better support strings
- PR #3414 Add copy_range function for strings column
- PR #3685 Add string support to contiguous_split.
- PR #3471 Add scalar/column, column/scalar and scalar/scalar overloads to copy_if_else.
- PR #3451 Add support for implicit typecasting of join columns
## Bug Fixes
- PR #2895 Fixed dask_cudf group_split behavior to handle upstream rearrange_by_divisions
- PR #3048 Support for zero columned tables
- PR #3030 Fix snappy decoding regression in PR #3014
- PR #3041 Fixed exp to experimental namespace name change issue
- PR #3056 Add additional cmake hint for finding local build of RMM files
- PR #3060 Move copying.hpp includes to legacy
- PR #3139 Fixed java RMM auto initialization
- PR #3141 Java fix for relocated IO headers
- PR #3149 Rename column_wrapper.cuh to column_wrapper.hpp
- PR #3168 Fix mutable_column_device_view head const_cast
- PR #3199 Update JNI includes for legacy moves
- PR #3204 ORC writer: Fix ByteRLE encoding of NULLs
- PR #2994 Fix split_out-support bug with hash_object_dispatch
- PR #3212 Fix string to date casting when format is not specified
- PR #3218 Fixes `row_lexicographic_comparator` issue with handling two tables
- PR #3228 Default initialize RMM when Java native dependencies are loaded
- PR #3012 Replacing instances of `to_gpu_array` with `mem`
- PR #3236 Fix Numba 0.46+/CuPy 6.3 interface compatibility
- PR #3276 Update JNI includes for legacy moves
- PR #3256 Fix orc writer crash with multiple string columns
- PR #3211 Fix breaking change caused by rapidsai/rmm#167
- PR #3265 Fix dangling pointer in `is_sorted`
- PR #3267 ORC writer: fix incorrect ByteRLE encoding of long literal runs
- PR #3277 Fix invalid reference to deleted temporary in `is_sorted`.
- PR #3274 ORC writer: fix integer RLEv2 mode2 unsigned base value encoding
- PR #3279 Fix shutdown hang issues with pinned memory pool init executor
- PR #3280 Invalid children check in mutable_column_device_view
- PR #3289 Fix Java memory usage API for empty columns
- PR #3293 Fix loading of csv files zipped on MacOS (disabled zip min version check)
- PR #3295 Fix storing invalid RMM exec policies.
- PR #3307 Add pd.RangeIndex to from_pandas to fix dask_cudf meta_nonempty bug
- PR #3313 Fix public headers including non-public headers
- PR #3318 Revert arrow to 0.15.0 temporarily to unblock downstream projects CI
- PR #3317 Fix index-argument bug in dask_cudf parquet reader
- PR #3323 Fix `insert` non-assert test case
- PR #3341 Fix `Series` constructor converting NoneType to "None"
- PR #3326 Fix and test for detail::gather map iterator type inference
- PR #3334 Remove zero-size exception check from make_strings_column factories
- PR #3333 Fix compilation issues with `constexpr` functions not marked `__device__`
- PR #3340 Make all benchmarks use cudf base fixture to initialize RMM pool
- PR #3337 Fix Java to pad validity buffers to 64-byte boundary
- PR #3362 Fix `find_and_replace` upcasting series for python scalars and lists
- PR #3357 Disabling `column_view` iterators for non fixed-width types
- PR #3383 Fix : properly compute null counts for rolling_window.
- PR #3386 Removing external includes from `column_view.hpp`
- PR #3369 Add write_partition to dask_cudf to fix to_parquet bug
- PR #3388 Support getitem with bools when DataFrame has a MultiIndex
- PR #3408 Fix String and Column (De-)Serialization
- PR #3372 Fix dask-distributed scatter_by_map bug
- PR #3419 Fix a bug in parse_into_parts (incomplete input causing walking past the end of string).
- PR #3413 Fix dask_cudf read_csv file-list bug
- PR #3416 Fix memory leak in ColumnVector when pulling strings off the GPU
- PR #3424 Fix benchmark build by adding libcudacxx to benchmark's CMakeLists.txt
- PR #3435 Fix diff and shift for empty series
- PR #3439 Fix index-name bug in StringColumn concat
- PR #3445 Fix ORC Writer default stripe size
- PR #3459 Fix printing of invalid entries
- PR #3466 Fix gather null mask allocation for invalid index
- PR #3468 Fix memory leak issue in `drop_duplicates`
- PR #3474 Fix small doc error in capitalize Docs
- PR #3491 Fix more doc errors in NVStrings
- PR #3478 Fix as_index deep copy via Index.rename inplace arg
- PR #3476 Fix ORC reader timezone conversion
- PR #3188 Repr slices up large DataFrames
- PR #3519 Fix strings column concatenate handling zero-sized columns
- PR #3530 Fix copy_if_else test case failure
- PR #3523 Fix lgenfe issue with debug build
- PR #3532 Fix potential use-after-free in cudf parquet reader
- PR #3540 Fix unary_op null_mask bug and add missing test cases
- PR #3559 Use HighLevelGraph api in DataFrame constructor (Fix upstream compatibility)
- PR #3572 Fix CI Issue with hypothesis tests that are flaky
# cuDF 0.10.0 (16 Oct 2019)
## New Features
- PR #2423 Added `groupby.quantile()`
- PR #2522 Add Java bindings for NVStrings backed upper and lower case mutators
- PR #2605 Added Sort based groupby in libcudf
- PR #2607 Add Java bindings for parsing JSON
- PR #2629 Add dropna= parameter to groupby
- PR #2585 ORC & Parquet Readers: Remove millisecond timestamp restriction
- PR #2507 Add GPU-accelerated ORC Writer
- PR #2559 Add Series.tolist()
- PR #2653 Add Java bindings for rolling window operations
- PR #2480 Merge `custreamz` codebase into `cudf` repo
- PR #2674 Add __contains__ for Index/Series/Column
- PR #2635 Add support to read from remote and cloud sources like s3, gcs, hdfs
- PR #2722 Add Java bindings for NVTX ranges
- PR #2702 Add make_bool to dataset generation functions
- PR #2394 Move `rapidsai/custrings` into `cudf`
- PR #2734 Final sync of custrings source into cudf
- PR #2724 Add libcudf support for __contains__
- PR #2777 Add python bindings for porter stemmer measure functionality
- PR #2781 Add issorted to is_monotonic
- PR #2685 Add cudf::scatter_to_tables and cython binding
- PR #2743 Add Java bindings for NVStrings timestamp2long as part of String ColumnVector casting
- PR #2785 Add nvstrings Python docs
- PR #2786 Add benchmarks option to root build.sh
- PR #2802 Add `cudf::repeat()` and `cudf.Series.repeat()`
- PR #2773 Add Fisher's unbiased kurtosis and skew for Series/DataFrame
- PR #2748 Parquet Reader: Add option to specify loading of PANDAS index
- PR #2807 Add scatter_by_map to DataFrame python API
- PR #2836 Add nvstrings.code_points method
- PR #2844 Add Series/DataFrame notnull
- PR #2858 Add GTest type list utilities
- PR #2870 Add support for grouping by Series of arbitrary length
- PR #2719 Series covariance and Pearson correlation
- PR #2207 Beginning of libcudf overhaul: introduce new column and table types
- PR #2869 Add `cudf.CategoricalDtype`
- PR #2838 CSV Reader: Support ARROW_RANDOM_FILE input
- PR #2655 CuPy-based Series and Dataframe .values property
- PR #2803 Added `edit_distance_matrix()` function to calculate pairwise edit distance for each string on a given nvstrings object.
- PR #2811 Start of cudf strings column work based on 2207
- PR #2872 Add Java pinned memory pool allocator
- PR #2969 Add findAndReplaceAll to ColumnVector
- PR #2814 Add Datetimeindex.weekday
- PR #2999 Add timestamp conversion support for string categories
- PR #2918 Add cudf::column timestamp wrapper types
## Improvements
- PR #2578 Update legacy_groupby to use libcudf group_by_without_aggregation
- PR #2581 Removed `managed` allocator from hash map classes.
- PR #2571 Remove unnecessary managed memory from gdf_column_concat
- PR #2648 Cython/Python reorg
- PR #2588 Update Series.append documentation
- PR #2632 Replace dask-cudf set_index code with upstream
- PR #2682 Add cudf.set_allocator() function for easier allocator init
- PR #2642 Improve null printing and testing
- PR #2747 Add missing Cython headers / cudftestutil lib to conda package for cuspatial build
- PR #2706 Compute CSV format in device code to speedup performance
- PR #2673 Add support for np.longlong type
- PR #2703 move dask serialization dispatch into cudf
- PR #2728 Add YYMMDD to version tag for nightly conda packages
- PR #2729 Handle file-handle input in to_csv
- PR #2741 CSV Reader: Move kernel functions into its own file
- PR #2766 Improve nvstrings python cmake flexibility
- PR #2756 Add out_time_unit option to csv reader, support timestamp resolutions
- PR #2771 Stopgap alias for to_gpu_matrix()
- PR #2783 Support mapping input columns to function arguments in apply kernels
- PR #2645 libcudf unique_count for Series.nunique
- PR #2817 Dask-cudf: `read_parquet` support for remote filesystems
- PR #2823 Improve Java data movement debugging
- PR #2806 CSV Reader: Clean-up row offset operations
- PR #2640 Add dask wait/persist example to 10 minute guide
- PR #2828 Optimizations of kernel launch configuration for `DataFrame.apply_rows` and `DataFrame.apply_chunks`
- PR #2831 Add `column` argument to `DataFrame.drop`
- PR #2775 Various optimizations to improve __getitem__ and __setitem__ performance
- PR #2810 cudf::allocate_like can optionally always allocate a mask.
- PR #2833 Parquet reader: align page data allocation sizes to 4-bytes to satisfy cuda-memcheck
- PR #2832 Using the new Python bindings for UCX
- PR #2856 Update group_split_cudf to use scatter_by_map
- PR #2890 Optionally keep serialized table data on the host.
- PR #2778 Doc: Updated and fixed some docstrings that were formatted incorrectly.
- PR #2830 Use YYMMDD tag in custreamz nightly build
- PR #2875 Java: Remove synchronized from register methods in MemoryCleaner
- PR #2887 Minor snappy decompression optimization
- PR #2899 Use new RMM API based on Cython
- PR #2788 Guide to Python UDFs
- PR #2919 Change java API to use operators in groupby namespace
- PR #2909 CSV Reader: Avoid row offsets host vector default init
- PR #2834 DataFrame supports setting columns via attribute syntax `df.x = col`
- PR #3147 DataFrame can be initialized from rows via list of tuples
- PR #3539 Restrict CuPy to 6
## Bug Fixes
- PR #2584 ORC Reader: fix parsing of `DECIMAL` index positions
- PR #2619 Fix groupby serialization/deserialization
- PR #2614 Update Java version to match
- PR #2601 Fixes nlargest(1) issue in Series and Dataframe
- PR #2610 Fix a bug in index serialization (properly pass DeviceNDArray)
- PR #2621 Fixes the floordiv issue of not promoting float type when rhs is 0
- PR #2611 Types Test: fix static casting from negative int to string
- PR #2618 IO Readers: Fix datasource memory map failure for multiple reads
- PR #2628 groupby_without_aggregation non-nullable input table produces non-nullable output
- PR #2615 Fix string category partitioning in Java API
- PR #2641 Fix string category and timeunit concat in the Java API
- PR #2649 Fix groupby issue resulting from column_empty bug
- PR #2658 Fix astype() for null categorical columns
- PR #2660 Fix column string category and timeunit concat in the Java API
- PR #2664 ORC reader: fix `skip_rows` larger than first stripe
- PR #2654 Allow Java gdfOrderBy to work with string categories
- PR #2669 AVRO reader: fix non-deterministic output
- PR #2668 Update Java bindings to specify timestamp units for ORC and Parquet readers
- PR #2679 AVRO reader: fix cuda errors when decoding compressed streams
- PR #2692 Add concatenation for data-frame with different headers (empty and non-empty)
- PR #2651 Remove nvidia driver installation from ci/cpu/build.sh
- PR #2697 Ensure csv reader sets datetime column time units
- PR #2698 Return RangeIndex from contiguous slice of RangeIndex
- PR #2672 Fix null and integer handling in round
- PR #2704 Parquet Reader: Fix crash when loading string column with nulls
- PR #2725 Fix Jitify issue with running on Turing using CUDA version < 10
- PR #2731 Fix building of benchmarks
- PR #2738 Fix java to find new NVStrings locations
- PR #2736 Pin Jitify branch to v0.10 version
- PR #2742 IO Readers: Fix possible silent failures when creating `NvStrings` instance
- PR #2753 Fix java quantile API calls
- PR #2762 Fix validity processing for time in java
- PR #2796 Fix handling string slicing and other nvstrings delegated methods with dask
- PR #2769 Fix link to API docs in README.md
- PR #2772 Handle multiindex pandas Series
- PR #2749 Fix apply_rows/apply_chunks pessimistic null mask to use in_cols null masks only
- PR #2752 CSV Reader: Fix exception when there's no rows to process
- PR #2716 Added Exception for `StringMethods` in string methods
- PR #2787 Fix Broadcasting `None` to `cudf-series`
- PR #2794 Fix async race in NVCategory::get_value and get_value_bounds
- PR #2795 Fix java build/cast error
- PR #2496 Fix improper merge of two dataframes when names differ
- PR #2824 Fix issue with incorrect result when Numeric Series replace is called several times
- PR #2751 Replace value with null
- PR #2765 Fix Java inequality comparisons for string category
- PR #2818 Fix java join API to use new C++ join API
- PR #2841 Fix nvstrings.slice and slice_from for range (0,0)
- PR #2837 Fix join benchmark
- PR #2809 Add hash_df and group_split dispatch functions for dask
- PR #2843 Parquet reader: fix skip_rows when not aligned with page or row_group boundaries
- PR #2851 Deleted existing dask-cudf/record.txt
- PR #2854 Fix column creation from ephemeral objects exposing __cuda_array_interface__
- PR #2860 Fix boolean indexing when the result is a single row
- PR #2859 Fix tail method issue for string columns
- PR #2852 Fixed `cumsum()` and `cumprod()` on boolean series.
- PR #2865 DaskIO: Fix `read_csv` and `read_orc` when input is list of files
- PR #2750 Fixed casting values to cudf::bool8 so non-zero values always cast to true
- PR #2873 Fixed dask_cudf read_partition bug by generating ParquetDatasetPiece
- PR #2850 Fixes dask_cudf.read_parquet on partitioned datasets
- PR #2896 Properly handle `axis` string keywords in `concat`
- PR #2926 Update rounding algorithm to avoid using fmod
- PR #2968 Fix Java dependency loading when using NVTX
- PR #2963 Fix ORC writer uncompressed block indexing
- PR #2928 CSV Reader: Fix using `byte_range` for large datasets
- PR #2983 Fix sm_70+ race condition in gpu_unsnap
- PR #2964 ORC Writer: Fix segfault when writing mixed numeric and string columns
- PR #3007 Java: Remove unit test that frees RMM invalid pointer
- PR #3009 Fix orc reader RLEv2 patch position regression from PR #2507
- PR #3002 Fix CUDA invalid configuration errors reported after loading an ORC file without data
- PR #3035 Update update-version.sh for new docs locations
- PR #3038 Fix uninitialized stream parameter in device_table deleter
- PR #3064 Fixes groupby performance issue
- PR #3061 Add rmmInitialize to nvstrings gtests
- PR #3058 Fix UDF doc markdown formatting
- PR #3059 Add nvstrings python build instructions to contributing.md
# cuDF 0.9.0 (21 Aug 2019)
## New Features
- PR #1993 Add CUDA-accelerated series aggregations: mean, var, std
- PR #2111 IO Readers: Support memory buffer, file-like object, and URL inputs
- PR #2012 Add `reindex()` to DataFrame and Series
- PR #2097 Add GPU-accelerated AVRO reader
- PR #2098 Support binary ops on DFs and Series with mismatched indices
- PR #2160 Merge `dask-cudf` codebase into `cudf` repo
- PR #2149 CSV Reader: Add `hex` dtype for explicit hexadecimal parsing
- PR #2156 Add `upper_bound()` and `lower_bound()` for libcudf tables and `searchsorted()` for cuDF Series
- PR #2158 CSV Reader: Support single, non-list/dict argument for `dtype`
- PR #2177 CSV Reader: Add `parse_dates` parameter for explicit date inference
- PR #1744 cudf::apply_boolean_mask and cudf::drop_nulls support for cudf::table inputs (multi-column)
- PR #2196 Add `DataFrame.dropna()`
- PR #2197 CSV Writer: add `chunksize` parameter for `to_csv`
- PR #2215 `type_dispatcher` benchmark
- PR #2179 Add Java quantiles
- PR #2157 Add __array_function__ to DataFrame and Series
- PR #2212 Java support for ORC reader
- PR #2224 Add DataFrame isna, isnull, notna functions
- PR #2236 Add Series.drop_duplicates
- PR #2105 Add hash-based join benchmark
- PR #2316 Add unique, nunique, and value_counts for datetime columns
- PR #2337 Add Java support for slicing a ColumnVector
- PR #2049 Add cudf::merge (sorted merge)
- PR #2368 Full cudf+dask Parquet Support
- PR #2380 New cudf::is_sorted checks whether cudf::table is sorted
- PR #2356 Java column vector standard deviation support
- PR #2221 MultiIndex full indexing - Support iloc and wildcards for loc
- PR #2429 Java support for getting length of strings in a ColumnVector
- PR #2415 Add `value_counts` for series of any type
- PR #2446 Add __array_function__ for index
- PR #2437 ORC reader: Add 'use_np_dtypes' option
- PR #2382 Add CategoricalAccessor add, remove, rename, and ordering methods
- PR #2464 Native implement `__cuda_array_interface__` for Series/Index/Column objects
- PR #2425 Rolling window now accepts array-based user-defined functions
- PR #2442 Add __setitem__
- PR #2449 Java support for getting byte count of strings in a ColumnVector
- PR #2492 Add groupby.size() method
- PR #2358 Add cudf::nans_to_nulls: convert floating point column into bitmask
- PR #2489 Add drop argument to set_index
- PR #2491 Add Java bindings for ORC reader 'use_np_dtypes' option
- PR #2213 Support s/ms/us/ns DatetimeColumn time unit resolutions
- PR #2536 Add _constructor properties to Series and DataFrame
## Improvements
- PR #2103 Move old `column` and `bitmask` files into `legacy/` directory
- PR #2109 Added name to Python column classes
- PR #1947 Cleanup serialization code
- PR #2125 More aggregate in java API
- PR #2127 Add in java Scalar tests
- PR #2088 Refactor of Python groupby code
- PR #2130 Java serialization and deserialization of tables.
- PR #2131 Chunk rows logic added to csv_writer
- PR #2129 Add functions in the Java API to support nullable column filtering
- PR #2165 Made changes to the get_dummies API so it is available in MethodCache
- PR #2171 Add CodeCov integration, fix doc version, make --skip-tests work when invoking with source
- PR #2184 Handle remote ORC files for dask-cudf
- PR #2186 Add `getitem` and `getattr` style access to Rolling objects
- PR #2168 Use cudf.Column for CategoricalColumn's categories instead of a tuple
- PR #2193 DOC: cudf::type_dispatcher documentation for specializing dispatched functors
- PR #2199 Better java support for appending strings
- PR #2176 Added column dtype support for datetime, int8, int16 to csv_writer
- PR #2209 Matching `get_dummies` & `select_dtypes` behavior to pandas
- PR #2217 Updated Java bindings to use the new groupby API
- PR #2214 DOC: Update doc instructions to build/install `cudf` and `dask-cudf`
- PR #2220 Update Java bindings for reduction rename
- PR #2232 Move CodeCov upload from build script to Jenkins
- PR #2225 Refactor to use libcudf for gathering columns in dataframes
- PR #2293 Improve join performance (faster compute_join_output_size)
- PR #2300 Create separate dask codeowners for dask-cudf codebase
- PR #2304 gdf_group_by_without_aggregations returns gdf_column
- PR #2309 Java readers: remove redundant copy of result pointers
- PR #2307 Add `black` and `isort` to style checker script
- PR #2345 Restore removal of old groupby implementation
- PR #2342 Improve `astype()` to operate all ways
- PR #2329 using libcudf cudf::copy for column deep copy
- PR #2344 DOC: docs on code formatting for contributors
- PR #2376 Add inoperative axis= and win_type= arguments to Rolling()
- PR #2378 Remove dask for (de-)serialization of cudf objects
- PR #2353 Bump Arrow and Dask versions
- PR #2377 Replace `standard_python_slice` with just `slice.indices()`
- PR #2373 cudf.DataFrame enhancements & Series.values support
- PR #2392 Remove dlpack submodule; make cuDF's Cython API externally accessible
- PR #2430 Updated Java bindings to use the new unary API
- PR #2406 Moved all existing `table` related files to a `legacy/` directory
- PR #2350 Performance related changes to get_dummies
- PR #2420 Remove `cudautils.astype` and replace with `typecast.apply_cast`
- PR #2456 Small improvement to typecast utility
- PR #2458 Fix handling of thirdparty packages in `isort` config
- PR #2459 IO Readers: Consolidate all readers to use `datasource` class
- PR #2475 Exposed type_dispatcher.hpp, nvcategory_util.hpp and wrapper_types.hpp in the include folder
- PR #2484 Enabled building libcudf as a static library
- PR #2453 Streamline CUDA_REL environment variable
- PR #2483 Bundle Boost filesystem dependency in the Java jar
- PR #2486 Java API hash functions
- PR #2481 Adds the ignore_null_keys option to the java api
- PR #2490 Java api: support multiple aggregates for the same column
- PR #2510 Java api: uses table based apply_boolean_mask
- PR #2432 Use pandas formatting for console, html, and latex output
- PR #2573 Bump numba version to 0.45.1
- PR #2606 Fix references to notebooks-contrib
## Bug Fixes
- PR #2086 Fixed quantile api behavior mismatch in series & dataframe
- PR #2128 Add offset param to host buffer readers in java API.
- PR #2145 Work around binops validity checks for java
- PR #2146 Work around unary_math validity checks for java
- PR #2151 Fixes bug in cudf::copy_range where null_count was invalid
- PR #2139 Match pandas describe behavior & fix NaN values issue
- PR #2161 Implicitly convert unsigned to signed integer types in binops
- PR #2154 CSV Reader: Fix bools misdetected as strings dtype
- PR #2178 Fix bug in rolling bindings where a view of an ephemeral column was being taken
- PR #2180 Fix issue with isort reordering `importorskip` below imports depending on them
- PR #2187 Fix to honor dtype when numpy arrays are passed to columnops.as_column
- PR #2190 Fix issue in astype conversion of string column to 'str'
- PR #2208 Fix issue with calling `head()` on one row dataframe
- PR #2229 Propagate exceptions from Cython cdef functions
- PR #2234 Fix issue with local build script not properly building
- PR #2223 Fix CUDA invalid configuration errors reported after loading small compressed ORC files
- PR #2162 Setting is_unique and is_monotonic-related attributes
- PR #2244 Fix ORC RLEv2 delta mode decoding with nonzero residual delta width
- PR #2297 Work around `var/std` unsupported only at debug build
- PR #2302 Fixed java serialization corner case
- PR #2355 Handle float16 in binary operations
- PR #2311 Fix copy behaviour for GenericIndex
- PR #2349 Fix issues with String filter in java API
- PR #2323 Fix groupby on categoricals
- PR #2328 Ensure order is preserved in CategoricalAccessor._set_categories
- PR #2202 Fix issue with unary ops mishandling empty input
- PR #2326 Fix for bug in DLPack when reading multiple columns
- PR #2324 Fix cudf Docker build
- PR #2325 Fix ORC RLEv2 patched base mode decoding with nonzero patch width
- PR #2235 Fix get_dummies to be compatible with dask
- PR #2332 Zero initialize gdf_dtype_extra_info
- PR #2355 Handle float16 in binary operations
- PR #2360 Fix missing dtype handling in cudf.Series & columnops.as_column
- PR #2364 Fix quantile api and other trivial issues around it
- PR #2361 Fixed issue with `codes` of CategoricalIndex
- PR #2357 Fixed inconsistent type of index created with from_pandas vs direct construction
- PR #2389 Fixed Rolling __getattr__ and __getitem__ for offset based windows
- PR #2402 Fixed bug in valid mask computation in cudf::copy_if (apply_boolean_mask)
- PR #2401 Fix to a scalar datetime(of type Days) issue
- PR #2386 Correctly allocate output valids in groupby
- PR #2411 Fixed failures on binary op on single element string column
- PR #2422 Fix Pandas logical binary operation incompatibilites
- PR #2447 Fix CodeCov posting build statuses temporarily
- PR #2450 Fix erroneous null handling in `cudf.DataFrame`'s `apply_rows`
- PR #2470 Fix issues with empty strings and string categories (Java)
- PR #2471 Fix String Column Validity.
- PR #2481 Fix java validity buffer serialization
- PR #2485 Updated bytes calculation to use size_t to avoid overflow in column concat
- PR #2461 Fix groupby multiple aggregations same column
- PR #2514 Fix cudf::drop_nulls threshold handling in Cython
- PR #2516 Fix utilities include paths and meta.yaml header paths
- PR #2517 Fix device memory leak in to_dlpack tensor deleter
- PR #2431 Fix local build generated file ownerships
- PR #2511 Added import of orc, refactored exception handlers to not squash fatal exceptions
- PR #2527 Fix index and column input handling in dask_cudf read_parquet
- PR #2466 Fix `dataframe.query` returning null rows erroneously
- PR #2548 Orc reader: fix non-deterministic data decoding at chunk boundaries
- PR #2557 Fix cudautils import in string.py
- PR #2521 Fix casting datetimes from/to the same resolution
- PR #2545 Fix MultiIndexes with datetime levels
- PR #2560 Remove duplicate `dlpack` definition in conda recipe
- PR #2567 Fix ColumnVector.fromScalar issues while dealing with null scalars
- PR #2565 Orc reader: fix incorrect data decoding of int64 data types
- PR #2577 Fix search benchmark compilation error by adding necessary header
- PR #2604 Fix a bug in copying.pyx:_normalize_types that upcasted int32 to int64
# cuDF 0.8.0 (27 June 2019)
## New Features
- PR #1524 Add GPU-accelerated JSON Lines parser with limited feature set
- PR #1569 Add support for Json objects to the JSON Lines reader
- PR #1622 Add Series.loc
- PR #1654 Add cudf::apply_boolean_mask: faster replacement for gdf_apply_stencil
- PR #1487 Cython gather/scatter
- PR #1310 Implemented the slice/split functionality.
- PR #1630 Add Python layer to the GPU-accelerated JSON reader
- PR #1745 Add rounding of numeric columns via Numba
- PR #1772 JSON reader: add support for BytesIO and StringIO input
- PR #1527 Support GDF_BOOL8 in readers and writers
- PR #1819 Logical operators (AND, OR, NOT) for libcudf and cuDF
- PR #1813 ORC Reader: Add support for stripe selection
- PR #1828 JSON Reader: add support for bool8 columns
- PR #1833 Add column iterator with/without nulls
- PR #1665 Add the point-in-polygon GIS function
- PR #1863 Series and Dataframe methods for all and any
- PR #1908 cudf::copy_range and cudf::fill for copying/assigning an index or range to a constant
- PR #1921 Add additional formats for typecasting to/from strings
- PR #1807 Add Series.dropna()
- PR #1987 Allow user defined functions in the form of ptx code to be passed to binops
- PR #1948 Add operator functions like `Series.add()` to DataFrame and Series
- PR #1954 Add skip test argument to GPU build script
- PR #2018 Add bindings for new groupby C++ API
- PR #1984 Add rolling window operations Series.rolling() and DataFrame.rolling()
- PR #1542 Python method and bindings for to_csv
- PR #1995 Add Java API
- PR #1998 Add google benchmark to cudf
- PR #1845 Add cudf::drop_duplicates, DataFrame.drop_duplicates
- PR #1652 Added `Series.where()` feature
- PR #2074 Java Aggregates, logical ops, and better RMM support
- PR #2140 Add a `cudf::transform` function
- PR #2068 Concatenation of different typed columns
## Improvements
- PR #1538 Replacing LesserRTTI with inequality_comparator
- PR #1703 C++: Added non-aggregating `insert` to `concurrent_unordered_map` with specializations to store pairs with a single atomicCAS when possible.
- PR #1422 C++: Added a RAII wrapper for CUDA streams
- PR #1701 Added `unique` method for stringColumns
- PR #1713 Add documentation for Dask-XGBoost
- PR #1666 CSV Reader: Improve performance for files with large number of columns
- PR #1725 Enable the ability to use a single column groupby as its own index
- PR #1759 Add an example showing simultaneous rolling averages to `apply_grouped` documentation
- PR #1746 C++: Remove unused code: `windowed_ops.cu`, `sorting.cu`, `hash_ops.cu`
- PR #1748 C++: Add `bool` nullability flag to `device_table` row operators
- PR #1764 Improve Numerical column: `mean_var` and `mean`
- PR #1767 Speed up Python unit tests
- PR #1770 Added build.sh script, updated CI scripts and documentation
- PR #1739 ORC Reader: Add more pytest coverage
- PR #1696 Added null support in `Series.replace()`.
- PR #1390 Added some basic utility functions for `gdf_column`s
- PR #1791 Added general column comparison code for testing
- PR #1795 Add printing of git submodule info to `print_env.sh`
- PR #1796 Removing old sort based group by code and gdf_filter
- PR #1811 Added functions for copying/allocating `cudf::table`s
- PR #1838 Improve columnops.column_empty so that it returns typed columns instead of a generic Column
- PR #1890 Add utils.get_dummies- a pandas-like wrapper around one_hot-encoding
- PR #1823 CSV Reader: default the column type to string for empty dataframes
- PR #1827 Create bindings for scalar-vector binops, and update one_hot_encoding to use them
- PR #1817 Operators now support different sized dataframes as long as they don't share different sized columns
- PR #1855 Transition replace_nulls to new C++ API and update corresponding Cython/Python code
- PR #1858 Add `std::initializer_list` constructor to `column_wrapper`
- PR #1846 C++ type-erased gdf_equal_columns test util; fix gdf_equal_columns logic error
- PR #1391 Tidy up bit-resolution-operation and bitmask class code
- PR #1882 Add iloc functionality to MultiIndex dataframes
- PR #1884 Rolling windows: general enhancements and better coverage for unit tests
- PR #1886 support GDF_STRING_CATEGORY columns in apply_boolean_mask, drop_nulls and other libcudf functions
- PR #1896 Improve performance of groupby with levels specified in dask-cudf
- PR #1915 Improve iloc performance for non-contiguous row selection
- PR #1859 Convert read_json into a C++ API
- PR #1919 Rename libcudf namespace gdf to namespace cudf
- PR #1850 Support left_on and right_on for DataFrame merge operator
- PR #1930 Specialize constructor for `cudf::bool8` to cast argument to `bool`
- PR #1938 Add default constructor for `column_wrapper`
- PR #1952 consolidate libcudf public API headers in include/cudf
- PR #1949 Improved selection with boolmask using libcudf `apply_boolean_mask`
- PR #1956 Add support for nulls in `query()`
- PR #1973 Update `std::tuple` to `std::pair` in top-most libcudf APIs and C++ transition guide
- PR #1981 Convert read_csv into a C++ API
- PR #1868 ORC Reader: Support row index for speed up on small/medium datasets
- PR #1964 Added support for list-like types in Series.str.cat
- PR #2005 Use HTML5 details tag in bug report issue template
- PR #2003 Removed few redundant unit-tests from test_string.py::test_string_cat
- PR #1944 Groupby design improvements
- PR #2017 Convert `read_orc()` into a C++ API
- PR #2011 Convert `read_parquet()` into a C++ API
- PR #1756 Add documentation "10 Minutes to cuDF and dask_cuDF"
- PR #2034 Adding support for string columns concatenation using "add" binary operator
- PR #2042 Replace old "10 Minutes" guide with new guide for docs build process
- PR #2036 Make library of common test utils to speed up tests compilation
- PR #2022 Facilitating get_dummies to be a high level api too
- PR #2050 Namespace IO readers and add back free-form `read_xxx` functions
- PR #2104 Add a functional ``sort=`` keyword argument to groupby
- PR #2108 Add `find_and_replace` for StringColumn for replacing single values
- PR #1803 cuDF/CuPy interoperability documentation
## Bug Fixes
- PR #1465 Fix for test_orc.py and test_sparse_df.py test failures
- PR #1583 Fix underlying issue in `as_index()` that was causing `Series.quantile()` to fail
- PR #1680 Add errors= keyword to drop() to fix cudf-dask bug
- PR #1651 Fix `query` function on empty dataframe
- PR #1616 Fix CategoricalColumn to access categories by index instead of iteration
- PR #1660 Fix bug in `loc` when indexing with a column name (a string)
- PR #1683 ORC reader: fix timestamp conversion to UTC
- PR #1613 Improve CategoricalColumn.fillna(-1) performance
- PR #1642 Fix failure of CSV_TEST gdf_csv_test.SkiprowsNrows on multiuser systems
- PR #1709 Fix handling of `datetime64[ms]` in `dataframe.select_dtypes`
- PR #1704 CSV Reader: Add support for the plus sign in number fields
- PR #1687 CSV reader: return an empty dataframe for zero size input
- PR #1757 Concatenating columns with null columns
- PR #1755 Add col_level keyword argument to melt
- PR #1758 Fix df.set_index() when setting index from an empty column
- PR #1749 ORC reader: fix long strings of NULL values resulting in incorrect data
- PR #1742 Parquet Reader: Fix index column name to match PANDAS compat
- PR #1782 Update libcudf doc version
- PR #1783 Update conda dependencies
- PR #1786 Maintain the original series name in series.unique output
- PR #1760 CSV Reader: fix segfault when dtype list only includes columns from usecols list
- PR #1831 build.sh: Assuming python is in PATH instead of using PYTHON env var
- PR #1839 Raise an error instead of segfaulting when transposing a DataFrame with StringColumns
- PR #1840 Retain index correctly during merge left_on right_on
- PR #1825 cuDF: Multiaggregation Groupby Failures
- PR #1789 CSV Reader: Fix missing support for specifying `int8` and `int16` dtypes
- PR #1857 Cython Bindings: Handle `bool` columns while calling `column_view_from_NDArrays`
- PR #1849 Allow DataFrame support methods to pass arguments to the methods
- PR #1847 Fixed #1375 by moving the nvstring check into the wrapper function
- PR #1864 Fixing cudf reduction for POWER platform
- PR #1869 Parquet reader: fix Dask timestamps not matching with Pandas (convert to milliseconds)
- PR #1876 add dtype=bool for `any`, `all` to treat integer column correctly
- PR #1875 CSV reader: take NaN values into account in dtype detection
- PR #1873 Add column dtype checking for the all/any methods
- PR #1902 Bug with string iteration in _apply_basic_agg
- PR #1887 Fix for initialization issue in pq_read_arg,orc_read_arg
- PR #1867 JSON reader: add support for null/empty fields, including the 'null' literal
- PR #1891 Fix bug #1750 in string column comparison
- PR #1909 Support of `to_pandas()` of boolean series with null values
- PR #1923 Use prefix removal when two aggs are called on a SeriesGroupBy
- PR #1914 Zero initialize gdf_column local variables
- PR #1959 Add support for comparing boolean Series to scalar
- PR #1966 Ignore index fix in series append
- PR #1967 Compute index __sizeof__ only once for DataFrame __sizeof__
- PR #1977 Support CUDA installation in default system directories
- PR #1982 Fixes incorrect index name after join operation
- PR #1985 Implement `GDF_PYMOD`, a special modulo that follows python's sign rules
- PR #1991 Parquet reader: fix decoding of NULLs
- PR #1990 Fixes a rendering bug in the `apply_grouped` documentation
- PR #1978 Fix for values being filled in an empty dataframe
- PR #2001 Correctly create MultiColumn from Pandas MultiColumn
- PR #2006 Handle empty dataframe groupby construction for dask
- PR #1965 Parquet Reader: Fix duplicate index column when it's already in `use_cols`
- PR #2033 Add pip to conda environment files to fix warning
- PR #2028 CSV Reader: Fix reading of uncompressed files without a recognized file extension
- PR #2073 Fix an issue when gathering columns with NVCategory and nulls
- PR #2053 cudf::apply_boolean_mask return empty column for empty boolean mask
- PR #2066 exclude `IteratorTest.mean_var_output` test from debug build
- PR #2069 Fix JNI code to use read_csv and read_parquet APIs
- PR #2071 Fix bug with unfound transitive dependencies for GTests in Ubuntu 18.04
- PR #2089 Configure Sphinx to render params correctly
- PR #2091 Fix another bug with unfound transitive dependencies for `cudftestutils` in Ubuntu 18.04
- PR #2115 Just apply `--disable-new-dtags` instead of trying to define all the transitive dependencies
- PR #2106 Fix errors in JitCache tests caused by sharing of device memory between processes
- PR #2120 Fix errors in JitCache tests caused by running multiple threads on the same data
- PR #2102 Fix memory leak in groupby
- PR #2113 fixed typo in to_csv code example
# cuDF 0.7.2 (16 May 2019)
## New Features
- PR #1735 Added overload for atomicAdd on int64. Streamlined implementation of custom atomic overloads.
- PR #1741 Add MultiIndex concatenation
## Bug Fixes
- PR #1718 Fix issue with SeriesGroupBy MultiIndex in dask-cudf
- PR #1734 Python: fix performance regression for groupby count() aggregations
- PR #1768 Cython: fix handling read only schema buffers in gpuarrow reader
# cuDF 0.7.1 (11 May 2019)
## New Features
- PR #1702 Lazy load MultiIndex to return groupby performance to near optimal.
## Bug Fixes
- PR #1708 Fix handling of `datetime64[ms]` in `dataframe.select_dtypes`
# cuDF 0.7.0 (10 May 2019)
## New Features
- PR #982 Implement gdf_group_by_without_aggregations and gdf_unique_indices functions
- PR #1142 Add `GDF_BOOL` column type
- PR #1194 Implement overloads for CUDA atomic operations
- PR #1292 Implemented Bitwise binary ops AND, OR, XOR (&, |, ^)
- PR #1235 Add GPU-accelerated Parquet Reader
- PR #1335 Added local_dict arg in `DataFrame.query()`.
- PR #1282 Add Series and DataFrame.describe()
- PR #1356 Rolling windows
- PR #1381 Add DataFrame._get_numeric_data
- PR #1388 Add CODEOWNERS file to auto-request reviews based on where changes are made
- PR #1396 Add DataFrame.drop method
- PR #1413 Add DataFrame.melt method
- PR #1412 Add DataFrame.pop()
- PR #1419 Initial CSV writer function
- PR #1441 Add Series level cumulative ops (cumsum, cummin, cummax, cumprod)
- PR #1420 Add script to build and test on a local gpuCI image
- PR #1440 Add DatetimeColumn.min(), DatetimeColumn.max()
- PR #1455 Add Series.Shift via Numba kernel
- PR #1461 Add Python coverage test to gpu build
- PR #1445 Parquet Reader: Add selective reading of rows and row group
- PR #1532 Parquet Reader: Add support for INT96 timestamps
- PR #1516 Add Series and DataFrame.ndim
- PR #1556 Add libcudf C++ transition guide
- PR #1466 Add GPU-accelerated ORC Reader
- PR #1565 Add build script for nightly doc builds
- PR #1508 Add Series isna, isnull, and notna
- PR #1456 Add Series.diff() via Numba kernel
- PR #1588 Add Index `astype` typecasting
- PR #1301 MultiIndex support
- PR #1599 Level keyword supported in groupby
- PR #929 Add support operations to dataframe
- PR #1609 Groupby accept list of Series
- PR #1658 Support `group_keys=True` keyword in groupby method
## Improvements
- PR #1531 Refactor closures as private functions in gpuarrow
- PR #1404 Parquet reader page data decoding speedup
- PR #1076 Use `type_dispatcher` in join, quantiles, filter, segmented sort, radix sort and hash_groupby
- PR #1202 Simplify README.md
- PR #1149 CSV Reader: Change convertStrToValue() functions to `__device__` only
- PR #1238 Improve performance of the CUDA trie used in the CSV reader
- PR #1245 Use file cache for JIT kernels
- PR #1278 Update CONTRIBUTING for new conda environment yml naming conventions
- PR #1163 Refactored UnaryOps. Reduced API to two functions: `gdf_unary_math` and `gdf_cast`. Added `abs`, `-`, and `~` ops. Changed bindings to Cython
- PR #1284 Update docs version
- PR #1287 add exclude argument to cudf.select_dtype function
- PR #1286 Refactor some of the CSV Reader kernels into generic utility functions
- PR #1291 fillna in `Series.to_gpu_array()` and `Series.to_array()` can accept the scalar too now.
- PR #1005 generic `reduction` and `scan` support
- PR #1349 Replace modernGPU sort join with thrust.
- PR #1363 Add a dataframe.mean(...) that raises NotImplementedError to satisfy `dask.dataframe.utils.is_dataframe_like`
- PR #1319 CSV Reader: Use column wrapper for gdf_column output alloc/dealloc
- PR #1376 Change series quantile default to linear
- PR #1399 Replace CFFI bindings for NVTX functions with Cython bindings
- PR #1389 Refactored `set_null_count()`
- PR #1386 Added macros `GDF_TRY()`, `CUDF_TRY()` and `ASSERT_CUDF_SUCCEEDED()`
- PR #1435 Rework CMake and conda recipes to depend on installed libraries
- PR #1391 Tidy up bit-resolution-operation and bitmask class code
- PR #1439 Add cmake variable to enable compiling CUDA code with -lineinfo
- PR #1462 Add ability to read parquet files from arrow::io::RandomAccessFile
- PR #1453 Convert CSV Reader CFFI to Cython
- PR #1479 Convert Parquet Reader CFFI to Cython
- PR #1397 Add a utility function for producing an overflow-safe kernel launch grid configuration
- PR #1382 Add GPU parsing of nested brackets to cuIO parsing utilities
- PR #1481 Add cudf::table constructor to allocate a set of `gdf_column`s
- PR #1484 Convert GroupBy CFFI to Cython
- PR #1463 Allow and default melt keyword argument var_name to be None
- PR #1486 Parquet Reader: Use device_buffer rather than device_ptr
- PR #1525 Add cudatoolkit conda dependency
- PR #1520 Renamed `src/dataframe` to `src/table` and moved `table.hpp`. Made `types.hpp` to be type declarations only.
- PR #1492 Convert transpose CFFI to Cython
- PR #1495 Convert binary and unary ops CFFI to Cython
- PR #1503 Convert sorting and hashing ops CFFI to Cython
- PR #1522 Use latest release version in update-version CI script
- PR #1533 Remove stale join CFFI, fix memory leaks in join Cython
- PR #1521 Added `row_bitmask` to compute bitmask for rows of a table. Merged `valids_ops.cu` and `bitmask_ops.cu`
- PR #1553 Overload `hash_row` to avoid using initial hash values. Updated `gdf_hash` to select between overloads
- PR #1585 Updated `cudf::table` to maintain own copy of wrapped `gdf_column*`s
- PR #1559 Add `except +` to all Cython function definitions to catch C++ exceptions properly
- PR #1617 `has_nulls` and `column_dtypes` for `cudf::table`
- PR #1590 Remove CFFI from the build / install process entirely
- PR #1536 Convert gpuarrow CFFI to Cython
- PR #1655 Add `Column._pointer` as a way to access underlying `gdf_column*` of a `Column`
- PR #1655 Update readme conda install instructions for cudf version 0.6 and 0.7
## Bug Fixes
- PR #1233 Fix dtypes issue while adding the column to `str` dataframe.
- PR #1254 CSV Reader: fix data type detection for floating-point numbers in scientific notation
- PR #1289 Fix looping over each value instead of each category in concatenation
- PR #1293 Fix Inaccurate error message in join.pyx
- PR #1308 Add atomicCAS overload for `int8_t`, `int16_t`
- PR #1317 Fix catch polymorphic exception by reference in ipc.cu
- PR #1325 Fix dtype of null bitmasks to int8
- PR #1326 Update build documentation to use -DCMAKE_CXX11_ABI=ON
- PR #1334 Add "na_position" argument to CategoricalColumn sort_by_values
- PR #1321 Fix out of bounds warning when checking Bzip2 header
- PR #1359 Add atomicAnd/Or/Xor for integers
- PR #1354 Fix `fillna()` behaviour when replacing values with different dtypes
- PR #1347 Fixed core dump issue while passing dict_dtypes without column names in `cudf.read_csv()`
- PR #1379 Fixed build failure caused due to error: 'col_dtype' may be used uninitialized
- PR #1392 Update cudf Dockerfile and package_versions.sh
- PR #1385 Added INT8 type to `_schema_to_dtype` for use in GpuArrowReader
- PR #1393 Fixed a bug in `gdf_count_nonzero_mask()` for the case of 0 bits to count
- PR #1395 Update CONTRIBUTING to use the environment variable CUDF_HOME
- PR #1416 Fix bug at gdf_quantile_exact and gdf_quantile_appox
- PR #1421 Fix remove creation of series multiple times during `add_column()`
- PR #1405 CSV Reader: Fix memory leaks on read_csv() failure
- PR #1328 Fix CategoricalColumn to_arrow() null mask
- PR #1433 Fix NVStrings/categories includes
- PR #1432 Update NVStrings to 0.7.* to coincide with 0.7 development
- PR #1483 Modify CSV reader to avoid cropping blank quoted characters in non-string fields
- PR #1446 Merge 1275 hotfix from master into branch-0.7
- PR #1447 Fix legacy groupby apply docstring
- PR #1451 Fix hash join estimated result size is not correct
- PR #1454 Fix local build script improperly change directory permissions
- PR #1490 Require Dask 1.1.0+ for `is_dataframe_like` test or skip otherwise.
- PR #1491 Use more specific directories & groups in CODEOWNERS
- PR #1497 Fix Thrust issue on CentOS caused by missing default constructor of host_vector elements
- PR #1498 Add missing include guard to device_atomics.cuh and separated DEVICE_ATOMICS_TEST
- PR #1506 Fix csv-write call to updated NVStrings method
- PR #1510 Added nvstrings `fillna()` function
- PR #1507 Parquet Reader: Default string data to GDF_STRING
- PR #1535 Fix doc issue to ensure correct labelling of cudf.series
- PR #1537 Fix `undefined reference` link error in HashPartitionTest
- PR #1548 Fix ci/local/build.sh README from using an incorrect image example
- PR #1551 CSV Reader: Fix integer column name indexing
- PR #1586 Fix broken `scalar_wrapper::operator==`
- PR #1591 ORC/Parquet Reader: Fix missing import for FileNotFoundError exception
- PR #1573 Parquet Reader: Fix crash due to clash with ORC reader datasource
- PR #1607 Revert change of `column.to_dense_buffer` always return by copy for performance concerns
- PR #1618 ORC reader: fix assert & data output when nrows/skiprows isn't aligned to stripe boundaries
- PR #1631 Fix failure of TYPES_TEST on some gcc-7 based systems.
- PR #1641 CSV Reader: Fix skip_blank_lines behavior with Windows line terminators (\r\n)
- PR #1648 ORC reader: fix non-deterministic output when skiprows is non-zero
- PR #1676 Fix groupby `as_index` behaviour with `MultiIndex`
- PR #1659 Fix bug caused by empty groupbys and multiindex slicing throwing exceptions
- PR #1656 Correct Groupby failure in dask when un-aggregable columns are left in dataframe.
- PR #1689 Fix groupby performance regression
- PR #1694 Add Cython as a runtime dependency since it's required in `setup.py`
# cuDF 0.6.1 (25 Mar 2019)
## Bug Fixes
- PR #1275 Fix CentOS exception in DataFrame.hash_partition from using value "returned" by a void function
# cuDF 0.6.0 (22 Mar 2019)
## New Features
- PR #760 Raise `FileNotFoundError` instead of `GDF_FILE_ERROR` in `read_csv` if the file does not exist
- PR #539 Add Python bindings for replace function
- PR #823 Add Doxygen configuration to enable building HTML documentation for libcudf C/C++ API
- PR #807 CSV Reader: Add byte_range parameter to specify the range in the input file to be read
- PR #857 Add Tail method for Series/DataFrame and update Head method to use iloc
- PR #858 Add series feature hashing support
- PR #871 CSV Reader: Add support for NA values, including user specified strings
- PR #893 Adds PyArrow based parquet readers / writers to Python, fix category dtype handling, fix arrow ingest buffer size issues
- PR #867 CSV Reader: Add support for ignoring blank lines and comment lines
- PR #887 Add Series digitize method
- PR #895 Add Series groupby
- PR #898 Add DataFrame.groupby(level=0) support
- PR #920 Add feather, JSON, HDF5 readers / writers from PyArrow / Pandas
- PR #888 CSV Reader: Add prefix parameter for column names, used when parsing without a header
- PR #913 Add DLPack support: convert between cuDF DataFrame and DLTensor
- PR #939 Add ORC reader from PyArrow
- PR #918 Add Series.groupby(level=0) support
- PR #906 Add binary and comparison ops to DataFrame
- PR #958 Support unary and binary ops on indexes
- PR #964 Add `rename` method to `DataFrame`, `Series`, and `Index`
- PR #985 Add `Series.to_frame` method
- PR #985 Add `drop=` keyword to reset_index method
- PR #994 Remove references to pygdf
- PR #990 Add external series groupby support
- PR #988 Add top-level merge function to cuDF
- PR #992 Add comparison binaryops to DateTime columns
- PR #996 Replace relative path imports with absolute paths in tests
- PR #995 CSV Reader: Add index_col parameter to specify the column name or index to be used as row labels
- PR #1004 Add `from_gpu_matrix` method to DataFrame
- PR #997 Add property index setter
- PR #1007 Replace relative path imports with absolute paths in cudf
- PR #1013 select columns with df.columns
- PR #1016 Rename Series.unique_count() to nunique() to match pandas API
- PR #947 Prefixsum to handle nulls and float types
- PR #1029 Remove rest of relative path imports
- PR #1021 Add filtered selection with assignment for Dataframes
- PR #872 Adding NVCategory support to cudf apis
- PR #1052 Add left/right_index and left/right_on keywords to merge
- PR #1091 Add `indicator=` and `suffixes=` keywords to merge
- PR #1107 Add unsupported keywords to Series.fillna
- PR #1032 Add string support to cuDF python
- PR #1136 Removed `gdf_concat`
- PR #1153 Added function for getting the padded allocation size for valid bitmask
- PR #1148 Add cudf.sqrt for dataframes and Series
- PR #1159 Add Python bindings for libcudf dlpack functions
- PR #1155 Add __array_ufunc__ for DataFrame and Series for sqrt
- PR #1168 to_frame for series accepts a name argument
## Improvements
- PR #1218 Add dask-cudf page to API docs
- PR #892 Add support for heterogeneous types in binary ops with JIT
- PR #730 Improve performance of `gdf_table` constructor
- PR #561 Add Doxygen style comments to Join CUDA functions
- PR #813 unified libcudf API functions by replacing gpu_ with gdf_
- PR #822 Add support for `__cuda_array_interface__` for ingest
- PR #756 Consolidate common helper functions from unordered map and multimap
- PR #753 Improve performance of groupby sum and average, especially for cases with few groups.
- PR #836 Add ingest support for arrow chunked arrays in Column, Series, DataFrame creation
- PR #763 Format doxygen comments for csv_read_arg struct
- PR #532 CSV Reader: Use type dispatcher instead of switch block
- PR #694 Unit test utilities improvements
- PR #878 Add better indexing to Groupby
- PR #554 Add `empty` method and `is_monotonic` attribute to `Index`
- PR #1040 Fixed up Doxygen comment tags
- PR #909 CSV Reader: Avoid host->device->host copy for header row data
- PR #916 Improved unit testing and error checking for `gdf_column_concat`
- PR #941 Replace `numpy` call in `Series.hash_encode` with `numba`
- PR #942 Added increment/decrement operators for wrapper types
- PR #943 Updated `count_nonzero_mask` to return `num_rows` when the mask is null
- PR #952 Added trait to map C++ type to `gdf_dtype`
- PR #966 Updated RMM submodule.
- PR #998 Add IO reader/writer modules to API docs, fix for missing cudf.Series docs
- PR #1017 concatenate along columns for Series and DataFrames
- PR #1002 Support indexing a dataframe with another boolean dataframe
- PR #1018 Better concatenation for Series and Dataframes
- PR #1036 Use Numpydoc style docstrings
- PR #1047 Adding gdf_dtype_extra_info to gdf_column_view_augmented
- PR #1054 Added default ctor to SerialTrieNode to overcome Thrust issue in CentOS7 + CUDA10
- PR #1024 CSV Reader: Add support for hexadecimal integers in integral-type columns
- PR #1033 Update `fillna()` to use libcudf function `gdf_replace_nulls`
- PR #1066 Added inplace assignment for columns and select_dtypes for dataframes
- PR #1026 CSV Reader: Change the meaning and type of the quoting parameter to match Pandas
- PR #1100 Adds `CUDF_EXPECTS` error-checking macro
- PR #1092 Fix select_dtype docstring
- PR #1111 Added cudf::table
- PR #1108 Sorting for datetime columns
- PR #1120 Return a `Series` (not a `Column`) from `Series.cat.set_categories()`
- PR #1128 CSV Reader: The last data row does not need to be line terminated
- PR #1183 Bump Arrow version to 0.12.1
- PR #1208 Default to CXX11_ABI=ON
- PR #1252 Fix NVStrings dependencies for cuda 9.2 and 10.0
- PR #2037 Optimize the existing `gather` and `scatter` routines in `libcudf`
## Bug Fixes
- PR #821 Fix flake8 issues revealed by flake8 update
- PR #808 Resolved renamed `d_columns_valids` variable name
- PR #820 CSV Reader: fix the issue where reader adds additional rows when file uses \r\n as a line terminator
- PR #780 CSV Reader: Fix scientific notation parsing and null values for empty quotes
- PR #815 CSV Reader: Fix data parsing when tabs are present in the input CSV file
- PR #850 Fix bug where left joins where the left df has 0 rows causes a crash
- PR #861 Fix memory leak by preserving the boolean mask index
- PR #875 Handle unnamed indexes in to/from arrow functions
- PR #877 Fix ingest of 1 row arrow tables in from arrow function
- PR #876 Added missing `<type_traits>` include
- PR #889 Deleted test_rmm.py which has now moved to RMM repo
- PR #866 Merge v0.5.1 numpy ABI hotfix into 0.6
- PR #917 value_counts return int type on empty columns
- PR #611 Renamed `gdf_reduce_optimal_output_size()` -> `gdf_reduction_get_intermediate_output_size()`
- PR #923 fix index for negative slicing for cudf dataframe and series
- PR #927 CSV Reader: Fix category GDF_CATEGORY hashes not being computed properly
- PR #921 CSV Reader: Fix parsing errors with delim_whitespace, quotations in the header row, unnamed columns
- PR #933 Fix handling objects of all nulls in series creation
- PR #940 CSV Reader: Fix an issue where the last data row is missing when using byte_range
- PR #945 CSV Reader: Fix incorrect datetime64 when milliseconds or space separator are used
- PR #959 Groupby: Problem with column name lookup
- PR #950 Converting dataframe/recarry with non-contiguous arrays
- PR #963 CSV Reader: Fix another issue with missing data rows when using byte_range
- PR #999 Fix 0 sized kernel launches and empty sort_index exception
- PR #993 Fix dtype in selecting 0 rows from objects
- PR #1009 Fix performance regression in `to_pandas` method on DataFrame
- PR #1008 Remove custom dask communication approach
- PR #1001 CSV Reader: Fix a memory access error when reading a large (>2GB) file with date columns
- PR #1019 Binary Ops: Fix error when one input column has null mask but other doesn't
- PR #1014 CSV Reader: Fix false positives in bool value detection
- PR #1034 CSV Reader: Fix parsing floating point precision and leading zero exponents
- PR #1044 CSV Reader: Fix a segfault when byte range aligns with a page
- PR #1058 Added support for `DataFrame.loc[scalar]`
- PR #1060 Fix column creation with all valid nan values
- PR #1073 CSV Reader: Fix an issue where a column name includes the return character
- PR #1090 Updating Doxygen Comments
- PR #1080 Fix dtypes returned from loc / iloc because of lists
- PR #1102 CSV Reader: Minor fixes and memory usage improvements
- PR #1174 Fix release script typo
- PR #1137 Add prebuild script for CI
- PR #1118 Enhanced the `DataFrame.from_records()` feature
- PR #1129 Fix join performance with index parameter from using numpy array
- PR #1145 Issue with .agg call on multi-column dataframes
- PR #908 Some testing code cleanup
- PR #1167 Fix issue with null_count not being set after inplace fillna()
- PR #1184 Fix iloc performance regression
- PR #1185 Support left_on/right_on and also on=str in merge
- PR #1200 Fix allocating bitmasks with numba instead of rmm in allocate_mask function
- PR #1213 Fix bug with csv reader requesting subset of columns using wrong datatype
- PR #1223 gpuCI: Fix label on rapidsai channel on gpu build scripts
- PR #1242 Add explicit Thrust exec policy to fix NVCATEGORY_TEST segfault on some platforms
- PR #1246 Fix categorical tests that failed due to bad implicit type conversion
- PR #1255 Fix overwriting conda package main label uploads
- PR #1259 Add dlpack includes to pip build
# cuDF 0.5.1 (05 Feb 2019)
## Bug Fixes
- PR #842 Avoid using numpy via cimport to prevent ABI issues in Cython compilation
# cuDF 0.5.0 (28 Jan 2019)
## New Features
- PR #722 Add bzip2 decompression support to `read_csv()`
- PR #693 add ZLIB-based GZIP/ZIP support to `read_csv_strings()`
- PR #411 added null support to gdf_order_by (new API) and cudf_table::sort
- PR #525 Added GitHub Issue templates for bugs, documentation, new features, and questions
- PR #501 CSV Reader: Add support for user-specified decimal point and thousands separator to read_csv_strings()
- PR #455 CSV Reader: Add support for user-specified decimal point and thousands separator to read_csv()
- PR #439 add `DataFrame.drop` method similar to pandas
- PR #356 add `DataFrame.transpose` method and `DataFrame.T` property similar to pandas
- PR #505 CSV Reader: Add support for user-specified boolean values
- PR #350 Implemented Series replace function
- PR #490 Added print_env.sh script to gather relevant environment details when reporting cuDF issues
- PR #474 add ZLIB-based GZIP/ZIP support to `read_csv()`
- PR #547 Added melt similar to `pandas.melt()`
- PR #491 Add CI test script to check for updates to CHANGELOG.md in PRs
- PR #550 Add CI test script to check for style issues in PRs
- PR #558 Add CI scripts for cpu-based conda and gpu-based test builds
- PR #524 Add Boolean Indexing
- PR #564 Update python `sort_values` method to use updated libcudf `gdf_order_by` API
- PR #509 CSV Reader: Input CSV file can now be passed in as a text or a binary buffer
- PR #607 Add `__iter__` and iteritems to DataFrame class
- PR #643 added a new api gdf_replace_nulls that allows a user to replace nulls in a column
## Improvements
- PR #426 Removed sort-based groupby and refactored existing groupby APIs. Also improves C++/CUDA compile time.
- PR #461 Add `CUDF_HOME` variable in README.md to replace relative pathing.
- PR #472 RMM: Created centralized rmm::device_vector alias and rmm::exec_policy
- PR #500 Improved the concurrent hash map class to support partitioned (multi-pass) hash table building.
- PR #454 Improve CSV reader docs and examples
- PR #465 Added templated C++ API for RMM to avoid explicit cast to `void**`
- PR #513 `.gitignore` tweaks
- PR #521 Add `assert_eq` function for testing
- PR #502 Simplify Dockerfile for local dev, eliminate old conda/pip envs
- PR #549 Adds `-rdynamic` compiler flag to nvcc for Debug builds
- PR #577 Added external C++ API for scatter/gather functions
- PR #583 Updated `gdf_size_type` to `int`
- PR #617 Added .dockerignore file. Prevents adding stale cmake cache files to the docker container
- PR #658 Reduced `JOIN_TEST` time by isolating overflow test of hash table size computation
- PR #664 Added Debugging instructions to README
- PR #651 Remove noqa marks in `__init__.py` files
- PR #671 CSV Reader: uncompressed buffer input can be parsed without explicitly specifying compression as None
- PR #684 Make RMM a submodule
- PR #718 Ensure sum, product, min, max methods pandas compatibility on empty datasets
- PR #720 Refactored Index classes to make them more Pandas-like, added CategoricalIndex
- PR #749 Improve to_arrow and from_arrow Pandas compatibility
- PR #766 Remove TravisCI references, remove unused variables from CMake, fix ARROW_VERSION in Cmake
- PR #773 Add build-args back to Dockerfile and handle dependencies based on environment yml file
- PR #781 Move thirdparty submodules to root and symlink in /cpp
- PR #843 Fix broken cudf/python API examples, add new methods to the API index
## Bug Fixes
- PR #569 CSV Reader: Fix days being off-by-one when parsing some dates
- PR #531 CSV Reader: Fix incorrect parsing of quoted numbers
- PR #465 Added templated C++ API for RMM to avoid explicit cast to `void**`
- PR #473 Added missing <random> include
- PR #478 CSV Reader: Add api support for auto column detection, header, mangle_dupe_cols, usecols
- PR #495 Updated README to correct where cffi pytest should be executed
- PR #501 Fix the intermittent segfault caused by the `thousands` and `compression` parameters in the csv reader
- PR #502 Simplify Dockerfile for local dev, eliminate old conda/pip envs
- PR #512 fix bug for `on` parameter in `DataFrame.merge` to allow for None or single column name
- PR #511 Updated python/cudf/bindings/join.pyx to fix cudf merge printing out dtypes
- PR #513 `.gitignore` tweaks
- PR #521 Add `assert_eq` function for testing
- PR #537 Fix CMAKE_CUDA_STANDARD_REQURIED typo in CMakeLists.txt
- PR #447 Fix silent failure in initializing DataFrame from generator
- PR #545 Temporarily disable csv reader thousands test to prevent segfault (test re-enabled in PR #501)
- PR #559 Fix Assertion error while using `applymap` to change the output dtype
- PR #575 Update `print_env.sh` script to better handle missing commands
- PR #612 Prevent an exception from occurring with true division on integer series.
- PR #630 Fix deprecation warning for `pd.core.common.is_categorical_dtype`
- PR #622 Fix Series.append() behaviour when appending values with different numeric dtype
- PR #603 Fix error while creating an empty column using None.
- PR #673 Fix array of strings not being caught in from_pandas
- PR #644 Fix return type and column support of dataframe.quantile()
- PR #634 Fix create `DataFrame.from_pandas()` with numeric column names
- PR #654 Add resolution check for GDF_TIMESTAMP in Join
- PR #648 Enforce one-to-one copy required when using `numba>=0.42.0`
- PR #645 Fix cmake build type handling not setting debug options when CMAKE_BUILD_TYPE=="Debug"
- PR #669 Fix GIL deadlock when launching multiple python threads that make Cython calls
- PR #665 Reworked the hash map to add a way to report the destination partition for a key
- PR #670 CMAKE: Fix env include path taking precedence over libcudf source headers
- PR #674 Check for gdf supported column types
- PR #677 Fix 'gdf_csv_test_Dates' gtest failure due to missing nrows parameter
- PR #604 Fix the parsing errors while reading a csv file using `sep` instead of `delimiter`.
- PR #686 Fix converting nulls to NaT values when converting Series to Pandas/Numpy
- PR #689 CSV Reader: Fix behavior with skiprows+header to match pandas implementation
- PR #691 Fixes Join on empty input DFs
- PR #706 CSV Reader: Fix broken dtype inference when whitespace is in data
- PR #717 CSV reader: fix behavior when parsing a csv file with no data rows
- PR #724 CSV Reader: fix build issue due to parameter type mismatch in a std::max call
- PR #734 Prevents reading undefined memory in gpu_expand_mask_bits numba kernel
- PR #747 CSV Reader: fix an issue where CUDA allocations fail with some large input files
- PR #750 Fix race condition for handling NVStrings in CMake
- PR #719 Fix merge column ordering
- PR #770 Fix issue where RMM submodule pointed to wrong branch and pin other to correct branches
- PR #778 Fix hard coded ABI off setting
- PR #784 Update RMM submodule commit-ish and pip paths
- PR #794 Update `rmm::exec_policy` usage to fix segmentation faults when used as temporary allocator.
- PR #800 Point git submodules to branches of forks instead of exact commits
# cuDF 0.4.0 (05 Dec 2018)
## New Features
- PR #398 add pandas-compatible `DataFrame.shape()` and `Series.shape()`
- PR #394 New documentation feature "10 Minutes to cuDF"
- PR #361 CSV Reader: Add support for strings with delimiters
## Improvements
- PR #436 Improvements for type_dispatcher and wrapper structs
- PR #429 Add CHANGELOG.md (this file)
- PR #266 use faster CUDA-accelerated DataFrame column/Series concatenation.
- PR #379 new C++ `type_dispatcher` reduces code complexity in supporting many data types.
- PR #349 Improve performance for creating columns from memoryview objects
- PR #445 Update reductions to use type_dispatcher. Adds integer types support to sum_of_squares.
- PR #448 Improve installation instructions in README.md
- PR #456 Change default CMake build to Release, and added option for disabling compilation of tests
## Bug Fixes
- PR #444 Fix csv_test CUDA too many resources requested fail.
- PR #396 added missing output buffer in validity tests for groupbys.
- PR #408 Dockerfile updates for source reorganization
- PR #437 Add cffi to Dockerfile conda env, fixes "cannot import name 'librmm'"
- PR #417 Fix `map_test` failure with CUDA 10
- PR #414 Fix CMake installation include file paths
- PR #418 Properly cast string dtypes to programmatic dtypes when instantiating columns
- PR #427 Fix and tests for Concatenation illegal memory access with nulls
# cuDF 0.3.0 (23 Nov 2018)
## New Features
- PR #336 CSV Reader string support
## Improvements
- PR #354 source code refactored for better organization. CMake build system overhaul. Beginning of transition to Cython bindings.
- PR #290 Add support for typecasting to/from datetime dtype
- PR #323 Add handling pyarrow boolean arrays in input/out, add tests
- PR #325 GDF_VALIDITY_UNSUPPORTED now returned for algorithms that don't support non-empty valid bitmasks
- PR #381 Faster InputTooLarge Join test completes in ms rather than minutes.
- PR #373 .gitignore improvements
- PR #367 Doc cleanup & examples for DataFrame methods
- PR #333 Add Rapids Memory Manager documentation
- PR #321 Rapids Memory Manager adds file/line location logging and convenience macros
- PR #334 Implement DataFrame `__copy__` and `__deepcopy__`
- PR #271 Add NVTX ranges to pygdf
- PR #311 Document system requirements for conda install
## Bug Fixes
- PR #337 Retain index on `scale()` function
- PR #344 Fix test failure due to PyArrow 0.11 Boolean handling
- PR #364 Remove noexcept from managed_allocator; CMakeLists fix for NVstrings
- PR #357 Fix bug that made all series be considered booleans for indexing
- PR #351 replace conda env configuration for developers
- PRs #346 #360 Fix CSV reading of negative numbers
- PR #342 Fix CMake to use conda-installed nvstrings
- PR #341 Preserve categorical dtype after groupby aggregations
- PR #315 ReadTheDocs build update to fix missing libcuda.so
- PR #320 FIX out-of-bounds access error in reductions.cu
- PR #319 Fix out-of-bounds memory access in libcudf count_valid_bits
- PR #303 Fix printing empty dataframe
# cuDF 0.2.0 and cuDF 0.1.0
These were initial releases of cuDF based on previously separate pyGDF and libGDF libraries.
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/build.sh
|
#!/bin/bash
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# cuDF build script
# This script is used to build the component(s) in this repo from
# source, and can be called with various options to customize the
# build as needed (see the help output for details)
# Abort script on first error
set -e
NUMARGS=$#
ARGS=$*
# NOTE: ensure all dir changes are relative to the location of this
# script, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)
VALIDARGS="clean libcudf cudf cudfjar dask_cudf benchmarks tests libcudf_kafka cudf_kafka custreamz -v -g -n -l --allgpuarch --disable_nvtx --opensource_nvcomp --show_depr_warn --ptds -h --build_metrics --incl_cache_stats"
HELP="$0 [clean] [libcudf] [cudf] [cudfjar] [dask_cudf] [benchmarks] [tests] [libcudf_kafka] [cudf_kafka] [custreamz] [-v] [-g] [-n] [-h] [--cmake-args=\\\"<args>\\\"]
clean - remove all existing build artifacts and configuration (start
over)
libcudf - build the cudf C++ code only
cudf - build the cudf Python package
cudfjar - build cudf JAR with static libcudf using devtoolset toolchain
dask_cudf - build the dask_cudf Python package
benchmarks - build benchmarks
tests - build tests
libcudf_kafka - build the libcudf_kafka C++ code only
cudf_kafka - build the cudf_kafka Python package
custreamz - build the custreamz Python package
-v - verbose build mode
-g - build for debug
-n - no install step (does not affect Python)
--allgpuarch - build for all supported GPU architectures
--disable_nvtx - disable inserting NVTX profiling ranges
--opensource_nvcomp - disable use of proprietary nvcomp extensions
--show_depr_warn - show cmake deprecation warnings
--ptds - enable per-thread default stream
--build_metrics - generate build metrics report for libcudf
--incl_cache_stats - include cache statistics in build metrics report
--cmake-args=\\\"<args>\\\" - pass arbitrary list of CMake configuration options (escape all quotes in argument)
-h | --h[elp] - print this text
default action (no args) is to build and install 'libcudf' then 'cudf'
then 'dask_cudf' targets
"
LIB_BUILD_DIR=${LIB_BUILD_DIR:=${REPODIR}/cpp/build}
KAFKA_LIB_BUILD_DIR=${KAFKA_LIB_BUILD_DIR:=${REPODIR}/cpp/libcudf_kafka/build}
CUDF_KAFKA_BUILD_DIR=${REPODIR}/python/cudf_kafka/build
CUDF_BUILD_DIR=${REPODIR}/python/cudf/build
DASK_CUDF_BUILD_DIR=${REPODIR}/python/dask_cudf/build
CUSTREAMZ_BUILD_DIR=${REPODIR}/python/custreamz/build
CUDF_JAR_JAVA_BUILD_DIR="$REPODIR/java/target"
BUILD_DIRS="${LIB_BUILD_DIR} ${CUDF_BUILD_DIR} ${DASK_CUDF_BUILD_DIR} ${KAFKA_LIB_BUILD_DIR} ${CUDF_KAFKA_BUILD_DIR} ${CUSTREAMZ_BUILD_DIR} ${CUDF_JAR_JAVA_BUILD_DIR}"
# Set defaults for vars modified by flags to this script
VERBOSE_FLAG=""
BUILD_TYPE=Release
INSTALL_TARGET=install
BUILD_BENCHMARKS=OFF
BUILD_ALL_GPU_ARCH=0
BUILD_NVTX=ON
BUILD_TESTS=OFF
BUILD_DISABLE_DEPRECATION_WARNINGS=ON
BUILD_PER_THREAD_DEFAULT_STREAM=OFF
BUILD_REPORT_METRICS=OFF
BUILD_REPORT_INCL_CACHE_STATS=OFF
USE_PROPRIETARY_NVCOMP=ON
# Set defaults for vars that may not have been defined externally
# FIXME: if INSTALL_PREFIX is not set, check PREFIX, then check
# CONDA_PREFIX, but there is no fallback from there!
INSTALL_PREFIX=${INSTALL_PREFIX:=${PREFIX:=${CONDA_PREFIX}}}
PARALLEL_LEVEL=${PARALLEL_LEVEL:=$(nproc)}
function hasArg {
(( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}
function cmakeArgs {
# Check for multiple cmake args options
if [[ $(echo $ARGS | { grep -Eo "\-\-cmake\-args" || true; } | wc -l ) -gt 1 ]]; then
echo "Multiple --cmake-args options were provided, please provide only one: ${ARGS}"
exit 1
fi
# Check for cmake args option
if [[ -n $(echo $ARGS | { grep -E "\-\-cmake\-args" || true; } ) ]]; then
# There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
# the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
# on the invalid option error
EXTRA_CMAKE_ARGS=$(echo $ARGS | { grep -Eo "\-\-cmake\-args=\".+\"" || true; })
if [[ -n ${EXTRA_CMAKE_ARGS} ]]; then
# Remove the full EXTRA_CMAKE_ARGS argument from list of args so that it passes validArgs function
ARGS=${ARGS//$EXTRA_CMAKE_ARGS/}
# Filter the full argument down to just the extra string that will be added to cmake call
EXTRA_CMAKE_ARGS=$(echo $EXTRA_CMAKE_ARGS | grep -Eo "\".+\"" | sed -e 's/^"//' -e 's/"$//')
fi
fi
}
function buildAll {
((${NUMARGS} == 0 )) || !(echo " ${ARGS} " | grep -q " [^-]\+ ")
}
function buildLibCudfJniInDocker {
local cudaVersion="11.5.0"
local imageName="cudf-build:${cudaVersion}-devel-centos7"
local CMAKE_GENERATOR="${CMAKE_GENERATOR:-Ninja}"
local workspaceDir="/rapids"
local localMavenRepo=${LOCAL_MAVEN_REPO:-"$HOME/.m2/repository"}
local workspaceRepoDir="$workspaceDir/cudf"
local workspaceMavenRepoDir="$workspaceDir/.m2/repository"
local workspaceCcacheDir="$workspaceDir/.ccache"
mkdir -p "$CUDF_JAR_JAVA_BUILD_DIR/libcudf-cmake-build"
mkdir -p "$HOME/.ccache" "$HOME/.m2"
nvidia-docker build \
-f java/ci/Dockerfile.centos7 \
--build-arg CUDA_VERSION=${cudaVersion} \
-t $imageName .
nvidia-docker run -it -u $(id -u):$(id -g) --rm \
-e PARALLEL_LEVEL \
-e CCACHE_DISABLE \
-e CCACHE_DIR="$workspaceCcacheDir" \
-v "/etc/group:/etc/group:ro" \
-v "/etc/passwd:/etc/passwd:ro" \
-v "/etc/shadow:/etc/shadow:ro" \
-v "/etc/sudoers.d:/etc/sudoers.d:ro" \
-v "$HOME/.ccache:$workspaceCcacheDir:rw" \
-v "$REPODIR:$workspaceRepoDir:rw" \
-v "$localMavenRepo:$workspaceMavenRepoDir:rw" \
--workdir "$workspaceRepoDir/java/target/libcudf-cmake-build" \
${imageName} \
scl enable devtoolset-9 \
"cmake $workspaceRepoDir/cpp \
-G${CMAKE_GENERATOR} \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_LINKER_LAUNCHER=ccache \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DCUDA_STATIC_RUNTIME=ON \
-DCMAKE_CUDA_ARCHITECTURES=${CUDF_CMAKE_CUDA_ARCHITECTURES} \
-DCMAKE_INSTALL_PREFIX=/usr/local/rapids \
-DUSE_NVTX=ON \
-DCUDF_USE_PROPRIETARY_NVCOMP=ON \
-DCUDF_USE_ARROW_STATIC=ON \
-DCUDF_ENABLE_ARROW_S3=OFF \
-DBUILD_TESTS=OFF \
-DCUDF_USE_PER_THREAD_DEFAULT_STREAM=ON \
-DRMM_LOGGING_LEVEL=OFF \
-DBUILD_SHARED_LIBS=OFF && \
cmake --build . --parallel ${PARALLEL_LEVEL} && \
cd $workspaceRepoDir/java && \
mvn ${MVN_PHASES:-"package"} \
-Dmaven.repo.local=$workspaceMavenRepoDir \
-DskipTests=${SKIP_TESTS:-false} \
-Dparallel.level=${PARALLEL_LEVEL} \
-Dcmake.ccache.opts='-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_LINKER_LAUNCHER=ccache' \
-DCUDF_CPP_BUILD_DIR=$workspaceRepoDir/java/target/libcudf-cmake-build \
-DCUDA_STATIC_RUNTIME=ON \
-DCUDF_USE_PER_THREAD_DEFAULT_STREAM=ON \
-DUSE_GDS=ON \
-DGPU_ARCHS=${CUDF_CMAKE_CUDA_ARCHITECTURES} \
-DCUDF_JNI_LIBCUDF_STATIC=ON \
-Dtest=*,!CuFileTest,!CudaFatalTest,!ColumnViewNonEmptyNullsTest"
}
if hasArg -h || hasArg --h || hasArg --help; then
echo "${HELP}"
exit 0
fi
# Check for valid usage
if (( ${NUMARGS} != 0 )); then
# Check for cmake args
cmakeArgs
for a in ${ARGS}; do
if ! (echo " ${VALIDARGS} " | grep -q " ${a} "); then
echo "Invalid option or formatting, check --help: ${a}"
exit 1
fi
done
fi
# Process flags
if hasArg -v; then
VERBOSE_FLAG="-v"
fi
if hasArg -g; then
BUILD_TYPE=Debug
fi
if hasArg -n; then
INSTALL_TARGET=""
LIBCUDF_BUILD_DIR=${LIB_BUILD_DIR}
fi
if hasArg --allgpuarch; then
BUILD_ALL_GPU_ARCH=1
fi
if hasArg benchmarks; then
BUILD_BENCHMARKS=ON
fi
if hasArg tests; then
BUILD_TESTS=ON
fi
if hasArg --disable_nvtx; then
BUILD_NVTX="OFF"
fi
if hasArg --opensource_nvcomp; then
USE_PROPRIETARY_NVCOMP="OFF"
fi
if hasArg --show_depr_warn; then
BUILD_DISABLE_DEPRECATION_WARNINGS=OFF
fi
if hasArg --ptds; then
BUILD_PER_THREAD_DEFAULT_STREAM=ON
fi
if hasArg --build_metrics; then
BUILD_REPORT_METRICS=ON
fi
if hasArg --incl_cache_stats; then
BUILD_REPORT_INCL_CACHE_STATS=ON
fi
# Append `-DFIND_CUDF_CPP=ON` to EXTRA_CMAKE_ARGS unless a user specified the option.
if [[ "${EXTRA_CMAKE_ARGS}" != *"DFIND_CUDF_CPP"* ]]; then
EXTRA_CMAKE_ARGS="${EXTRA_CMAKE_ARGS} -DFIND_CUDF_CPP=ON"
fi
# If clean given, run it prior to any other steps
if hasArg clean; then
# If the dirs to clean are mounted dirs in a container, the
# contents should be removed but the mounted dirs will remain.
# The find removes all contents but leaves the dirs, the rmdir
# attempts to remove the dirs but can fail safely.
for bd in ${BUILD_DIRS}; do
if [ -d ${bd} ]; then
find ${bd} -mindepth 1 -delete
rmdir ${bd} || true
fi
done
# Cleaning up python artifacts
find ${REPODIR}/python/ | grep -E "(__pycache__|\.pyc|\.pyo|\.so|\_skbuild$)" | xargs rm -rf
fi
################################################################################
# Configure, build, and install libcudf
if buildAll || hasArg libcudf || hasArg cudf || hasArg cudfjar; then
if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
CUDF_CMAKE_CUDA_ARCHITECTURES="${CUDF_CMAKE_CUDA_ARCHITECTURES:-NATIVE}"
if [[ "$CUDF_CMAKE_CUDA_ARCHITECTURES" == "NATIVE" ]]; then
echo "Building for the architecture of the GPU in the system..."
else
echo "Building for the GPU architecture(s) $CUDF_CMAKE_CUDA_ARCHITECTURES ..."
fi
else
CUDF_CMAKE_CUDA_ARCHITECTURES="RAPIDS"
echo "Building for *ALL* supported GPU architectures..."
fi
fi
if buildAll || hasArg libcudf; then
# get the current count before the compile starts
if [[ "$BUILD_REPORT_INCL_CACHE_STATS" == "ON" && -x "$(command -v sccache)" ]]; then
# zero the sccache statistics
sccache --zero-stats
fi
cmake -S $REPODIR/cpp -B ${LIB_BUILD_DIR} \
-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DCMAKE_CUDA_ARCHITECTURES=${CUDF_CMAKE_CUDA_ARCHITECTURES} \
-DUSE_NVTX=${BUILD_NVTX} \
-DCUDF_USE_PROPRIETARY_NVCOMP=${USE_PROPRIETARY_NVCOMP} \
-DBUILD_TESTS=${BUILD_TESTS} \
-DBUILD_BENCHMARKS=${BUILD_BENCHMARKS} \
-DDISABLE_DEPRECATION_WARNINGS=${BUILD_DISABLE_DEPRECATION_WARNINGS} \
-DCUDF_USE_PER_THREAD_DEFAULT_STREAM=${BUILD_PER_THREAD_DEFAULT_STREAM} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
${EXTRA_CMAKE_ARGS}
cd ${LIB_BUILD_DIR}
compile_start=$(date +%s)
cmake --build . -j${PARALLEL_LEVEL} ${VERBOSE_FLAG}
compile_end=$(date +%s)
compile_total=$(( compile_end - compile_start ))
# Record build times
if [[ "$BUILD_REPORT_METRICS" == "ON" && -f "${LIB_BUILD_DIR}/.ninja_log" ]]; then
echo "Formatting build metrics"
MSG=""
# get some sccache stats after the compile
if [[ "$BUILD_REPORT_INCL_CACHE_STATS" == "ON" && -x "$(command -v sccache)" ]]; then
COMPILE_REQUESTS=$(sccache -s | grep "Compile requests \+ [0-9]\+$" | awk '{ print $NF }')
CACHE_HITS=$(sccache -s | grep "Cache hits \+ [0-9]\+$" | awk '{ print $NF }')
HIT_RATE=$(echo - | awk "{printf \"%.2f\n\", $CACHE_HITS / $COMPILE_REQUESTS * 100}")
MSG="${MSG}<br/>cache hit rate ${HIT_RATE} %"
fi
MSG="${MSG}<br/>parallel setting: $PARALLEL_LEVEL"
MSG="${MSG}<br/>parallel build time: $compile_total seconds"
if [[ -f "${LIB_BUILD_DIR}/libcudf.so" ]]; then
LIBCUDF_FS=$(ls -lh ${LIB_BUILD_DIR}/libcudf.so | awk '{print $5}')
MSG="${MSG}<br/>libcudf.so size: $LIBCUDF_FS"
fi
BMR_DIR=${RAPIDS_ARTIFACTS_DIR:-"${LIB_BUILD_DIR}"}
echo "Metrics output dir: [$BMR_DIR]"
mkdir -p ${BMR_DIR}
MSG_OUTFILE="$(mktemp)"
echo "$MSG" > "${MSG_OUTFILE}"
python ${REPODIR}/cpp/scripts/sort_ninja_log.py ${LIB_BUILD_DIR}/.ninja_log --fmt html --msg "${MSG_OUTFILE}" > ${BMR_DIR}/ninja_log.html
cp ${LIB_BUILD_DIR}/.ninja_log ${BMR_DIR}/ninja.log
fi
if [[ ${INSTALL_TARGET} != "" ]]; then
cmake --build . -j${PARALLEL_LEVEL} --target install ${VERBOSE_FLAG}
fi
fi
# Build and install the cudf Python package
if buildAll || hasArg cudf; then
cd ${REPODIR}/python/cudf
SKBUILD_CONFIGURE_OPTIONS="-DCMAKE_PREFIX_PATH=${INSTALL_PREFIX} -DCMAKE_LIBRARY_PATH=${LIBCUDF_BUILD_DIR} -DCMAKE_CUDA_ARCHITECTURES=${CUDF_CMAKE_CUDA_ARCHITECTURES} ${EXTRA_CMAKE_ARGS}" \
SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL:-1}" \
python -m pip install --no-build-isolation --no-deps .
fi
# Build and install the dask_cudf Python package
if buildAll || hasArg dask_cudf; then
cd ${REPODIR}/python/dask_cudf
python -m pip install --no-build-isolation --no-deps .
fi
if hasArg cudfjar; then
buildLibCudfJniInDocker
fi
# Build libcudf_kafka library
if hasArg libcudf_kafka; then
cmake -S $REPODIR/cpp/libcudf_kafka -B ${KAFKA_LIB_BUILD_DIR} \
-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DBUILD_TESTS=${BUILD_TESTS} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
${EXTRA_CMAKE_ARGS}
cd ${KAFKA_LIB_BUILD_DIR}
cmake --build . -j${PARALLEL_LEVEL} ${VERBOSE_FLAG}
if [[ ${INSTALL_TARGET} != "" ]]; then
cmake --build . -j${PARALLEL_LEVEL} --target install ${VERBOSE_FLAG}
fi
fi
# build cudf_kafka Python package
if hasArg cudf_kafka; then
cd ${REPODIR}/python/cudf_kafka
SKBUILD_CONFIGURE_OPTIONS="-DCMAKE_PREFIX_PATH=${INSTALL_PREFIX} -DCMAKE_LIBRARY_PATH=${LIBCUDF_BUILD_DIR} ${EXTRA_CMAKE_ARGS}" \
SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL:-1}" \
python -m pip install --no-build-isolation --no-deps .
fi
# build custreamz Python package
if hasArg custreamz; then
cd ${REPODIR}/python/custreamz
SKBUILD_CONFIGURE_OPTIONS="-DCMAKE_LIBRARY_PATH=${LIBCUDF_BUILD_DIR}" \
SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL:-1}" \
python -m pip install --no-build-isolation --no-deps .
fi
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/codecov.yml
|
#Configuration File for CodeCov
coverage:
status:
project: off
patch:
default:
target: auto
threshold: 5%
github_checks:
annotations: true
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/dependencies.yaml
|
# Dependency list for https://github.com/rapidsai/dependency-file-generator
files:
all:
output: conda
matrix:
cuda: ["11.8", "12.0"]
arch: [x86_64]
includes:
- build_all
- build_cpp
- build_wheels
- build_python_common
- build_python_cudf
- cudatoolkit
- develop
- docs
- notebooks
- py_version
- run_common
- run_cudf
- run_dask_cudf
- run_custreamz
- test_cpp
- test_python_common
- test_python_cudf
- test_python_dask_cudf
- depends_on_cupy
test_cpp:
output: none
includes:
- cudatoolkit
- test_cpp
- libarrow_run
test_python:
output: none
includes:
- cudatoolkit
- py_version
- test_python_common
- test_python_cudf
- test_python_dask_cudf
- pyarrow_run
test_java:
output: none
includes:
- build_all
- libarrow_run
- cudatoolkit
- test_java
test_notebooks:
output: none
includes:
- notebooks
- py_version
checks:
output: none
includes:
- develop
- py_version
docs:
output: none
includes:
- cudatoolkit
- docs
- libarrow_run
- py_version
py_build_cudf:
output: pyproject
pyproject_dir: python/cudf
extras:
table: build-system
includes:
- build_all
- build_python_common
- build_python_cudf
- build_wheels
py_run_cudf:
output: pyproject
pyproject_dir: python/cudf
extras:
table: project
includes:
- run_common
- run_cudf
- pyarrow_run
- depends_on_cupy
py_test_cudf:
output: pyproject
pyproject_dir: python/cudf
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
- test_python_cudf
py_test_pandas_cudf:
output: pyproject
pyproject_dir: python/cudf
extras:
table: project.optional-dependencies
key: pandas_tests
includes:
- test_python_pandas_cudf
py_test_cudf_pandas:
output: pyproject
pyproject_dir: python/cudf
extras:
table: project.optional-dependencies
key: cudf_pandas_tests
includes:
- test_python_cudf_pandas
py_build_dask_cudf:
output: pyproject
pyproject_dir: python/dask_cudf
extras:
table: build-system
includes:
- build_wheels
py_run_dask_cudf:
output: pyproject
pyproject_dir: python/dask_cudf
extras:
table: project
includes:
- run_common
- run_dask_cudf
- depends_on_cudf
- depends_on_cupy
py_test_dask_cudf:
output: pyproject
pyproject_dir: python/dask_cudf
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
- test_python_dask_cudf
py_build_cudf_kafka:
output: pyproject
pyproject_dir: python/cudf_kafka
extras:
table: build-system
includes:
- build_python_common
- build_wheels
py_run_cudf_kafka:
output: pyproject
pyproject_dir: python/cudf_kafka
extras:
table: project
includes:
- depends_on_cudf
py_test_cudf_kafka:
output: pyproject
pyproject_dir: python/cudf_kafka
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
py_build_custreamz:
output: pyproject
pyproject_dir: python/custreamz
extras:
table: build-system
includes:
- build_wheels
py_run_custreamz:
output: pyproject
pyproject_dir: python/custreamz
extras:
table: project
includes:
- run_custreamz
- depends_on_cudf
- depends_on_cudf_kafka
py_test_custreamz:
output: pyproject
pyproject_dir: python/custreamz
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
- pytorch
- conda-forge
- nvidia
dependencies:
build_all:
common:
- output_types: [conda, requirements, pyproject]
packages:
- &cmake_ver cmake>=3.26.4
- ninja
- output_types: conda
packages:
- c-compiler
- cxx-compiler
- dlpack>=0.5,<0.6.0a0
- zlib>=1.2.13
specific:
- output_types: conda
matrices:
- matrix:
arch: x86_64
packages:
- gcc_linux-64=11.*
- sysroot_linux-64==2.17
- matrix:
arch: aarch64
packages:
- gcc_linux-aarch64=11.*
- sysroot_linux-aarch64==2.17
- output_types: conda
matrices:
- matrix:
cuda: "12.0"
packages:
- cuda-version=12.0
- cuda-nvcc
- matrix:
arch: x86_64
cuda: "11.8"
packages:
- nvcc_linux-64=11.8
- matrix:
arch: aarch64
cuda: "11.8"
packages:
- nvcc_linux-aarch64=11.8
build_cpp:
common:
- output_types: conda
packages:
- fmt>=9.1.0,<10
- &gbench benchmark==1.8.0
      - &gtest gtest>=1.13.0
- &gmock gmock>=1.13.0
- librmm==24.2.*
- libkvikio==24.2.*
# Hard pin the patch version used during the build. This must be kept
# in sync with the version pinned in get_arrow.cmake.
- libarrow-all==14.0.1.*
- librdkafka>=1.9.0,<1.10.0a0
# Align nvcomp version with rapids-cmake
- nvcomp==3.0.4
- spdlog>=1.11.0,<1.12
build_wheels:
common:
- output_types: [requirements, pyproject]
packages:
- wheel
- setuptools
build_python_common:
common:
- output_types: [conda, requirements, pyproject]
packages:
- cython>=3.0.3
# TODO: Pin to numpy<1.25 until cudf requires pandas 2
- &numpy numpy>=1.21,<1.25
- scikit-build>=0.13.1
- output_types: [conda, requirements, pyproject]
packages:
# Hard pin the patch version used during the build. This must be kept
# in sync with the version pinned in get_arrow.cmake.
- pyarrow==14.0.1.*
build_python_cudf:
common:
- output_types: conda
packages:
- &rmm_conda rmm==24.2.*
- &protobuf protobuf>=4.21,<5
- pip
- pip:
- git+https://github.com/python-streamz/streamz.git@master
- output_types: [requirements, pyproject]
packages:
- protoc-wheel
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
# This index is needed for rmm-cu{11,12}.
- --extra-index-url=https://pypi.nvidia.com
- git+https://github.com/python-streamz/streamz.git@master
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &build_python_packages_cu12
- &rmm_cu12 rmm-cu12==24.2.*
- {matrix: {cuda: "12.1"}, packages: *build_python_packages_cu12}
- {matrix: {cuda: "12.0"}, packages: *build_python_packages_cu12}
- matrix: {cuda: "11.8"}
packages: &build_python_packages_cu11
- &rmm_cu11 rmm-cu11==24.2.*
- {matrix: {cuda: "11.5"}, packages: *build_python_packages_cu11}
- {matrix: {cuda: "11.4"}, packages: *build_python_packages_cu11}
- {matrix: {cuda: "11.2"}, packages: *build_python_packages_cu11}
- {matrix: null, packages: null }
- output_types: pyproject
matrices:
- {matrix: null, packages: [*rmm_conda] }
libarrow_run:
common:
- output_types: conda
packages:
# Allow runtime version to float up to minor version
# Disallow libarrow 14.0.0 due to a CVE
- libarrow-all>=14.0.1,<15.0.0a0
pyarrow_run:
common:
- output_types: [conda, requirements, pyproject]
packages:
# Allow runtime version to float up to minor version
# Disallow pyarrow 14.0.0 due to a CVE
- pyarrow>=14.0.1,<15.0.0a0
cudatoolkit:
specific:
- output_types: conda
matrices:
- matrix:
cuda: "12.0"
packages:
- cuda-version=12.0
- cuda-cudart-dev
- cuda-nvrtc-dev
- cuda-nvtx-dev
- libcurand-dev
- matrix:
cuda: "11.8"
packages:
- cuda-version=11.8
- cudatoolkit
- cuda-nvtx=11.8
- libcurand-dev=10.3.0.86
- libcurand=10.3.0.86
- matrix:
cuda: "11.5"
packages:
- cuda-version=11.5
- cudatoolkit
- cuda-nvtx=11.5
# Can't hard pin the version since 11.x is missing many
# packages for specific versions
- libcurand-dev>=10.2.6.48,<=10.2.7.107
- libcurand>=10.2.6.48,<=10.2.7.107
- matrix:
cuda: "11.4"
packages:
- cuda-version=11.4
- cudatoolkit
- &cudanvtx114 cuda-nvtx=11.4
- &libcurand_dev114 libcurand-dev>=10.2.5.43,<=10.2.5.120
- &libcurand114 libcurand>=10.2.5.43,<=10.2.5.120
- matrix:
cuda: "11.2"
packages:
- cuda-version=11.2
- cudatoolkit
# The NVIDIA channel doesn't publish pkgs older than 11.4 for
# these libs, so 11.2 uses 11.4 packages (the oldest
# available).
- *cudanvtx114
- *libcurand_dev114
- *libcurand114
- output_types: conda
matrices:
- matrix:
cuda: "12.0"
arch: x86_64
packages:
- libcufile-dev
- matrix:
cuda: "11.8"
arch: x86_64
packages:
- libcufile=1.4.0.31
- libcufile-dev=1.4.0.31
- matrix:
cuda: "11.5"
arch: x86_64
packages:
- libcufile>=1.1.0.37,<=1.1.1.25
- libcufile-dev>=1.1.0.37,<=1.1.1.25
- matrix:
cuda: "11.4"
arch: x86_64
packages:
- &libcufile_114 libcufile>=1.0.0.82,<=1.0.2.10
- &libcufile_dev114 libcufile-dev>=1.0.0.82,<=1.0.2.10
- matrix:
cuda: "11.2"
arch: x86_64
packages:
# The NVIDIA channel doesn't publish pkgs older than 11.4 for these libs,
# so 11.2 uses 11.4 packages (the oldest available).
- *libcufile_114
- *libcufile_dev114
# Fallback matrix for aarch64, which doesn't support libcufile.
- matrix:
packages:
develop:
common:
- output_types: [conda, requirements]
packages:
- pre-commit
# pre-commit requires identify minimum version 1.0, but clang-format requires textproto support and that was
# added in 2.5.20, so we need to call out the minimum version needed for our plugins
- identify>=2.5.20
- output_types: conda
packages:
- clang==16.0.6
- clang-tools=16.0.6
- &doxygen doxygen=1.9.1 # pre-commit hook needs a specific version.
docs:
common:
- output_types: [conda]
packages:
- dask-cuda==24.2.*
- *doxygen
- make
- myst-nb
- nbsphinx
- numpydoc
- pandoc
# https://github.com/pydata/pydata-sphinx-theme/issues/1539
- pydata-sphinx-theme!=0.14.2
- scipy
- sphinx
- sphinx-autobuild
- sphinx-copybutton
- sphinx-markdown-tables
- sphinxcontrib-websupport
notebooks:
common:
- output_types: [conda, requirements]
packages:
- ipython
- notebook
- scipy
py_version:
specific:
- output_types: conda
matrices:
- matrix:
py: "3.9"
packages:
- python=3.9
- matrix:
py: "3.10"
packages:
- python=3.10
- matrix:
packages:
- python>=3.9,<3.11
run_common:
common:
- output_types: [conda, requirements, pyproject]
packages:
- fsspec>=0.6.0
- *numpy
- pandas>=1.3,<1.6.0dev0
run_cudf:
common:
- output_types: [conda, requirements, pyproject]
packages:
- cachetools
# TODO: Pin to numba<0.58 until #14160 is resolved
- &numba numba>=0.57,<0.58
- nvtx>=0.2.1
- packaging
- rich
- typing_extensions>=4.0.0
- *protobuf
- output_types: conda
packages:
- *rmm_conda
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
# This index is needed for rmm, cubinlinker, ptxcompiler.
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [conda, requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &run_cudf_packages_all_cu12
- cuda-python>=12.0,<13.0a0
- {matrix: {cuda: "12.1"}, packages: *run_cudf_packages_all_cu12}
- {matrix: {cuda: "12.0"}, packages: *run_cudf_packages_all_cu12}
- matrix: {cuda: "11.8"}
packages: &run_cudf_packages_all_cu11
- cuda-python>=11.7.1,<12.0a0
- {matrix: {cuda: "11.5"}, packages: *run_cudf_packages_all_cu11}
- {matrix: {cuda: "11.4"}, packages: *run_cudf_packages_all_cu11}
- {matrix: {cuda: "11.2"}, packages: *run_cudf_packages_all_cu11}
- {matrix: null, packages: *run_cudf_packages_all_cu11}
- output_types: conda
matrices:
- matrix: {cuda: "11.8"}
packages: &run_cudf_packages_conda_cu11
- cubinlinker
- ptxcompiler
- {matrix: {cuda: "11.5"}, packages: *run_cudf_packages_conda_cu11}
- {matrix: {cuda: "11.4"}, packages: *run_cudf_packages_conda_cu11}
- {matrix: {cuda: "11.2"}, packages: *run_cudf_packages_conda_cu11}
- {matrix: null, packages: null}
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &run_cudf_packages_pip_cu12
- rmm-cu12==24.2.*
- {matrix: {cuda: "12.1"}, packages: *run_cudf_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *run_cudf_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &run_cudf_packages_pip_cu11
- rmm-cu11==24.2.*
- cubinlinker-cu11
- ptxcompiler-cu11
- {matrix: {cuda: "11.5"}, packages: *run_cudf_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *run_cudf_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *run_cudf_packages_pip_cu11}
- {matrix: null, packages: null}
- output_types: pyproject
matrices:
- {matrix: null, packages: [cubinlinker, ptxcompiler, *rmm_conda] }
run_dask_cudf:
common:
- output_types: [conda, requirements, pyproject]
packages:
- rapids-dask-dependency==24.2.*
run_custreamz:
common:
- output_types: conda
packages:
- python-confluent-kafka>=1.9.0,<1.10.0a0
- output_types: [conda, requirements, pyproject]
packages:
- streamz
- output_types: [requirements, pyproject]
packages:
- confluent-kafka>=1.9.0,<1.10.0a0
test_cpp:
common:
- output_types: conda
packages:
- *cmake_ver
- *gbench
- *gtest
- *gmock
specific:
- output_types: conda
matrices:
- matrix:
cuda: "12.0"
packages:
- cuda-version=12.0
- cuda-sanitizer-api
- matrix:
cuda: "11.8"
packages:
- cuda-sanitizer-api=11.8.86
- matrix:
packages:
test_java:
common:
- output_types: conda
packages:
- *cmake_ver
- maven
- openjdk=8.*
test_python_common:
common:
- output_types: [conda, requirements, pyproject]
packages:
- pytest
- pytest-cov
- pytest-xdist
test_python_cudf:
common:
- output_types: [conda, requirements, pyproject]
packages:
- cramjam
- fastavro>=0.22.9
- hypothesis
- mimesis>=4.1.0
- pytest-benchmark
- pytest-cases
- python-snappy>=0.6.0
- scipy
- output_types: conda
packages:
- aiobotocore>=2.2.0
- boto3>=1.21.21
- botocore>=1.24.21
- msgpack-python
- moto>=4.0.8
- s3fs>=2022.3.0
- output_types: pyproject
packages:
- msgpack
- &tokenizers tokenizers==0.13.1
- &transformers transformers==4.24.0
- tzdata
specific:
- output_types: conda
matrices:
- matrix:
arch: x86_64
packages:
# Currently, CUDA builds of pytorch do not exist for aarch64. We require
# version <1.12.0 because newer versions use nvidia::cuda-toolkit.
- pytorch<1.12.0
# We only install these on x86_64 to avoid pulling pytorch as a
# dependency of transformers.
- *tokenizers
- *transformers
- matrix:
packages:
test_python_dask_cudf:
common:
- output_types: [conda, requirements, pyproject]
packages:
- dask-cuda==24.2.*
- *numba
depends_on_cudf:
common:
- output_types: conda
packages:
- &cudf_conda cudf==24.2.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
# This index is needed for rmm, cubinlinker, ptxcompiler.
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &cudf_packages_pip_cu12
- cudf-cu12==24.2.*
- {matrix: {cuda: "12.1"}, packages: *cudf_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *cudf_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &cudf_packages_pip_cu11
- cudf-cu11==24.2.*
- {matrix: {cuda: "11.5"}, packages: *cudf_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *cudf_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *cudf_packages_pip_cu11}
- {matrix: null, packages: [*cudf_conda]}
depends_on_cudf_kafka:
common:
- output_types: conda
packages:
- &cudf_kafka_conda cudf_kafka==24.2.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
# This index is needed for rmm, cubinlinker, ptxcompiler.
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &cudf_kafka_packages_pip_cu12
- cudf_kafka-cu12==24.2.*
- {matrix: {cuda: "12.1"}, packages: *cudf_kafka_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *cudf_kafka_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &cudf_kafka_packages_pip_cu11
- cudf_kafka-cu11==24.2.*
- {matrix: {cuda: "11.5"}, packages: *cudf_kafka_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *cudf_kafka_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *cudf_kafka_packages_pip_cu11}
- {matrix: null, packages: [*cudf_kafka_conda]}
depends_on_cupy:
common:
- output_types: conda
packages:
- cupy>=12.0.0
specific:
- output_types: [requirements, pyproject]
matrices:
# All CUDA 12 versions
- matrix: {cuda: "12.2"}
packages: &cupy_packages_cu12
- cupy-cuda12x>=12.0.0
- {matrix: {cuda: "12.1"}, packages: *cupy_packages_cu12}
- {matrix: {cuda: "12.0"}, packages: *cupy_packages_cu12}
# All CUDA 11 versions
- matrix: {cuda: "11.8"}
packages: &cupy_packages_cu11
- cupy-cuda11x>=12.0.0
- {matrix: {cuda: "11.5"}, packages: *cupy_packages_cu11}
- {matrix: {cuda: "11.4"}, packages: *cupy_packages_cu11}
- {matrix: {cuda: "11.2"}, packages: *cupy_packages_cu11}
- {matrix: null, packages: *cupy_packages_cu11}
test_python_pandas_cudf:
common:
- output_types: pyproject
packages:
# dependencies to run pandas tests
# https://github.com/pandas-dev/pandas/blob/main/environment.yml
# TODO: When pandas 2.0 is the minimum version, can just specify pandas[all]
- beautifulsoup4
- blosc
- brotlipy
- boto3
- botocore>=1.24.21
- bottleneck
- fastparquet
- flask
- fsspec
- html5lib
- hypothesis
- gcsfs
- ipython
- jinja2
- lxml
- matplotlib
- moto
- numba
- numexpr
- openpyxl
- odfpy
- py
- psycopg2-binary
- pyarrow
- pymysql
- pyreadstat
- pytest-asyncio
- pytest-reportlog
- python-snappy
- pyxlsb
- s3fs
- scipy
- sqlalchemy
- tables
- pandas-gbq
- tabulate
- xarray
- xlrd
- xlsxwriter
- xlwt
- zstandard
test_python_cudf_pandas:
common:
- output_types: pyproject
packages:
- ipython
- openpyxl
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/CONTRIBUTING.md
|
# Contributing to cuDF
Contributions to cuDF fall into the following categories:
1. To report a bug, request a new feature, or report a problem with documentation, please file an
[issue](https://github.com/rapidsai/cudf/issues/new/choose) describing the problem or new feature
in detail. The RAPIDS team evaluates and triages issues, and schedules them for a release. If you
believe the issue needs priority attention, please comment on the issue to notify the team.
2. To propose and implement a new feature, please file a new feature request
[issue](https://github.com/rapidsai/cudf/issues/new/choose). Describe the intended feature and
discuss the design and implementation with the team and community. Once the team agrees that the
plan looks good, go ahead and implement it, using the [code contributions](#code-contributions)
guide below.
3. To implement a feature or bug fix for an existing issue, please follow the [code
contributions](#code-contributions) guide below. If you need more context on a particular issue,
please ask in a comment.
As contributors and maintainers to this project, you are expected to abide by cuDF's code of
conduct. More information can be found at:
[Contributor Code of Conduct](https://docs.rapids.ai/resources/conduct/).
## Code contributions
### Your first issue
1. Follow the guide at the bottom of this page for
[Setting up your build environment](#setting-up-your-build-environment).
2. Find an issue to work on. The best way is to look for the
[good first issue](https://github.com/rapidsai/cudf/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
or [help wanted](https://github.com/rapidsai/cudf/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22)
labels.
3. Comment on the issue stating that you are going to work on it.
4. Create a fork of the cudf repository and check out a branch with a name that
describes your planned work. For example, `fix-documentation`.
5. Write code to address the issue or implement the feature.
6. Add unit tests and unit benchmarks.
7. [Create your pull request](https://github.com/rapidsai/cudf/compare). To run continuous integration (CI) tests without requesting review, open a draft pull request.
8. Verify that CI passes all [status checks](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/about-status-checks).
Fix if needed.
9. Wait for other developers to review your code and update code as needed.
10. Once reviewed and approved, a RAPIDS developer will merge your pull request.
If you are unsure about anything, don't hesitate to comment on issues and ask for clarification!
### Seasoned developers
Once you have gotten your feet wet and are more comfortable with the code, you can look at the
prioritized issues for our next release in our
[project boards](https://github.com/rapidsai/cudf/projects).
**Note:** Always look at the release board that is
[currently under development](https://docs.rapids.ai/maintainers) for issues to work on. This is
where RAPIDS developers also focus their efforts.
Look at the unassigned issues, and find an issue to which you are comfortable contributing. Start
with _Step 3_ above, commenting on the issue to let others know you are working on it. If you have
any questions related to the implementation of the issue, ask them in the issue instead of the PR.
## Setting up your build environment
The following instructions are for developers and contributors to cuDF development. These
instructions are tested on Ubuntu Linux LTS releases. Use these instructions to build cuDF from
source and contribute to its development. Other operating systems may be compatible, but are not
currently tested.
Building cudf with the provided conda environment is recommended for users who wish to enable all
library features. The following instructions are for building with a conda environment. Dependencies
for a minimal build of libcudf without using conda are also listed below.
### General requirements
Compilers:
* `gcc` version 9.3+
* `nvcc` version 11.5+
* `cmake` version 3.26.4+
CUDA/GPU:
* CUDA 11.5+
* NVIDIA driver 450.80.02+
* Pascal architecture or better
You can obtain CUDA from
[https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads).
### Create the build environment
- Clone the repository:
```bash
CUDF_HOME=$(pwd)/cudf
git clone https://github.com/rapidsai/cudf.git $CUDF_HOME
cd $CUDF_HOME
```
#### Building with a conda environment
**Note:** Using a conda environment is the easiest way to satisfy the library's dependencies.
Instructions for a minimal build environment without conda are included below.
- Create the conda development environment:
```bash
# create the conda environment (assuming in base `cudf` directory)
# note: RAPIDS currently doesn't support `channel_priority: strict`;
# use `channel_priority: flexible` instead
conda env create --name cudf_dev --file conda/environments/all_cuda-118_arch-x86_64.yaml
# activate the environment
conda activate cudf_dev
```
- **Note**: the conda environment files are updated frequently, so the
development environment may also need to be updated if dependency versions or
pinnings are changed.
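A minimal sketch of refreshing an existing environment in place (assuming it was created from the same file as above; the file name must match your CUDA version and architecture):
```bash
# re-sync the cudf_dev environment after dependency versions or pinnings change
conda env update --name cudf_dev --file conda/environments/all_cuda-118_arch-x86_64.yaml --prune
```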
#### Building without a conda environment
- libcudf has the following minimal dependencies (in addition to those listed in the [General
requirements](#general-requirements)). The packages listed below use Ubuntu package names; an example install command follows the list:
- `build-essential`
- `libssl-dev`
- `libz-dev`
- `libpython3-dev` (required if building cudf)
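As a non-authoritative example, on Ubuntu these packages can be installed in one step (package names may differ on other distributions):
```bash
# install the minimal libcudf/cudf build dependencies listed above (Ubuntu package names)
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev libz-dev libpython3-dev
```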
### Build cuDF from source
- A `build.sh` script is provided in `$CUDF_HOME`. Running the script with no additional arguments
will install the `libcudf`, `cudf` and `dask_cudf` libraries. By default, the libraries are
installed to the `$CONDA_PREFIX` directory. To install into a different location, set the location
in `$INSTALL_PREFIX`. Finally, note that the script depends on the `nvcc` executable being on your
path, or defined in `$CUDACXX`; a short sketch of setting `$CUDACXX` appears after the commands below.
```bash
cd $CUDF_HOME
# Choose one of the following commands, depending on whether
# you want to build and install the libcudf C++ library only,
# or include the cudf and/or dask_cudf Python libraries:
./build.sh # libcudf, cudf and dask_cudf
./build.sh libcudf # libcudf only
./build.sh libcudf cudf # libcudf and cudf only
```
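A minimal sketch of setting `$CUDACXX` when `nvcc` is not on your path (the toolkit location shown is an assumption; adjust it to your CUDA install):
```bash
# point build.sh at a specific nvcc if it is not already on your PATH
export CUDACXX=/usr/local/cuda/bin/nvcc
./build.sh libcudf
```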
- Other libraries like `cudf-kafka` and `custreamz` can be installed with this script. For the
complete list of libraries as well as details about the script usage, run the `help` command:
```bash
./build.sh --help
```
### Build, install and test cuDF libraries for contributors
The general workflow is provided below. Please also see the last section about
[code formatting](#code-formatting).
#### `libcudf` (C++)
- If you're only interested in building the library (and not the unit tests):
```bash
cd $CUDF_HOME
./build.sh libcudf
```
- If, in addition, you want to build tests:
```bash
./build.sh libcudf tests
```
- To run the tests:
```bash
make test
```
#### `cudf` (Python)
- First, build the `libcudf` C++ library following the steps above
- To build and install the `cudf` Python package in editable/develop mode:
```bash
cd $CUDF_HOME/python/cudf
python setup.py build_ext --inplace
python setup.py develop
```
- To run `cudf` tests:
```bash
cd $CUDF_HOME/python
pytest -v cudf/cudf/tests
```
#### `dask-cudf` (Python)
- First, build the `libcudf` C++ and `cudf` Python libraries following the steps above
- To install the `dask-cudf` Python package in editable/develop mode:
```bash
cd $CUDF_HOME/python/dask_cudf
python setup.py build_ext --inplace
python setup.py develop
```
- To run `dask_cudf` tests:
```bash
cd $CUDF_HOME/python
pytest -v dask_cudf
```
#### `libcudf_kafka` (C++)
- If you're only interested in building the library (and not the unit tests):
```bash
cd $CUDF_HOME
./build.sh libcudf_kafka
```
- If, in addition, you want to build tests:
```bash
./build.sh libcudf_kafka tests
```
- To run the tests:
```bash
make test
```
#### `cudf-kafka` (Python)
- First, build the `libcudf` and `libcudf_kafka` libraries following the steps above
- To install the `cudf-kafka` Python package in editable/develop mode:
```bash
cd $CUDF_HOME/python/cudf_kafka
python setup.py build_ext --inplace
python setup.py develop
```
#### `custreamz` (Python)
- First, build `libcudf`, `libcudf_kafka`, and `cudf_kafka` following the steps above
- To install the `custreamz` Python package in editable/develop mode:
```bash
cd $CUDF_HOME/python/custreamz
python setup.py build_ext --inplace
python setup.py develop
```
- To run `custreamz` tests:
```bash
cd $CUDF_HOME/python
pytest -v custreamz
```
#### `cudf` (Java)
- First, build the `libcudf` C++ library following the steps above
- Then, refer to the [Java README](java/README.md)
Done! You are ready to develop for the cuDF project. Please review the project's
[code formatting guidelines](#code-formatting).
## Debugging cuDF
### Building in debug mode from source
Follow the instructions to [build from source](#build-cudf-from-source) and add `-g` to the
`./build.sh` command.
For example:
```bash
./build.sh libcudf -g
```
This builds `libcudf` in debug mode which enables some `assert` safety checks and includes symbols
in the library for debugging.
All other steps for installing `libcudf` into your environment are the same.
### Debugging with `cuda-gdb` and `cuda-memcheck`
When you have a debug build of `libcudf` installed, debugging with the `cuda-gdb` and
`cuda-memcheck` is easy.
If you are debugging a Python script, run the following:
```bash
cuda-gdb -ex r --args python <program_name>.py <program_arguments>
```
```bash
cuda-memcheck python <program_name>.py <program_arguments>
```
### Device debug symbols
The device debug symbols are not automatically added with the cmake `Debug` build type because it
causes a runtime delay of several minutes when loading the libcudf.so library.
Therefore, it is recommended to add device debug symbols only to specific files by setting the `-G`
compile option locally in your `cpp/CMakeLists.txt` for that file. Here is an example of adding the
`-G` option to the compile command for `src/copying/copy.cu` source file:
```cmake
set_source_files_properties(src/copying/copy.cu PROPERTIES COMPILE_OPTIONS "-G")
```
This will add the device debug symbols for this object file in `libcudf.so`. You can then use
`cuda-gdb` to debug into the kernels in that source file.
## Code Formatting
### Using pre-commit hooks
cuDF uses [pre-commit](https://pre-commit.com/) to execute all code linters and formatters. These
tools ensure a consistent code format throughout the project. Using pre-commit ensures that linter
versions and options are aligned for all developers. Additionally, there is a CI check in place to
enforce that committed code follows our standards.
To use `pre-commit`, install via `conda` or `pip`:
```bash
conda install -c conda-forge pre-commit
```
```bash
pip install pre-commit
```
Then run pre-commit hooks before committing code:
```bash
pre-commit run
```
By default, pre-commit runs on staged files (only changes and additions that will be committed).
To run pre-commit checks on all files, execute:
```bash
pre-commit run --all-files
```
Optionally, you may set up the pre-commit hooks to run automatically when you make a git commit. This can be done by running:
```bash
pre-commit install
```
Now code linters and formatters will be run each time you commit changes.
You can skip these checks with `git commit --no-verify` or with the short version `git commit -n`.
### Summary of pre-commit hooks
The following section describes some of the core pre-commit hooks used by the repository.
See `.pre-commit-config.yaml` for a full list.
C++/CUDA is formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).
[`doxygen`](https://doxygen.nl/) is used as a documentation generator and also as a documentation linter.
In order to run doxygen as a linter on C++/CUDA code, run
```bash
./ci/checks/doxygen.sh
```
Python code runs several linters including [Black](https://black.readthedocs.io/en/stable/),
[isort](https://pycqa.github.io/isort/), and [flake8](https://flake8.pycqa.org/en/latest/).
cuDF also uses [codespell](https://github.com/codespell-project/codespell) to find spelling
mistakes, and this check is run as a pre-commit hook. To apply the suggested spelling fixes,
you can run `codespell -i 3 -w .` from the repository root directory.
This will bring up an interactive prompt to select which spelling fixes to apply.
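For example, the same command in copy-pasteable form (`-i 3` enables the interactive prompt and `-w` writes the accepted fixes in place):
```bash
# run from the repository root
codespell -i 3 -w .
```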
## Developer Guidelines
The [C++ Developer Guide](cpp/doxygen/developer_guide/DEVELOPER_GUIDE.md) includes details on contributing to libcudf C++ code.
The [Python Developer Guide](https://docs.rapids.ai/api/cudf/stable/developer_guide/index.html) includes details on contributing to cuDF Python code.
## Attribution
Portions adopted from https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md
Portions adopted from https://github.com/dask/dask/blob/master/docs/source/develop.rst
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/VERSION
|
24.02.00
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/.clang-format
|
---
# Refer to the following link for the explanation of each params:
# http://releases.llvm.org/8.0.0/tools/clang/docs/ClangFormatStyleOptions.html
Language: Cpp
# BasedOnStyle: Google
AccessModifierOffset: -1
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: true
AlignConsecutiveBitFields: true
AlignConsecutiveDeclarations: false
AlignConsecutiveMacros: true
AlignEscapedNewlines: Left
AlignOperands: true
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllConstructorInitializersOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortBlocksOnASingleLine: true
AllowShortCaseLabelsOnASingleLine: true
AllowShortEnumsOnASingleLine: true
AllowShortFunctionsOnASingleLine: All
AllowShortIfStatementsOnASingleLine: true
AllowShortLambdasOnASingleLine: true
AllowShortLoopsOnASingleLine: false
# This is deprecated
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
BinPackArguments: false
BinPackParameters: false
BraceWrapping:
AfterClass: false
AfterControlStatement: false
AfterEnum: false
AfterFunction: false
AfterNamespace: false
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
AfterExternBlock: false
BeforeCatch: false
BeforeElse: false
IndentBraces: false
# disabling the below splits, else, they'll just add to the vertical length of source files!
SplitEmptyFunction: false
SplitEmptyRecord: false
SplitEmptyNamespace: false
BreakAfterJavaFieldAnnotations: false
BreakBeforeBinaryOperators: None
BreakBeforeBraces: WebKit
BreakBeforeInheritanceComma: false
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakInheritanceList: BeforeColon
BreakStringLiterals: true
ColumnLimit: 100
CommentPragmas: '^ IWYU pragma:'
CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: true
# Kept the below 2 to be the same as `IndentWidth` to keep everything uniform
ConstructorInitializerIndentWidth: 2
ContinuationIndentWidth: 2
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
ForEachMacros:
- foreach
- Q_FOREACH
- BOOST_FOREACH
IncludeBlocks: Preserve
IncludeIsMainRegex: '([-_](test|unittest))?$'
IndentCaseLabels: true
IndentPPDirectives: None
IndentWidth: 2
IndentWrappedFunctionNames: false
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Never
ObjCBlockIndentWidth: 2
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PointerAlignment: Left
RawStringFormats:
- Language: Cpp
Delimiters:
- cc
- CC
- cpp
- Cpp
- CPP
- 'c++'
- 'C++'
CanonicalDelimiter: ''
- Language: TextProto
Delimiters:
- pb
- PB
- proto
- PROTO
EnclosingFunctions:
- EqualsProto
- EquivToProto
- PARSE_PARTIAL_TEXT_PROTO
- PARSE_TEST_PROTO
- PARSE_TEXT_PROTO
- ParseTextOrDie
- ParseTextProtoOrDie
CanonicalDelimiter: ''
BasedOnStyle: google
# Enabling comment reflow causes doxygen comments to be messed up in their formats!
ReflowComments: true
SortIncludes: true
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCpp11BracedList: false
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceBeforeSquareBrackets: false
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: false
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: c++17
StatementMacros:
- Q_UNUSED
- QT_REQUIRE_VERSION
# Be consistent with indent-width, even for people who use tab for indentation!
TabWidth: 2
UseTab: Never
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cudf/print_env.sh
|
#!/usr/bin/env bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# Reports relevant environment information useful for diagnosing and
# debugging cuDF issues.
# Usage:
# "./print_env.sh" - prints to stdout
# "./print_env.sh > env.txt" - prints to file "env.txt"
print_env() {
echo "**git***"
if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" == "true" ]; then
git log --decorate -n 1
echo "**git submodules***"
git submodule status --recursive
else
echo "Not inside a git repository"
fi
echo
echo "***OS Information***"
cat /etc/*-release
uname -a
echo
echo "***GPU Information***"
nvidia-smi
echo
echo "***CPU***"
lscpu
echo
echo "***CMake***"
which cmake && cmake --version
echo
echo "***g++***"
which g++ && g++ --version
echo
echo "***nvcc***"
which nvcc && nvcc --version
echo
echo "***Python***"
which python && python -c "import sys; print('Python {0}.{1}.{2}'.format(sys.version_info[0], sys.version_info[1], sys.version_info[2]))"
echo
echo "***Environment Variables***"
printf '%-32s: %s\n' PATH $PATH
printf '%-32s: %s\n' LD_LIBRARY_PATH $LD_LIBRARY_PATH
printf '%-32s: %s\n' NUMBAPRO_NVVM $NUMBAPRO_NVVM
printf '%-32s: %s\n' NUMBAPRO_LIBDEVICE $NUMBAPRO_LIBDEVICE
printf '%-32s: %s\n' CONDA_PREFIX $CONDA_PREFIX
printf '%-32s: %s\n' PYTHON_PATH $PYTHON_PATH
echo
# Print conda packages if conda exists
if type "conda" &> /dev/null; then
echo '***conda packages***'
which conda && conda list
echo
# Print pip packages if pip exists
elif type "pip" &> /dev/null; then
echo "conda not found"
echo "***pip packages***"
which pip && pip list
echo
else
echo "conda not found"
echo "pip not found"
fi
}
echo "<details><summary>Click here to see environment details</summary><pre>"
echo " "
print_env | while read -r line; do
echo " $line"
done
echo "</pre></details>"
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/dask_cudf/pyproject.toml
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
[build-system]
build-backend = "setuptools.build_meta"
requires = [
"setuptools",
"wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project]
name = "dask_cudf"
dynamic = ["version", "entry-points"]
description = "Utilities for Dask and cuDF interactions"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
{ name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
"cudf==24.2.*",
"cupy-cuda11x>=12.0.0",
"fsspec>=0.6.0",
"numpy>=1.21,<1.25",
"pandas>=1.3,<1.6.0dev0",
"rapids-dask-dependency==24.2.*",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
"Intended Audience :: Developers",
"Topic :: Database",
"Topic :: Scientific/Engineering",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
]
[project.optional-dependencies]
test = [
"dask-cuda==24.2.*",
"numba>=0.57,<0.58",
"pytest",
"pytest-cov",
"pytest-xdist",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project.urls]
Homepage = "https://github.com/rapidsai/cudf"
[tool.setuptools]
license-files = ["LICENSE"]
[tool.setuptools.dynamic]
version = {file = "dask_cudf/VERSION"}
[tool.isort]
line_length = 79
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
combine_as_imports = true
order_by_type = true
known_dask = [
"dask",
"distributed",
"dask_cuda",
]
known_rapids = [
"rmm",
"cudf",
]
known_first_party = [
"dask_cudf",
]
default_section = "THIRDPARTY"
sections = [
"FUTURE",
"STDLIB",
"THIRDPARTY",
"DASK",
"RAPIDS",
"FIRSTPARTY",
"LOCALFOLDER",
]
skip = [
"thirdparty",
".eggs",
".git",
".hg",
".mypy_cache",
".tox",
".venv",
"_build",
"buck-out",
"build",
"dist",
]
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/dask_cudf/README.md
|
# <div align="left"><img src="img/rapids_logo.png" width="90px"/> cuDF - GPU DataFrames</div>
## π’ cuDF can now be used as a no-code-change accelerator for pandas! To learn more, see [here](https://rapids.ai/cudf-pandas/)!
cuDF is a GPU DataFrame library for loading, joining, aggregating,
filtering, and otherwise manipulating data. cuDF leverages
[libcudf](https://docs.rapids.ai/api/libcudf/stable/), a
blazing-fast C++/CUDA dataframe library and the [Apache
Arrow](https://arrow.apache.org/) columnar format to provide a
GPU-accelerated pandas API.
You can import `cudf` directly and use it like `pandas`:
```python
import cudf
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = cudf.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
Or, you can use cuDF as a no-code-change accelerator for pandas, using
[`cudf.pandas`](https://docs.rapids.ai/api/cudf/stable/cudf_pandas).
`cudf.pandas` supports 100% of the pandas API, utilizing cuDF for
supported operations and falling back to pandas when needed:
```python
%load_ext cudf.pandas # pandas operations now use the GPU!
import pandas as pd
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = pd.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
## Resources
- [Try cudf.pandas now](https://nvda.ws/rapids-cudf): Explore `cudf.pandas` on a free GPU enabled instance on Google Colab!
- [Install](https://docs.rapids.ai/install): Instructions for installing cuDF and other [RAPIDS](https://rapids.ai) libraries.
- [cudf (Python) documentation](https://docs.rapids.ai/api/cudf/stable/)
- [libcudf (C++/CUDA) documentation](https://docs.rapids.ai/api/libcudf/stable/)
- [RAPIDS Community](https://rapids.ai/learn-more/#get-involved): Get help, contribute, and collaborate.
## Installation
### CUDA/GPU requirements
* CUDA 11.2+
* NVIDIA driver 450.80.02+
* Pascal architecture or better (Compute Capability >=6.0)
### Conda
cuDF can be installed with conda (via [miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cudf=24.02 python=3.10 cuda-version=11.8
```
We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.
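A hedged example of installing a nightly build, substituting the `rapidsai-nightly` channel for `rapidsai` (no version pin is shown because nightly versions float with the development branch):
```bash
conda install -c rapidsai-nightly -c conda-forge -c nvidia \
    cudf python=3.10 cuda-version=11.8
```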
Note: cuDF is supported only on Linux, and with Python versions 3.9 and later.
See the [RAPIDS installation guide](https://docs.rapids.ai/install) for more OS and version info.
## Build/Install from Source
See build [instructions](CONTRIBUTING.md#setting-up-your-build-environment).
## Contributing
Please see our [guide for contributing to cuDF](CONTRIBUTING.md).
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/dask_cudf/setup.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
from setuptools import find_packages, setup
packages = find_packages(exclude=["tests", "tests.*"])
setup(
include_package_data=True,
packages=packages,
package_data={key: ["VERSION"] for key in packages},
entry_points={
"dask.dataframe.backends": [
"cudf = dask_cudf.backends:CudfBackendEntrypoint",
]
},
zip_safe=False,
)
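# Note: the "dask.dataframe.backends" entry point above registers dask_cudf as
# dask's "cudf" dataframe backend. A hedged usage sketch (assumes dask and
# dask_cudf are installed; not part of this setup script):
#
#   import dask
#   import dask.dataframe as dd
#   dask.config.set({"dataframe.backend": "cudf"})
#   ddf = dd.from_dict({"a": [1, 2, 3]}, npartitions=1)  # collection backed by cudf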
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/dask_cudf/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/dask_cudf/.coveragerc
|
# Configuration file for Python coverage tests
[run]
source = dask_cudf
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/accessors.py
|
# Copyright (c) 2021, NVIDIA CORPORATION.
class StructMethods:
def __init__(self, d_series):
self.d_series = d_series
def field(self, key):
"""
Extract children of the specified struct column
in the Series
Parameters
----------
key: int or str
index/position or field name of the respective
struct column
Returns
-------
Series
Examples
--------
>>> s = cudf.Series([{'a': 1, 'b': 2}, {'a': 3, 'b': 4}])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds.struct.field(0).compute()
0 1
1 3
dtype: int64
>>> ds.struct.field('a').compute()
0 1
1 3
dtype: int64
"""
typ = self.d_series._meta.struct.field(key).dtype
return self.d_series.map_partitions(
lambda s: s.struct.field(key),
meta=self.d_series._meta._constructor([], dtype=typ),
)
def explode(self):
"""
Creates a dataframe view of the struct column, one column per field.
Returns
-------
DataFrame
Examples
--------
>>> import cudf, dask_cudf
>>> ds = dask_cudf.from_cudf(cudf.Series(
... [{'a': 42, 'b': 'str1', 'c': [-1]},
... {'a': 0, 'b': 'str2', 'c': [400, 500]},
... {'a': 7, 'b': '', 'c': []}]), npartitions=2)
>>> ds.struct.explode().compute()
a b c
0 42 str1 [-1]
1 0 str2 [400, 500]
2 7 []
"""
return self.d_series.map_partitions(
lambda s: s.struct.explode(),
meta=self.d_series._meta.struct.explode(),
)
class ListMethods:
def __init__(self, d_series):
self.d_series = d_series
def len(self):
"""
Computes the length of each element in the Series/Index.
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], None, [4, 5]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds
0 [1, 2, 3]
1 None
2 [4, 5]
dtype: list
>>> ds.list.len().compute()
0 3
1 <NA>
2 2
dtype: int32
"""
return self.d_series.map_partitions(
lambda s: s.list.len(), meta=self.d_series._meta
)
def contains(self, search_key):
"""
Creates a column of bool values indicating whether the specified scalar
is an element of each row of a list column.
Parameters
----------
search_key : scalar
element being searched for in each row of the list column
Returns
-------
Column
Examples
--------
>>> s = cudf.Series([[1, 2, 3], [3, 4, 5], [4, 5, 6]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds.list.contains(4).compute()
Series([False, True, True])
dtype: bool
"""
return self.d_series.map_partitions(
lambda s: s.list.contains(search_key), meta=self.d_series._meta
)
def get(self, index):
"""
Extract element at the given index from each component.
Extract element from lists, tuples, or strings in
each element in the Series/Index.
Parameters
----------
index : int
Returns
-------
Series or Index
Examples
--------
>>> s = cudf.Series([[1, 2, 3], [3, 4, 5], [4, 5, 6]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds.list.get(-1).compute()
0 3
1 5
2 6
dtype: int64
"""
return self.d_series.map_partitions(
lambda s: s.list.get(index), meta=self.d_series._meta
)
@property
def leaves(self):
"""
From a Series of (possibly nested) lists, obtain the elements from
the innermost lists as a flat Series (one value per row).
Returns
-------
Series
Examples
--------
>>> s = cudf.Series([[[1, None], [3, 4]], None, [[5, 6]]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds.list.leaves.compute()
0 1
1 <NA>
2 3
3 4
4 5
5 6
dtype: int64
"""
return self.d_series.map_partitions(
lambda s: s.list.leaves, meta=self.d_series._meta
)
def take(self, lists_indices):
"""
Collect list elements based on given indices.
Parameters
----------
lists_indices: List type arrays
Specifies what to collect from each row
Returns
-------
ListColumn
Examples
--------
>>> s = cudf.Series([[1, 2, 3], None, [4, 5]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds
0 [1, 2, 3]
1 None
2 [4, 5]
dtype: list
>>> ds.list.take([[0, 1], [], []]).compute()
0 [1, 2]
1 None
2 []
dtype: list
"""
return self.d_series.map_partitions(
lambda s: s.list.take(lists_indices), meta=self.d_series._meta
)
def unique(self):
"""
Returns the unique elements of each list in the column; the order of
the unique elements is not guaranteed.
Returns
-------
ListColumn
Examples
--------
>>> s = cudf.Series([[1, 1, 2, None, None], None, [4, 4], []])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds
0 [1.0, 1.0, 2.0, nan, nan]
1 None
2 [4.0, 4.0]
3 []
dtype: list
>>> ds.list.unique().compute() # Order of elements not guaranteed
0 [1.0, 2.0, nan]
1 None
2 [4.0]
3 []
dtype: list
"""
return self.d_series.map_partitions(
lambda s: s.list.unique(), meta=self.d_series._meta
)
def sort_values(
self,
ascending=True,
inplace=False,
kind="quicksort",
na_position="last",
ignore_index=False,
):
"""
Sort each list by the values.
Sort the lists in ascending or descending order by some criterion.
Parameters
----------
ascending : bool, default True
If True, sort values in ascending order, otherwise descending.
na_position : {'first', 'last'}, default 'last'
'first' puts nulls at the beginning, 'last' puts nulls at the end.
ignore_index : bool, default False
If True, the resulting axis will be labeled 0, 1, ..., n - 1.
Returns
-------
ListColumn with each list sorted
Notes
-----
Difference from pandas:
* Not supporting: `inplace`, `kind`
Examples
--------
>>> s = cudf.Series([[4, 2, None, 9], [8, 8, 2], [2, 1]])
>>> ds = dask_cudf.from_cudf(s, 2)
>>> ds.list.sort_values(ascending=True, na_position="last").compute()
0 [2.0, 4.0, 9.0, nan]
1 [2.0, 8.0, 8.0]
2 [1.0, 2.0]
dtype: list
"""
return self.d_series.map_partitions(
lambda s: s.list.sort_values(
ascending, inplace, kind, na_position, ignore_index
),
meta=self.d_series._meta,
)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/groupby.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from functools import wraps
from typing import Set
import numpy as np
import pandas as pd
from dask.dataframe.core import (
DataFrame as DaskDataFrame,
aca,
split_out_on_cols,
)
from dask.dataframe.groupby import DataFrameGroupBy, SeriesGroupBy
from dask.utils import funcname
import cudf
from cudf.utils.nvtx_annotation import _dask_cudf_nvtx_annotate
# aggregations that are dask-cudf optimized
OPTIMIZED_AGGS = (
"count",
"mean",
"std",
"var",
"sum",
"min",
"max",
"collect",
"first",
"last",
)
def _check_groupby_optimized(func):
"""
Decorator for dask-cudf's groupby methods that uses the dask-cudf
optimized method if the groupby object is supported, and otherwise
falls back to the upstream Dask method
"""
@wraps(func)
def wrapper(*args, **kwargs):
gb = args[0]
if _groupby_optimized(gb):
return func(*args, **kwargs)
# note that we use upstream Dask's default kwargs for this call if
# none are specified; this shouldn't be an issue as those defaults are
# consistent with dask-cudf
return getattr(super(type(gb), gb), func.__name__)(*args[1:], **kwargs)
return wrapper
class CudfDataFrameGroupBy(DataFrameGroupBy):
@_dask_cudf_nvtx_annotate
def __init__(self, *args, sort=None, **kwargs):
self.sep = kwargs.pop("sep", "___")
self.as_index = kwargs.pop("as_index", True)
super().__init__(*args, sort=sort, **kwargs)
@_dask_cudf_nvtx_annotate
def __getitem__(self, key):
if isinstance(key, list):
g = CudfDataFrameGroupBy(
self.obj,
by=self.by,
slice=key,
sort=self.sort,
**self.dropna,
)
else:
g = CudfSeriesGroupBy(
self.obj,
by=self.by,
slice=key,
sort=self.sort,
**self.dropna,
)
g._meta = g._meta[key]
return g
@_dask_cudf_nvtx_annotate
def _make_groupby_method_aggs(self, agg_name):
"""Create aggs dictionary for aggregation methods"""
if isinstance(self.by, list):
return {c: agg_name for c in self.obj.columns if c not in self.by}
return {c: agg_name for c in self.obj.columns if c != self.by}
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def count(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("count"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def mean(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("mean"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def std(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("std"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def var(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("var"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def sum(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("sum"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def min(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("min"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def max(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("max"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def collect(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("collect"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def first(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("first"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def last(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
self._make_groupby_method_aggs("last"),
split_every,
split_out,
)
@_dask_cudf_nvtx_annotate
def aggregate(self, arg, split_every=None, split_out=1, shuffle=None):
if arg == "size":
return self.size()
arg = _redirect_aggs(arg)
if _groupby_optimized(self) and _aggs_optimized(arg, OPTIMIZED_AGGS):
if isinstance(self._meta.grouping.keys, cudf.MultiIndex):
keys = self._meta.grouping.keys.names
else:
keys = self._meta.grouping.keys.name
return groupby_agg(
self.obj,
keys,
arg,
split_every=split_every,
split_out=split_out,
sep=self.sep,
sort=self.sort,
as_index=self.as_index,
shuffle=shuffle,
**self.dropna,
)
return super().aggregate(
arg,
split_every=split_every,
split_out=split_out,
shuffle=shuffle,
)
class CudfSeriesGroupBy(SeriesGroupBy):
@_dask_cudf_nvtx_annotate
def __init__(self, *args, sort=None, **kwargs):
self.sep = kwargs.pop("sep", "___")
self.as_index = kwargs.pop("as_index", True)
super().__init__(*args, sort=sort, **kwargs)
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def count(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "count"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def mean(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "mean"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def std(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "std"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def var(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "var"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def sum(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "sum"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def min(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "min"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def max(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "max"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def collect(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "collect"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def first(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "first"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
@_check_groupby_optimized
def last(self, split_every=None, split_out=1):
return _make_groupby_agg_call(
self,
{self._slice: "last"},
split_every,
split_out,
)[self._slice]
@_dask_cudf_nvtx_annotate
def aggregate(self, arg, split_every=None, split_out=1, shuffle=None):
if arg == "size":
return self.size()
arg = _redirect_aggs(arg)
if not isinstance(arg, dict):
arg = {self._slice: arg}
if _groupby_optimized(self) and _aggs_optimized(arg, OPTIMIZED_AGGS):
return _make_groupby_agg_call(
self, arg, split_every, split_out, shuffle
)[self._slice]
return super().aggregate(
arg,
split_every=split_every,
split_out=split_out,
shuffle=shuffle,
)
def _shuffle_aggregate(
ddf,
gb_cols,
chunk,
chunk_kwargs,
aggregate,
aggregate_kwargs,
split_every,
split_out,
token=None,
sort=None,
shuffle=None,
):
# Shuffle-based groupby aggregation
# NOTE: This function is the dask_cudf version of
# dask.dataframe.groupby._shuffle_aggregate
# Step 1 - Chunkwise groupby operation
chunk_name = f"{token or funcname(chunk)}-chunk"
chunked = ddf.map_partitions(
chunk,
meta=chunk(ddf._meta, **chunk_kwargs),
token=chunk_name,
**chunk_kwargs,
)
# Step 2 - Perform global sort or shuffle
shuffle_npartitions = max(
chunked.npartitions // split_every,
split_out,
)
if sort and split_out > 1:
# Sort-based code path
result = (
chunked.repartition(npartitions=shuffle_npartitions)
.sort_values(
gb_cols,
ignore_index=True,
shuffle=shuffle,
)
.map_partitions(
aggregate,
meta=aggregate(chunked._meta, **aggregate_kwargs),
**aggregate_kwargs,
)
)
else:
# Hash-based code path
result = chunked.shuffle(
gb_cols,
npartitions=shuffle_npartitions,
ignore_index=True,
shuffle=shuffle,
).map_partitions(
aggregate,
meta=aggregate(chunked._meta, **aggregate_kwargs),
**aggregate_kwargs,
)
# Step 3 - Repartition and return
if split_out < result.npartitions:
return result.repartition(npartitions=split_out)
return result
@_dask_cudf_nvtx_annotate
def groupby_agg(
ddf,
gb_cols,
aggs_in,
split_every=None,
split_out=None,
dropna=True,
sep="___",
sort=False,
as_index=True,
shuffle=None,
):
"""Optimized groupby aggregation for Dask-CuDF.
Parameters
----------
ddf : DataFrame
DataFrame object to perform grouping on.
gb_cols : str or list[str]
Column names to group by.
aggs_in : str, list, or dict
Aggregations to perform.
split_every : int (optional)
How to group intermediate aggregates.
dropna : bool
Drop grouping key values corresponding to NA values.
as_index : bool
Currently ignored.
sort : bool
Sort the group keys; better performance is obtained when
not sorting.
shuffle : str (optional)
Control how shuffling of the DataFrame is performed.
sep : str
Internal usage.
Notes
-----
This "optimized" approach is more performant than the algorithm
implemented in :meth:`DataFrame.apply` because it allows the cuDF
backend to perform multiple aggregations at once.
This aggregation algorithm only supports the following options:
* "collect"
* "count"
* "first"
* "last"
* "max"
* "mean"
* "min"
* "std"
* "sum"
* "var"
See Also
--------
DataFrame.groupby : generic groupby of a DataFrame
dask.dataframe.apply_concat_apply : for more description of the
split_every argument.
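Examples
--------
A minimal, illustrative sketch (hypothetical column names; assumes a
cuDF-backed Dask collection ``ddf`` with columns ``"key"`` and ``"x"``):

>>> out = groupby_agg(ddf, "key", {"x": ["mean", "max"]})
>>> result = out.compute()

The same code path is normally reached through
``ddf.groupby("key").agg({"x": ["mean", "max"]})``.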
"""
# Assert that aggregations are supported
aggs = _redirect_aggs(aggs_in)
if not _aggs_optimized(aggs, OPTIMIZED_AGGS):
raise ValueError(
f"Supported aggs include {OPTIMIZED_AGGS} for groupby_agg API. "
f"Aggregations must be specified with dict or list syntax."
)
# If split_every is False, we use an all-to-one reduction
if split_every is False:
split_every = max(ddf.npartitions, 2)
# Deal with default split_out and split_every params
split_every = split_every or 8
split_out = split_out or 1
# Standardize `gb_cols`, `columns`, and `aggs`
if isinstance(gb_cols, str):
gb_cols = [gb_cols]
columns = [c for c in ddf.columns if c not in gb_cols]
if not isinstance(aggs, dict):
aggs = {col: aggs for col in columns}
# Determine whether our output will have a MultiIndex; this will be the
# case if any value in the `aggs` dict is not a string (i.e. multiple/named
# aggregations per column)
str_cols_out = True
aggs_renames = {}
for col in aggs:
if isinstance(aggs[col], str) or callable(aggs[col]):
aggs[col] = [aggs[col]]
elif isinstance(aggs[col], dict):
str_cols_out = False
col_aggs = []
for k, v in aggs[col].items():
aggs_renames[col, v] = k
col_aggs.append(v)
aggs[col] = col_aggs
else:
str_cols_out = False
if col in gb_cols:
columns.append(col)
# Construct meta
_aggs = aggs.copy()
if str_cols_out:
# Metadata should use `str` for dict values if that is
# what the user originally specified (column names will
# be str, rather than tuples).
for col in aggs:
_aggs[col] = _aggs[col][0]
_meta = ddf._meta.groupby(gb_cols, as_index=as_index).agg(_aggs)
if aggs_renames:
col_array = []
agg_array = []
for col, agg in _meta.columns:
col_array.append(col)
agg_array.append(aggs_renames.get((col, agg), agg))
_meta.columns = pd.MultiIndex.from_arrays([col_array, agg_array])
chunk = _groupby_partition_agg
chunk_kwargs = {
"gb_cols": gb_cols,
"aggs": aggs,
"columns": columns,
"dropna": dropna,
"sort": sort,
"sep": sep,
}
combine = _tree_node_agg
combine_kwargs = {
"gb_cols": gb_cols,
"dropna": dropna,
"sort": sort,
"sep": sep,
}
aggregate = _finalize_gb_agg
aggregate_kwargs = {
"gb_cols": gb_cols,
"aggs": aggs,
"columns": columns,
"final_columns": _meta.columns,
"as_index": as_index,
"dropna": dropna,
"sort": sort,
"sep": sep,
"str_cols_out": str_cols_out,
"aggs_renames": aggs_renames,
}
# Use shuffle=True for split_out>1
if sort and split_out > 1 and shuffle is None:
shuffle = "tasks"
# Check if we are using the shuffle-based algorithm
if shuffle:
# Shuffle-based aggregation
return _shuffle_aggregate(
ddf,
gb_cols,
chunk,
chunk_kwargs,
aggregate,
aggregate_kwargs,
split_every,
split_out,
token="cudf-aggregate",
sort=sort,
shuffle=shuffle if isinstance(shuffle, str) else None,
)
# Deal with sort/shuffle defaults
if split_out > 1 and sort:
raise ValueError(
"dask-cudf's groupby algorithm does not yet support "
"`sort=True` when `split_out>1`, unless a shuffle-based "
"algorithm is used. Please use `split_out=1`, group "
"with `sort=False`, or set `shuffle=True`."
)
# Determine required columns to enable column projection
required_columns = list(
set(gb_cols).union(aggs.keys()).intersection(ddf.columns)
)
return aca(
[ddf[required_columns]],
chunk=chunk,
chunk_kwargs=chunk_kwargs,
combine=combine,
combine_kwargs=combine_kwargs,
aggregate=aggregate,
aggregate_kwargs=aggregate_kwargs,
token="cudf-aggregate",
split_every=split_every,
split_out=split_out,
split_out_setup=split_out_on_cols,
split_out_setup_kwargs={"cols": gb_cols},
sort=sort,
ignore_index=True,
)
@_dask_cudf_nvtx_annotate
def _make_groupby_agg_call(gb, aggs, split_every, split_out, shuffle=None):
"""Helper method to consolidate the common `groupby_agg` call for all
aggregations in one place
"""
return groupby_agg(
gb.obj,
gb.by,
aggs,
split_every=split_every,
split_out=split_out,
sep=gb.sep,
sort=gb.sort,
as_index=gb.as_index,
shuffle=shuffle,
**gb.dropna,
)
@_dask_cudf_nvtx_annotate
def _redirect_aggs(arg):
"""Redirect aggregations to their corresponding name in cuDF"""
redirects = {
sum: "sum",
max: "max",
min: "min",
list: "collect",
"list": "collect",
}
if isinstance(arg, dict):
new_arg = dict()
for col in arg:
if isinstance(arg[col], list):
new_arg[col] = [redirects.get(agg, agg) for agg in arg[col]]
elif isinstance(arg[col], dict):
new_arg[col] = {
k: redirects.get(v, v) for k, v in arg[col].items()
}
else:
new_arg[col] = redirects.get(arg[col], arg[col])
return new_arg
if isinstance(arg, list):
return [redirects.get(agg, agg) for agg in arg]
return redirects.get(arg, arg)
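# Example of the redirection performed above (illustrative, hypothetical
# column name "x"): the builtin ``list`` and the string ``"list"`` both map
# to cuDF's ``"collect"`` aggregation, so
#   _redirect_aggs({"x": [list, "sum"]}) == {"x": ["collect", "sum"]}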
@_dask_cudf_nvtx_annotate
def _aggs_optimized(arg, supported: set):
"""Check that aggregations in `arg` are a subset of `supported`"""
if isinstance(arg, (list, dict)):
if isinstance(arg, dict):
_global_set: Set[str] = set()
for col in arg:
if isinstance(arg[col], list):
_global_set = _global_set.union(set(arg[col]))
elif isinstance(arg[col], dict):
_global_set = _global_set.union(set(arg[col].values()))
else:
_global_set.add(arg[col])
else:
_global_set = set(arg)
return bool(_global_set.issubset(supported))
elif isinstance(arg, str):
return arg in supported
return False
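# Illustrative examples for the check above (hypothetical column name "x"):
#   _aggs_optimized({"x": ["sum", "max"]}, OPTIMIZED_AGGS)  -> True
#   _aggs_optimized({"x": ["median"]}, OPTIMIZED_AGGS)      -> False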
@_dask_cudf_nvtx_annotate
def _groupby_optimized(gb):
"""Check that groupby input can use dask-cudf optimized codepath"""
return isinstance(gb.obj, DaskDataFrame) and (
isinstance(gb.by, str)
or (isinstance(gb.by, list) and all(isinstance(x, str) for x in gb.by))
)
def _make_name(col_name, sep="_"):
"""Combine elements of `col_name` into a single string, or no-op if
`col_name` is already a string
"""
if isinstance(col_name, str):
return col_name
return sep.join(name for name in col_name if name != "")
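# Illustrative examples of the helper above:
#   _make_name("x") == "x"
#   _make_name(("x", "sum"), sep="___") == "x___sum"
#   _make_name(("x", ""), sep="___") == "x"   (empty elements are dropped)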
@_dask_cudf_nvtx_annotate
def _groupby_partition_agg(df, gb_cols, aggs, columns, dropna, sort, sep):
"""Initial partition-level aggregation task.
This is the first operation to be executed on each input
partition in `groupby_agg`. Depending on `aggs`, four possible
groupby aggregations ("count", "sum", "min", and "max") are
performed. The result is then partitioned (by hashing `gb_cols`)
into a number of distinct dictionary elements. The number of
elements in the output dictionary (`split_out`) corresponds to
the number of partitions in the final output of `groupby_agg`.
"""
# Modify dict for initial (partition-wise) aggregations
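# Illustrative sketch of the decomposition performed below (hypothetical
# column name "x", default sep="___"): a request for {"x": ["mean", "std"]}
# becomes a partition-level {"x": ["count", "sum"], "x___pow2": ["sum"]},
# since mean/std/var are later reconstructed from counts, sums, and sums
# of squares in `_finalize_gb_agg`.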
_agg_dict = {}
for col, agg_list in aggs.items():
_agg_dict[col] = set()
for agg in agg_list:
if agg in ("mean", "std", "var"):
_agg_dict[col].add("count")
_agg_dict[col].add("sum")
else:
_agg_dict[col].add(agg)
_agg_dict[col] = list(_agg_dict[col])
if set(agg_list).intersection({"std", "var"}):
pow2_name = _make_name((col, "pow2"), sep=sep)
df[pow2_name] = df[col].astype("float64").pow(2)
_agg_dict[pow2_name] = ["sum"]
gb = df.groupby(gb_cols, dropna=dropna, as_index=False, sort=sort).agg(
_agg_dict
)
output_columns = [_make_name(name, sep=sep) for name in gb.columns]
gb.columns = output_columns
# Return with deterministic column ordering
return gb[sorted(output_columns)]
@_dask_cudf_nvtx_annotate
def _tree_node_agg(df, gb_cols, dropna, sort, sep):
"""Node in groupby-aggregation reduction tree.
The input DataFrame (`df`) corresponds to the
concatenated output of one or more `_groupby_partition_agg`
tasks. In this function, "sum", "min" and/or "max" groupby
aggregations will be used to combine the statistics for
duplicate keys.
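For example (illustrative, with the default ``sep="___"``): an input
column named ``"x___count"`` is re-aggregated with ``"sum"``, while a
column like ``"x___min"`` keeps its own aggregation, since the suffix
after ``sep`` identifies the partial aggregation to combine.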
"""
agg_dict = {}
for col in df.columns:
if col in gb_cols:
continue
agg = col.split(sep)[-1]
if agg in ("count", "sum"):
agg_dict[col] = ["sum"]
elif agg in OPTIMIZED_AGGS:
agg_dict[col] = [agg]
else:
raise ValueError(f"Unexpected aggregation: {agg}")
gb = df.groupby(gb_cols, dropna=dropna, as_index=False, sort=sort).agg(
agg_dict
)
# Don't include the last aggregation in the column names
output_columns = [
_make_name(name[:-1] if isinstance(name, tuple) else name, sep=sep)
for name in gb.columns
]
gb.columns = output_columns
# Return with deterministic column ordering
return gb[sorted(output_columns)]
@_dask_cudf_nvtx_annotate
def _var_agg(df, col, count_name, sum_name, pow2_sum_name, ddof=1):
"""Calculate variance (given count, sum, and sum-squared columns)."""
# Select count, sum, and sum-squared
n = df[count_name]
x = df[sum_name]
x2 = df[pow2_sum_name]
# Use sum-squared approach to get variance
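# (illustrative restatement of the computation below:
#   var = (sum(x**2) - sum(x)**2 / n) / max(n - ddof, 1),
# with the result set to NaN wherever n - ddof == 0)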
var = x2 - x**2 / n
div = n - ddof
div[div < 1] = 1 # Avoid division by 0
var /= div
# Set appropriate NaN elements
# (since we avoided 0-division)
var[(n - ddof) == 0] = np.nan
return var
@_dask_cudf_nvtx_annotate
def _finalize_gb_agg(
gb_in,
gb_cols,
aggs,
columns,
final_columns,
as_index,
dropna,
sort,
sep,
str_cols_out,
aggs_renames,
):
"""Final aggregation task.
This is the final operation on each output partition
of the `groupby_agg` algorithm. This function must
take care of higher-order aggregations, like "mean",
"std" and "var". We also need to deal with the column
index, the row index, and final sorting behavior.
"""
gb = _tree_node_agg(gb_in, gb_cols, dropna, sort, sep)
# Deal with higher-order aggregations
for col in columns:
agg_list = aggs.get(col, [])
agg_set = set(agg_list)
if agg_set.intersection({"mean", "std", "var"}):
count_name = _make_name((col, "count"), sep=sep)
sum_name = _make_name((col, "sum"), sep=sep)
if agg_set.intersection({"std", "var"}):
pow2_sum_name = _make_name((col, "pow2", "sum"), sep=sep)
var = _var_agg(gb, col, count_name, sum_name, pow2_sum_name)
if "var" in agg_list:
name_var = _make_name((col, "var"), sep=sep)
gb[name_var] = var
if "std" in agg_list:
name_std = _make_name((col, "std"), sep=sep)
gb[name_std] = np.sqrt(var)
gb.drop(columns=[pow2_sum_name], inplace=True)
if "mean" in agg_list:
mean_name = _make_name((col, "mean"), sep=sep)
gb[mean_name] = gb[sum_name] / gb[count_name]
if "sum" not in agg_list:
gb.drop(columns=[sum_name], inplace=True)
if "count" not in agg_list:
gb.drop(columns=[count_name], inplace=True)
if "collect" in agg_list:
collect_name = _make_name((col, "collect"), sep=sep)
gb[collect_name] = gb[collect_name].list.concat()
# Ensure sorted keys if `sort=True`
if sort:
gb = gb.sort_values(gb_cols)
# Set index if necessary
if as_index:
gb.set_index(gb_cols, inplace=True)
# Unflatten column names
col_array = []
agg_array = []
for col in gb.columns:
if col in gb_cols:
col_array.append(col)
agg_array.append("")
else:
name, agg = col.split(sep)
col_array.append(name)
agg_array.append(aggs_renames.get((name, agg), agg))
if str_cols_out:
gb.columns = col_array
else:
gb.columns = pd.MultiIndex.from_arrays([col_array, agg_array])
return gb[final_columns]
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/backends.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import warnings
from collections.abc import Iterator
import cupy as cp
import numpy as np
import pandas as pd
import pyarrow as pa
from pandas.api.types import is_scalar
from pandas.core.tools.datetimes import is_datetime64tz_dtype
import dask.dataframe as dd
from dask import config
from dask.array.dispatch import percentile_lookup
from dask.dataframe.backends import (
DataFrameBackendEntrypoint,
PandasBackendEntrypoint,
)
from dask.dataframe.core import get_parallel_type, meta_nonempty
from dask.dataframe.dispatch import (
categorical_dtype_dispatch,
concat_dispatch,
from_pyarrow_table_dispatch,
group_split_dispatch,
grouper_dispatch,
hash_object_dispatch,
is_categorical_dtype_dispatch,
make_meta_dispatch,
pyarrow_schema_dispatch,
to_pyarrow_table_dispatch,
tolist_dispatch,
union_categoricals_dispatch,
)
from dask.dataframe.utils import (
UNKNOWN_CATEGORIES,
_nonempty_scalar,
_scalar_from_dtype,
make_meta_obj,
)
from dask.sizeof import sizeof as sizeof_dispatch
from dask.utils import Dispatch, is_arraylike
import cudf
from cudf.api.types import is_string_dtype
from cudf.utils.nvtx_annotation import _dask_cudf_nvtx_annotate
from .core import DataFrame, Index, Series
get_parallel_type.register(cudf.DataFrame, lambda _: DataFrame)
get_parallel_type.register(cudf.Series, lambda _: Series)
get_parallel_type.register(cudf.BaseIndex, lambda _: Index)
@meta_nonempty.register(cudf.BaseIndex)
@_dask_cudf_nvtx_annotate
def _nonempty_index(idx):
if isinstance(idx, cudf.core.index.RangeIndex):
return cudf.core.index.RangeIndex(2, name=idx.name)
elif isinstance(idx, cudf.core.index.DatetimeIndex):
start = "1970-01-01"
data = np.array([start, "1970-01-02"], dtype=idx.dtype)
values = cudf.core.column.as_column(data)
return cudf.core.index.DatetimeIndex(values, name=idx.name)
elif isinstance(idx, cudf.StringIndex):
return cudf.StringIndex(["cat", "dog"], name=idx.name)
elif isinstance(idx, cudf.core.index.CategoricalIndex):
key = tuple(idx._data.keys())
assert len(key) == 1
categories = idx._data[key[0]].categories
codes = [0, 0]
ordered = idx._data[key[0]].ordered
values = cudf.core.column.build_categorical_column(
categories=categories, codes=codes, ordered=ordered
)
return cudf.core.index.CategoricalIndex(values, name=idx.name)
elif isinstance(idx, cudf.core.index.GenericIndex):
return cudf.core.index.GenericIndex(
np.arange(2, dtype=idx.dtype), name=idx.name
)
elif isinstance(idx, cudf.core.multiindex.MultiIndex):
levels = [meta_nonempty(lev) for lev in idx.levels]
codes = [[0, 0] for i in idx.levels]
return cudf.core.multiindex.MultiIndex(
levels=levels, codes=codes, names=idx.names
)
raise TypeError(f"Don't know how to handle index of type {type(idx)}")
def _nest_list_data(data, leaf_type):
"""
Helper for _get_non_empty_data which creates
nested list data
"""
data = [data]
while isinstance(leaf_type, cudf.ListDtype):
leaf_type = leaf_type.element_type
data = [data]
return data
@_dask_cudf_nvtx_annotate
def _get_non_empty_data(s):
if isinstance(s, cudf.core.column.CategoricalColumn):
categories = (
s.categories if len(s.categories) else [UNKNOWN_CATEGORIES]
)
codes = cudf.core.column.full(
size=2, fill_value=0, dtype=cudf._lib.types.size_type_dtype
)
ordered = s.ordered
data = cudf.core.column.build_categorical_column(
categories=categories, codes=codes, ordered=ordered
)
elif isinstance(s, cudf.core.column.ListColumn):
leaf_type = s.dtype.leaf_type
if is_string_dtype(leaf_type):
data = ["cat", "dog"]
else:
data = np.array([0, 1], dtype=leaf_type).tolist()
data = _nest_list_data(data, s.dtype) * 2
data = cudf.core.column.as_column(data, dtype=s.dtype)
elif isinstance(s, cudf.core.column.StructColumn):
struct_dtype = s.dtype
data = [{key: None for key in struct_dtype.fields.keys()}] * 2
data = cudf.core.column.as_column(data, dtype=s.dtype)
elif is_string_dtype(s.dtype):
data = pa.array(["cat", "dog"])
elif is_datetime64tz_dtype(s.dtype):
from cudf.utils.dtypes import get_time_unit
data = cudf.date_range("2001-01-01", periods=2, freq=get_time_unit(s))
data = data.tz_localize(str(s.dtype.tz))._column
else:
if pd.api.types.is_numeric_dtype(s.dtype):
data = cudf.core.column.as_column(
cp.arange(start=0, stop=2, dtype=s.dtype)
)
else:
data = cudf.core.column.as_column(
cp.arange(start=0, stop=2, dtype="int64")
).astype(s.dtype)
return data
@meta_nonempty.register(cudf.Series)
@_dask_cudf_nvtx_annotate
def _nonempty_series(s, idx=None):
if idx is None:
idx = _nonempty_index(s.index)
data = _get_non_empty_data(s._column)
return cudf.Series(data, name=s.name, index=idx)
@meta_nonempty.register(cudf.DataFrame)
@_dask_cudf_nvtx_annotate
def meta_nonempty_cudf(x):
idx = meta_nonempty(x.index)
columns_with_dtype = dict()
res = cudf.DataFrame(index=idx)
for col in x._data.names:
dtype = str(x._data[col].dtype)
if dtype in ("list", "struct", "category"):
# 1. Not possible to hash and store list & struct types
# as they can contain different levels of nesting or
# fields.
# 2. Not possible to hash `category` types as
# they often contain underlying types specific to each column.
res._data[col] = _get_non_empty_data(x._data[col])
else:
if dtype not in columns_with_dtype:
columns_with_dtype[dtype] = cudf.core.column.as_column(
_get_non_empty_data(x._data[col])
)
res._data[col] = columns_with_dtype[dtype]
return res
@make_meta_dispatch.register((cudf.Series, cudf.DataFrame))
@_dask_cudf_nvtx_annotate
def make_meta_cudf(x, index=None):
return x.head(0)
@make_meta_dispatch.register(cudf.BaseIndex)
@_dask_cudf_nvtx_annotate
def make_meta_cudf_index(x, index=None):
return x[:0]
@_dask_cudf_nvtx_annotate
def _empty_series(name, dtype, index=None):
if isinstance(dtype, str) and dtype == "category":
return cudf.Series(
[UNKNOWN_CATEGORIES], dtype=dtype, name=name, index=index
).iloc[:0]
return cudf.Series([], dtype=dtype, name=name, index=index)
@make_meta_obj.register(object)
@_dask_cudf_nvtx_annotate
def make_meta_object_cudf(x, index=None):
"""Create an empty cudf object containing the desired metadata.
Parameters
----------
x : dict, tuple, list, cudf.Series, cudf.DataFrame, cudf.Index,
dtype, scalar
To create a DataFrame, provide a `dict` mapping of `{name: dtype}`, or
an iterable of `(name, dtype)` tuples. To create a `Series`, provide a
tuple of `(name, dtype)`. If a cudf object, names, dtypes, and index
should match the desired output. If a dtype or scalar, a scalar of the
same dtype is returned.
index : cudf.Index, optional
Any cudf index to use in the metadata. If none provided, a
`RangeIndex` will be used.
Examples
--------
>>> make_meta([('a', 'i8'), ('b', 'O')])
Empty DataFrame
Columns: [a, b]
Index: []
>>> make_meta(('a', 'f8'))
Series([], Name: a, dtype: float64)
>>> make_meta('i8')
1
"""
if hasattr(x, "_meta"):
return x._meta
elif is_arraylike(x) and x.shape:
return x[:0]
if index is not None:
index = make_meta_dispatch(index)
if isinstance(x, dict):
return cudf.DataFrame(
{c: _empty_series(c, d, index=index) for (c, d) in x.items()},
index=index,
)
if isinstance(x, tuple) and len(x) == 2:
return _empty_series(x[0], x[1], index=index)
elif isinstance(x, (list, tuple)):
if not all(isinstance(i, tuple) and len(i) == 2 for i in x):
raise ValueError(
f"Expected iterable of tuples of (name, dtype), got {x}"
)
return cudf.DataFrame(
{c: _empty_series(c, d, index=index) for (c, d) in x},
columns=[c for c, d in x],
index=index,
)
elif not hasattr(x, "dtype") and x is not None:
# could be a string, a dtype object, or a python type. Skip `None`,
# because it is implicitly converted to `dtype('f8')`, which we don't
# want here.
try:
dtype = np.dtype(x)
return _scalar_from_dtype(dtype)
except Exception:
# Continue on to next check
pass
if is_scalar(x):
return _nonempty_scalar(x)
raise TypeError(f"Don't know how to create metadata from {x}")
@concat_dispatch.register((cudf.DataFrame, cudf.Series, cudf.BaseIndex))
@_dask_cudf_nvtx_annotate
def concat_cudf(
dfs,
axis=0,
join="outer",
uniform=False,
filter_warning=True,
sort=None,
ignore_index=False,
**kwargs,
):
assert join == "outer"
ignore_order = kwargs.get("ignore_order", False)
if ignore_order:
raise NotImplementedError(
"ignore_order parameter is not yet supported in dask-cudf"
)
return cudf.concat(dfs, axis=axis, ignore_index=ignore_index)
@categorical_dtype_dispatch.register(
(cudf.DataFrame, cudf.Series, cudf.BaseIndex)
)
@_dask_cudf_nvtx_annotate
def categorical_dtype_cudf(categories=None, ordered=False):
return cudf.CategoricalDtype(categories=categories, ordered=ordered)
@tolist_dispatch.register((cudf.Series, cudf.BaseIndex))
@_dask_cudf_nvtx_annotate
def tolist_cudf(obj):
return obj.to_arrow().to_pylist()
@is_categorical_dtype_dispatch.register(
(cudf.Series, cudf.BaseIndex, cudf.CategoricalDtype, Series)
)
@_dask_cudf_nvtx_annotate
def is_categorical_dtype_cudf(obj):
return cudf.api.types.is_categorical_dtype(obj)
@grouper_dispatch.register((cudf.Series, cudf.DataFrame))
def get_grouper_cudf(obj):
return cudf.core.groupby.Grouper
@percentile_lookup.register((cudf.Series, cp.ndarray, cudf.BaseIndex))
@_dask_cudf_nvtx_annotate
def percentile_cudf(a, q, interpolation="linear"):
# Cudf dispatch to the equivalent of `np.percentile`:
# https://numpy.org/doc/stable/reference/generated/numpy.percentile.html
a = cudf.Series(a)
# `a` is now a cudf.Series.
n = len(a)
if not len(a):
return None, n
if isinstance(q, Iterator):
q = list(q)
if cudf.api.types.is_categorical_dtype(a.dtype):
result = cp.percentile(a.cat.codes, q, interpolation=interpolation)
return (
pd.Categorical.from_codes(
result, a.dtype.categories, a.dtype.ordered
),
n,
)
if np.issubdtype(a.dtype, np.datetime64):
result = a.quantile(
[i / 100.0 for i in q], interpolation=interpolation
)
if q[0] == 0:
# https://github.com/dask/dask/issues/6864
result[0] = min(result[0], a.min())
return result.to_pandas(), n
if not np.issubdtype(a.dtype, np.number):
interpolation = "nearest"
return (
a.quantile(
[i / 100.0 for i in q], interpolation=interpolation
).to_pandas(),
n,
)
@pyarrow_schema_dispatch.register((cudf.DataFrame,))
def _get_pyarrow_schema_cudf(obj, preserve_index=None, **kwargs):
if kwargs:
warnings.warn(
"Ignoring the following arguments to "
f"`pyarrow_schema_dispatch`: {list(kwargs)}"
)
return _cudf_to_table(
meta_nonempty(obj), preserve_index=preserve_index
).schema
@to_pyarrow_table_dispatch.register(cudf.DataFrame)
def _cudf_to_table(obj, preserve_index=None, **kwargs):
if kwargs:
warnings.warn(
"Ignoring the following arguments to "
f"`to_pyarrow_table_dispatch`: {list(kwargs)}"
)
# TODO: Remove this logic when cudf#14159 is resolved
# (see: https://github.com/rapidsai/cudf/issues/14159)
if preserve_index and isinstance(obj.index, cudf.RangeIndex):
obj = obj.copy()
obj.index.name = (
obj.index.name
if obj.index.name is not None
else "__index_level_0__"
)
obj.index = obj.index._as_int_index()
return obj.to_arrow(preserve_index=preserve_index)
@from_pyarrow_table_dispatch.register(cudf.DataFrame)
def _table_to_cudf(obj, table, self_destruct=None, **kwargs):
# cudf ignores self_destruct.
kwargs.pop("self_destruct", None)
if kwargs:
warnings.warn(
f"Ignoring the following arguments to "
f"`from_pyarrow_table_dispatch`: {list(kwargs)}"
)
result = obj.from_arrow(table)
# TODO: Remove this logic when cudf#14159 is resolved
# (see: https://github.com/rapidsai/cudf/issues/14159)
if "__index_level_0__" in result.index.names:
assert len(result.index.names) == 1
result.index.name = None
return result
@union_categoricals_dispatch.register((cudf.Series, cudf.BaseIndex))
@_dask_cudf_nvtx_annotate
def union_categoricals_cudf(
to_union, sort_categories=False, ignore_order=False
):
return cudf.api.types._union_categoricals(
to_union, sort_categories=False, ignore_order=False
)
@hash_object_dispatch.register((cudf.DataFrame, cudf.Series))
@_dask_cudf_nvtx_annotate
def hash_object_cudf(frame, index=True):
if index:
frame = frame.reset_index()
return frame.hash_values()
@hash_object_dispatch.register(cudf.BaseIndex)
@_dask_cudf_nvtx_annotate
def hash_object_cudf_index(ind, index=None):
if isinstance(ind, cudf.MultiIndex):
return ind.to_frame(index=False).hash_values()
col = cudf.core.column.as_column(ind)
return cudf.Series(col).hash_values()
@group_split_dispatch.register((cudf.Series, cudf.DataFrame))
@_dask_cudf_nvtx_annotate
def group_split_cudf(df, c, k, ignore_index=False):
return dict(
zip(
range(k),
df.scatter_by_map(
c.astype(np.int32, copy=False),
map_size=k,
keep_index=not ignore_index,
),
)
)
@sizeof_dispatch.register(cudf.DataFrame)
@_dask_cudf_nvtx_annotate
def sizeof_cudf_dataframe(df):
return int(
sum(col.memory_usage for col in df._data.columns)
+ df._index.memory_usage()
)
@sizeof_dispatch.register((cudf.Series, cudf.BaseIndex))
@_dask_cudf_nvtx_annotate
def sizeof_cudf_series_index(obj):
return obj.memory_usage()
# TODO: Remove try/except when cudf is pinned to dask>=2023.10.0
try:
from dask.dataframe.dispatch import partd_encode_dispatch
@partd_encode_dispatch.register(cudf.DataFrame)
def _simple_cudf_encode(_):
# Basic pickle-based encoding for a partd k-v store
import pickle
from functools import partial
import partd
def join(dfs):
if not dfs:
return cudf.DataFrame()
else:
return cudf.concat(dfs)
dumps = partial(pickle.dumps, protocol=pickle.HIGHEST_PROTOCOL)
return partial(partd.Encode, dumps, pickle.loads, join)
except ImportError:
pass
def _default_backend(func, *args, **kwargs):
# Utility to call a dask.dataframe function with
# the default ("pandas") backend
# NOTE: Some `CudfBackendEntrypoint` methods need to
# invoke the "pandas"-version of the same method, but
# with custom kwargs (e.g. `engine`). In these cases,
# an explicit "pandas" config context is needed to
# avoid a recursive loop
with config.set({"dataframe.backend": "pandas"}):
return func(*args, **kwargs)
def _unsupported_kwargs(old, new, kwargs):
# Utility to raise a meaningful error when
# unsupported kwargs are encountered within
# ``to_backend_dispatch``
if kwargs:
raise ValueError(
f"Unsupported key-word arguments used in `to_backend` "
f"for {old}-to-{new} conversion: {kwargs}"
)
# Register cudf->pandas
to_pandas_dispatch = PandasBackendEntrypoint.to_backend_dispatch()
@to_pandas_dispatch.register((cudf.DataFrame, cudf.Series, cudf.Index))
def to_pandas_dispatch_from_cudf(data, nullable=False, **kwargs):
_unsupported_kwargs("cudf", "pandas", kwargs)
return data.to_pandas(nullable=nullable)
# Register pandas->cudf
to_cudf_dispatch = Dispatch("to_cudf_dispatch")
@to_cudf_dispatch.register((pd.DataFrame, pd.Series, pd.Index))
def to_cudf_dispatch_from_pandas(data, nan_as_null=None, **kwargs):
_unsupported_kwargs("pandas", "cudf", kwargs)
return cudf.from_pandas(data, nan_as_null=nan_as_null)
# Define "cudf" backend engine to be registered with Dask
class CudfBackendEntrypoint(DataFrameBackendEntrypoint):
"""Backend-entrypoint class for Dask-DataFrame
This class is registered under the name "cudf" for the
``dask.dataframe.backends`` entrypoint in ``setup.cfg``.
Dask-DataFrame will use the methods defined in this class
in place of ``dask.dataframe.<creation-method>`` when the
"dataframe.backend" configuration is set to "cudf":
Examples
--------
>>> import dask
>>> import dask.dataframe as dd
>>> with dask.config.set({"dataframe.backend": "cudf"}):
... ddf = dd.from_dict({"a": range(10)})
>>> type(ddf)
<class 'dask_cudf.core.DataFrame'>
"""
@classmethod
def to_backend_dispatch(cls):
return to_cudf_dispatch
@classmethod
def to_backend(cls, data: dd.core._Frame, **kwargs):
if isinstance(data._meta, (cudf.DataFrame, cudf.Series, cudf.Index)):
# Already a cudf-backed collection
_unsupported_kwargs("cudf", "cudf", kwargs)
return data
return data.map_partitions(cls.to_backend_dispatch(), **kwargs)
@staticmethod
def from_dict(
data,
npartitions,
orient="columns",
dtype=None,
columns=None,
constructor=cudf.DataFrame,
):
return _default_backend(
dd.from_dict,
data,
npartitions=npartitions,
orient=orient,
dtype=dtype,
columns=columns,
constructor=constructor,
)
@staticmethod
def read_parquet(*args, engine=None, **kwargs):
from dask_cudf.io.parquet import CudfEngine
return _default_backend(
dd.read_parquet,
*args,
engine=CudfEngine,
**kwargs,
)
@staticmethod
def read_json(*args, **kwargs):
from dask_cudf.io.json import read_json
return read_json(*args, **kwargs)
@staticmethod
def read_orc(*args, **kwargs):
from dask_cudf.io import read_orc
return read_orc(*args, **kwargs)
@staticmethod
def read_csv(*args, **kwargs):
from dask_cudf.io import read_csv
return read_csv(*args, **kwargs)
@staticmethod
def read_hdf(*args, **kwargs):
from dask_cudf import from_dask_dataframe
# HDF5 reader not yet implemented in cudf
warnings.warn(
"read_hdf is not yet implemented in cudf/dask_cudf. "
"Moving to cudf from pandas. Expect poor performance!"
)
return from_dask_dataframe(
_default_backend(dd.read_hdf, *args, **kwargs)
)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/core.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
import math
import textwrap
import warnings
import numpy as np
import pandas as pd
from tlz import partition_all
from dask import dataframe as dd
from dask.base import normalize_token, tokenize
from dask.dataframe.core import (
Scalar,
handle_out,
make_meta as dask_make_meta,
map_partitions,
)
from dask.dataframe.utils import raise_on_meta_error
from dask.highlevelgraph import HighLevelGraph
from dask.utils import M, OperatorMethodMixin, apply, derived_from, funcname
import cudf
from cudf import _lib as libcudf
from cudf.utils.nvtx_annotation import _dask_cudf_nvtx_annotate
from dask_cudf import sorting
from dask_cudf.accessors import ListMethods, StructMethods
from dask_cudf.sorting import _get_shuffle_type
class _Frame(dd.core._Frame, OperatorMethodMixin):
"""Superclass for DataFrame and Series
Parameters
----------
dsk : dict
The dask graph to compute this DataFrame
name : str
The key prefix that specifies which keys in the dask comprise this
particular DataFrame / Series
meta : cudf.DataFrame, cudf.Series, or cudf.Index
An empty cudf object with names, dtypes, and indices matching the
expected output.
divisions : tuple of index values
Values along which we partition our blocks on the index
"""
def _is_partition_type(self, meta):
return isinstance(meta, self._partition_type)
def __repr__(self):
s = "<dask_cudf.%s | %d tasks | %d npartitions>"
return s % (type(self).__name__, len(self.dask), self.npartitions)
@_dask_cudf_nvtx_annotate
def to_dask_dataframe(self, **kwargs):
"""Create a dask.dataframe object from a dask_cudf object"""
nullable_pd_dtype = kwargs.get("nullable_pd_dtype", False)
return self.map_partitions(
M.to_pandas, nullable_pd_dtype=nullable_pd_dtype
)
concat = dd.concat
normalize_token.register(_Frame, lambda a: a._name)
class DataFrame(_Frame, dd.core.DataFrame):
"""
A distributed Dask DataFrame where the backing dataframe is a
:class:`cuDF DataFrame <cudf:cudf.DataFrame>`.
Typically you would not construct this object directly, but rather
use one of Dask-cuDF's IO routines.
Most operations on :doc:`Dask DataFrames <dask:dataframe>` are
supported, with many of the same caveats.
"""
_partition_type = cudf.DataFrame
@_dask_cudf_nvtx_annotate
def _assign_column(self, k, v):
def assigner(df, k, v):
out = df.copy()
out[k] = v
return out
meta = assigner(self._meta, k, dask_make_meta(v))
return self.map_partitions(assigner, k, v, meta=meta)
@_dask_cudf_nvtx_annotate
def apply_rows(self, func, incols, outcols, kwargs=None, cache_key=None):
import uuid
if kwargs is None:
kwargs = {}
if cache_key is None:
cache_key = uuid.uuid4()
def do_apply_rows(df, func, incols, outcols, kwargs):
return df.apply_rows(
func, incols, outcols, kwargs, cache_key=cache_key
)
meta = do_apply_rows(self._meta, func, incols, outcols, kwargs)
return self.map_partitions(
do_apply_rows, func, incols, outcols, kwargs, meta=meta
)
@_dask_cudf_nvtx_annotate
def merge(self, other, shuffle=None, **kwargs):
on = kwargs.pop("on", None)
if isinstance(on, tuple):
on = list(on)
return super().merge(
other, on=on, shuffle=_get_shuffle_type(shuffle), **kwargs
)
@_dask_cudf_nvtx_annotate
def join(self, other, shuffle=None, **kwargs):
# CuDF doesn't support "right" join yet
how = kwargs.pop("how", "left")
if how == "right":
return other.join(other=self, how="left", **kwargs)
on = kwargs.pop("on", None)
if isinstance(on, tuple):
on = list(on)
return super().join(
other, how=how, on=on, shuffle=_get_shuffle_type(shuffle), **kwargs
)
@_dask_cudf_nvtx_annotate
def set_index(
self, other, sorted=False, divisions=None, shuffle=None, **kwargs
):
pre_sorted = sorted
del sorted
if (
divisions == "quantile"
or isinstance(divisions, (cudf.DataFrame, cudf.Series))
or (
isinstance(other, str)
and cudf.api.types.is_string_dtype(self[other].dtype)
)
):
# Let upstream-dask handle "pre-sorted" case
if pre_sorted:
return dd.shuffle.set_sorted_index(
self, other, divisions=divisions, **kwargs
)
by = other
if not isinstance(other, list):
by = [by]
if len(by) > 1:
raise ValueError("Dask does not support MultiIndex (yet).")
if divisions == "quantile":
divisions = None
# Use dask_cudf's sort_values
df = self.sort_values(
by,
max_branch=kwargs.get("max_branch", None),
divisions=divisions,
set_divisions=True,
ignore_index=True,
shuffle=shuffle,
)
# Ignore divisions if it's a dataframe
if isinstance(divisions, cudf.DataFrame):
divisions = None
# Set index and repartition
df2 = df.map_partitions(
sorting.set_index_post,
index_name=other,
drop=kwargs.get("drop", True),
column_dtype=df.columns.dtype,
)
npartitions = kwargs.get("npartitions", self.npartitions)
partition_size = kwargs.get("partition_size", None)
if partition_size:
return df2.repartition(partition_size=partition_size)
if not divisions and df2.npartitions != npartitions:
return df2.repartition(npartitions=npartitions)
if divisions and df2.npartitions != len(divisions) - 1:
return df2.repartition(divisions=divisions)
return df2
return super().set_index(
other,
sorted=pre_sorted,
shuffle=_get_shuffle_type(shuffle),
divisions=divisions,
**kwargs,
)
@_dask_cudf_nvtx_annotate
def sort_values(
self,
by,
ignore_index=False,
max_branch=None,
divisions=None,
set_divisions=False,
ascending=True,
na_position="last",
sort_function=None,
sort_function_kwargs=None,
shuffle=None,
**kwargs,
):
if kwargs:
raise ValueError(
f"Unsupported input arguments passed : {list(kwargs.keys())}"
)
df = sorting.sort_values(
self,
by,
max_branch=max_branch,
divisions=divisions,
set_divisions=set_divisions,
ignore_index=ignore_index,
ascending=ascending,
na_position=na_position,
shuffle=shuffle,
sort_function=sort_function,
sort_function_kwargs=sort_function_kwargs,
)
if ignore_index:
return df.reset_index(drop=True)
return df
@_dask_cudf_nvtx_annotate
def to_parquet(self, path, *args, **kwargs):
"""Calls dask.dataframe.io.to_parquet with CudfEngine backend"""
from dask_cudf.io import to_parquet
return to_parquet(self, path, *args, **kwargs)
@_dask_cudf_nvtx_annotate
def to_orc(self, path, **kwargs):
"""Calls dask_cudf.io.to_orc"""
from dask_cudf.io import to_orc
return to_orc(self, path, **kwargs)
@derived_from(pd.DataFrame)
@_dask_cudf_nvtx_annotate
def var(
self,
axis=None,
skipna=True,
ddof=1,
split_every=False,
dtype=None,
out=None,
naive=False,
):
axis = self._validate_axis(axis)
meta = self._meta_nonempty.var(axis=axis, skipna=skipna)
if axis == 1:
result = map_partitions(
M.var,
self,
meta=meta,
token=self._token_prefix + "var",
axis=axis,
skipna=skipna,
ddof=ddof,
)
return handle_out(out, result)
elif naive:
return _naive_var(self, meta, skipna, ddof, split_every, out)
else:
return _parallel_var(self, meta, skipna, split_every, out)
@_dask_cudf_nvtx_annotate
def shuffle(self, *args, shuffle=None, **kwargs):
"""Wraps dask.dataframe DataFrame.shuffle method"""
return super().shuffle(
*args, shuffle=_get_shuffle_type(shuffle), **kwargs
)
@_dask_cudf_nvtx_annotate
def groupby(self, by=None, **kwargs):
from .groupby import CudfDataFrameGroupBy
return CudfDataFrameGroupBy(self, by=by, **kwargs)
@_dask_cudf_nvtx_annotate
def sum_of_squares(x):
x = x.astype("f8")._column
outcol = libcudf.reduce.reduce("sum_of_squares", x)
return cudf.Series(outcol)
@_dask_cudf_nvtx_annotate
def var_aggregate(x2, x, n, ddof):
try:
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
result = (x2 / n) - (x / n) ** 2
if ddof != 0:
result = result * n / (n - ddof)
return result
except ZeroDivisionError:
return np.float64(np.nan)
@_dask_cudf_nvtx_annotate
def nlargest_agg(x, **kwargs):
return cudf.concat(x).nlargest(**kwargs)
@_dask_cudf_nvtx_annotate
def nsmallest_agg(x, **kwargs):
return cudf.concat(x).nsmallest(**kwargs)
class Series(_Frame, dd.core.Series):
_partition_type = cudf.Series
@_dask_cudf_nvtx_annotate
def count(self, split_every=False):
return reduction(
[self],
chunk=M.count,
aggregate=np.sum,
split_every=split_every,
meta="i8",
)
@_dask_cudf_nvtx_annotate
def mean(self, split_every=False):
sum = self.sum(split_every=split_every)
n = self.count(split_every=split_every)
return sum / n
@derived_from(pd.DataFrame)
@_dask_cudf_nvtx_annotate
def var(
self,
axis=None,
skipna=True,
ddof=1,
split_every=False,
dtype=None,
out=None,
naive=False,
):
axis = self._validate_axis(axis)
meta = self._meta_nonempty.var(axis=axis, skipna=skipna)
if axis == 1:
result = map_partitions(
M.var,
self,
meta=meta,
token=self._token_prefix + "var",
axis=axis,
skipna=skipna,
ddof=ddof,
)
return handle_out(out, result)
elif naive:
return _naive_var(self, meta, skipna, ddof, split_every, out)
else:
return _parallel_var(self, meta, skipna, split_every, out)
@_dask_cudf_nvtx_annotate
def groupby(self, *args, **kwargs):
from .groupby import CudfSeriesGroupBy
return CudfSeriesGroupBy(self, *args, **kwargs)
@property # type: ignore
@_dask_cudf_nvtx_annotate
def list(self):
return ListMethods(self)
@property # type: ignore
@_dask_cudf_nvtx_annotate
def struct(self):
return StructMethods(self)
class Index(Series, dd.core.Index):
_partition_type = cudf.Index # type: ignore
@_dask_cudf_nvtx_annotate
def _naive_var(ddf, meta, skipna, ddof, split_every, out):
num = ddf._get_numeric_data()
x = 1.0 * num.sum(skipna=skipna, split_every=split_every)
x2 = 1.0 * (num**2).sum(skipna=skipna, split_every=split_every)
n = num.count(split_every=split_every)
name = ddf._token_prefix + "var"
result = map_partitions(
var_aggregate, x2, x, n, token=name, meta=meta, ddof=ddof
)
if isinstance(ddf, DataFrame):
result.divisions = (min(ddf.columns), max(ddf.columns))
return handle_out(out, result)
@_dask_cudf_nvtx_annotate
def _parallel_var(ddf, meta, skipna, split_every, out):
def _local_var(x, skipna):
if skipna:
n = x.count()
avg = x.mean(skipna=skipna)
else:
# Not skipping nulls, so might as well
# avoid the full `count` operation
n = len(x)
avg = x.sum(skipna=skipna) / n
m2 = ((x - avg) ** 2).sum(skipna=skipna)
return n, avg, m2
def _aggregate_var(parts):
n, avg, m2 = parts[0]
for i in range(1, len(parts)):
n_a, avg_a, m2_a = n, avg, m2
n_b, avg_b, m2_b = parts[i]
n = n_a + n_b
avg = (n_a * avg_a + n_b * avg_b) / n
delta = avg_b - avg_a
m2 = m2_a + m2_b + delta**2 * n_a * n_b / n
return n, avg, m2
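# The pairwise merge above combines per-partition (count, mean, M2)
# statistics without revisiting the data: counts add, means combine as a
# weighted average, and the M2 terms combine as
#   m2 = m2_a + m2_b + delta**2 * n_a * n_b / n
# (the standard parallel variance update).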
def _finalize_var(vals):
n, _, m2 = vals
return m2 / (n - 1)
# Build graph
nparts = ddf.npartitions
if not split_every:
split_every = nparts
name = "var-" + tokenize(skipna, split_every, out)
local_name = "local-" + name
num = ddf._get_numeric_data()
dsk = {
(local_name, n, 0): (_local_var, (num._name, n), skipna)
for n in range(nparts)
}
# Use reduction tree
widths = [nparts]
while nparts > 1:
nparts = math.ceil(nparts / split_every)
widths.append(nparts)
height = len(widths)
for depth in range(1, height):
for group in range(widths[depth]):
p_max = widths[depth - 1]
lstart = split_every * group
lstop = min(lstart + split_every, p_max)
node_list = [
(local_name, p, depth - 1) for p in range(lstart, lstop)
]
dsk[(local_name, group, depth)] = (_aggregate_var, node_list)
if height == 1:
group = depth = 0
dsk[(name, 0)] = (_finalize_var, (local_name, group, depth))
graph = HighLevelGraph.from_collections(name, dsk, dependencies=[num, ddf])
result = dd.core.new_dd_object(graph, name, meta, (None, None))
if isinstance(ddf, DataFrame):
result.divisions = (min(ddf.columns), max(ddf.columns))
return handle_out(out, result)
@_dask_cudf_nvtx_annotate
def _extract_meta(x):
"""
Extract internal cache data (``_meta``) from dask_cudf objects
"""
if isinstance(x, (Scalar, _Frame)):
return x._meta
elif isinstance(x, list):
return [_extract_meta(_x) for _x in x]
elif isinstance(x, tuple):
return tuple(_extract_meta(_x) for _x in x)
elif isinstance(x, dict):
return {k: _extract_meta(v) for k, v in x.items()}
return x
@_dask_cudf_nvtx_annotate
def _emulate(func, *args, **kwargs):
"""
Apply a function using args / kwargs. If arguments contain dd.DataFrame /
dd.Series, the internal cache (``_meta``) is used for the calculation
"""
with raise_on_meta_error(funcname(func)):
return func(*_extract_meta(args), **_extract_meta(kwargs))
@_dask_cudf_nvtx_annotate
def align_partitions(args):
"""Align partitions between dask_cudf objects.
Note that if all divisions are unknown, but have equal npartitions, then
they will be passed through unchanged.
"""
dfs = [df for df in args if isinstance(df, _Frame)]
if not dfs:
return args
divisions = dfs[0].divisions
if not all(df.divisions == divisions for df in dfs):
raise NotImplementedError("Aligning mismatched partitions")
return args
@_dask_cudf_nvtx_annotate
def reduction(
args,
chunk=None,
aggregate=None,
combine=None,
meta=None,
token=None,
chunk_kwargs=None,
aggregate_kwargs=None,
combine_kwargs=None,
split_every=None,
**kwargs,
):
"""Generic tree reduction operation.
Parameters
----------
args :
Positional arguments for the `chunk` function. All `dask.dataframe`
objects should be partitioned and indexed equivalently.
chunk : function [block-per-arg] -> block
Function to operate on each block of data
aggregate : function list-of-blocks -> block
Function to operate on the list of results of chunk
combine : function list-of-blocks -> block, optional
Function to operate on intermediate lists of results of chunk
in a tree-reduction. If not provided, defaults to aggregate.
$META
token : str, optional
The name to use for the output keys.
chunk_kwargs : dict, optional
Keywords for the chunk function only.
aggregate_kwargs : dict, optional
Keywords for the aggregate function only.
combine_kwargs : dict, optional
Keywords for the combine function only.
split_every : int, optional
Group partitions into groups of this size while performing a
tree-reduction. If set to False, no tree-reduction will be used,
and all intermediates will be concatenated and passed to ``aggregate``.
Default is 8.
kwargs :
All remaining keywords will be passed to ``chunk``, ``aggregate``, and
``combine``.
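Examples
--------
An illustrative sketch, mirroring how :meth:`Series.count` in this
module uses the helper (assumes a dask_cudf ``Series`` named ``s``):

>>> result = reduction([s], chunk=M.count, aggregate=np.sum, meta="i8")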
"""
if chunk_kwargs is None:
chunk_kwargs = dict()
if aggregate_kwargs is None:
aggregate_kwargs = dict()
chunk_kwargs.update(kwargs)
aggregate_kwargs.update(kwargs)
if combine is None:
if combine_kwargs:
raise ValueError("`combine_kwargs` provided with no `combine`")
combine = aggregate
combine_kwargs = aggregate_kwargs
else:
if combine_kwargs is None:
combine_kwargs = dict()
combine_kwargs.update(kwargs)
if not isinstance(args, (tuple, list)):
args = [args]
npartitions = {arg.npartitions for arg in args if isinstance(arg, _Frame)}
if len(npartitions) > 1:
raise ValueError("All arguments must have same number of partitions")
npartitions = npartitions.pop()
if split_every is None:
split_every = 8
elif split_every is False:
split_every = npartitions
elif split_every < 2 or not isinstance(split_every, int):
raise ValueError("split_every must be an integer >= 2")
token_key = tokenize(
token or (chunk, aggregate),
meta,
args,
chunk_kwargs,
aggregate_kwargs,
combine_kwargs,
split_every,
)
# Chunk
a = f"{token or funcname(chunk)}-chunk-{token_key}"
if len(args) == 1 and isinstance(args[0], _Frame) and not chunk_kwargs:
dsk = {
(a, 0, i): (chunk, key)
for i, key in enumerate(args[0].__dask_keys__())
}
else:
dsk = {
(a, 0, i): (
apply,
chunk,
[(x._name, i) if isinstance(x, _Frame) else x for x in args],
chunk_kwargs,
)
for i in range(args[0].npartitions)
}
# Combine
b = f"{token or funcname(combine)}-combine-{token_key}"
k = npartitions
depth = 0
while k > split_every:
for part_i, inds in enumerate(partition_all(split_every, range(k))):
conc = (list, [(a, depth, i) for i in inds])
dsk[(b, depth + 1, part_i)] = (
(apply, combine, [conc], combine_kwargs)
if combine_kwargs
else (combine, conc)
)
k = part_i + 1
a = b
depth += 1
# Aggregate
b = f"{token or funcname(aggregate)}-agg-{token_key}"
conc = (list, [(a, depth, i) for i in range(k)])
if aggregate_kwargs:
dsk[(b, 0)] = (apply, aggregate, [conc], aggregate_kwargs)
else:
dsk[(b, 0)] = (aggregate, conc)
if meta is None:
meta_chunk = _emulate(apply, chunk, args, chunk_kwargs)
meta = _emulate(apply, aggregate, [[meta_chunk]], aggregate_kwargs)
meta = dask_make_meta(meta)
graph = HighLevelGraph.from_collections(b, dsk, dependencies=args)
return dd.core.new_dd_object(graph, b, meta, (None, None))
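# Illustrative sketch of calling ``reduction`` directly (assumptions: a tiny
# Series and plain Python callables; this is not an excerpt from the library):
#   >>> import cudf
#   >>> dser = from_cudf(cudf.Series([1, 2, 3, 4]), npartitions=2)
#   >>> total = reduction(
#   ...     [dser],
#   ...     chunk=lambda s: s.sum(),             # one partial sum per partition
#   ...     aggregate=lambda parts: sum(parts),  # combine the partial sums
#   ... )
#   >>> total.compute()  # 10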
@_dask_cudf_nvtx_annotate
def from_cudf(data, npartitions=None, chunksize=None, sort=True, name=None):
if isinstance(getattr(data, "index", None), cudf.MultiIndex):
raise NotImplementedError(
"dask_cudf does not support MultiIndex Dataframes."
)
name = name or ("from_cudf-" + tokenize(data, npartitions or chunksize))
return dd.from_pandas(
data,
npartitions=npartitions,
chunksize=chunksize,
sort=sort,
name=name,
)
from_cudf.__doc__ = (
textwrap.dedent(
"""
Create a :class:`.DataFrame` from a :class:`cudf.DataFrame`.
This function is a thin wrapper around
:func:`dask.dataframe.from_pandas`, accepting the same
        arguments (described below) except that it operates on cuDF
rather than pandas objects.\n
"""
)
+ textwrap.dedent(dd.from_pandas.__doc__)
)
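# Illustrative usage sketch (not part of the module):
#   >>> import cudf, dask_cudf
#   >>> gdf = cudf.DataFrame({"x": [1, 2, 3, 4], "y": [10.0, 20.0, 30.0, 40.0]})
#   >>> ddf = dask_cudf.from_cudf(gdf, npartitions=2)
#   >>> ddf.npartitions
#   2
#   >>> ddf.x.sum().compute()  # 10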
@_dask_cudf_nvtx_annotate
def from_dask_dataframe(df):
"""
Convert a Dask :class:`dask.dataframe.DataFrame` to a Dask-cuDF
one.
Parameters
----------
df : dask.dataframe.DataFrame
The Dask dataframe to convert
Returns
-------
dask_cudf.DataFrame : A new Dask collection backed by cuDF objects
"""
return df.map_partitions(cudf.from_pandas)
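# Illustrative usage sketch (assumes a pandas-backed Dask collection to start
# from; not part of the module):
#   >>> import pandas as pd
#   >>> import dask.dataframe as dd
#   >>> import dask_cudf
#   >>> pddf = dd.from_pandas(pd.DataFrame({"a": [1, 2, 3]}), npartitions=2)
#   >>> gddf = dask_cudf.from_dask_dataframe(pddf)  # partitions become cudf.DataFrame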
for name in (
"add",
"sub",
"mul",
"truediv",
"floordiv",
"mod",
"pow",
"radd",
"rsub",
"rmul",
"rtruediv",
"rfloordiv",
"rmod",
"rpow",
):
meth = getattr(cudf.DataFrame, name)
DataFrame._bind_operator_method(name, meth, original=cudf.Series)
meth = getattr(cudf.Series, name)
Series._bind_operator_method(name, meth, original=cudf.Series)
for name in ("lt", "gt", "le", "ge", "ne", "eq"):
meth = getattr(cudf.Series, name)
Series._bind_comparison_method(name, meth, original=cudf.Series)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/_version.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib.resources
__version__ = (
importlib.resources.files("dask_cudf")
.joinpath("VERSION")
.read_text()
.strip()
)
__git_commit__ = ""
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/__init__.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
from dask.dataframe import from_delayed
import cudf
from . import backends
from ._version import __git_commit__, __version__
from .core import DataFrame, Series, concat, from_cudf, from_dask_dataframe
from .groupby import groupby_agg
from .io import read_csv, read_json, read_orc, read_text, to_orc
try:
from .io import read_parquet
except ImportError:
pass
__all__ = [
"DataFrame",
"Series",
"from_cudf",
"from_dask_dataframe",
"concat",
"from_delayed",
]
if not hasattr(cudf.DataFrame, "mean"):
cudf.DataFrame.mean = None
del cudf
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/sorting.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from collections.abc import Iterator
import cupy
import numpy as np
import tlz as toolz
from dask import config
from dask.base import tokenize
from dask.dataframe import methods
from dask.dataframe.core import DataFrame, Index, Series
from dask.dataframe.shuffle import rearrange_by_column
from dask.highlevelgraph import HighLevelGraph
from dask.utils import M
import cudf as gd
from cudf.api.types import is_categorical_dtype
from cudf.utils.nvtx_annotation import _dask_cudf_nvtx_annotate
_SHUFFLE_SUPPORT = ("tasks", "p2p") # "disk" not supported
@_dask_cudf_nvtx_annotate
def set_index_post(df, index_name, drop, column_dtype):
df2 = df.set_index(index_name, drop=drop)
df2.columns = df2.columns.astype(column_dtype)
return df2
@_dask_cudf_nvtx_annotate
def _set_partitions_pre(s, divisions, ascending=True, na_position="last"):
if ascending:
partitions = divisions.searchsorted(s, side="right") - 1
else:
partitions = (
len(divisions) - divisions.searchsorted(s, side="right") - 1
)
partitions[(partitions < 0) | (partitions >= len(divisions) - 1)] = (
0 if ascending else (len(divisions) - 2)
)
partitions[s._columns[0].isnull().values] = (
len(divisions) - 2 if na_position == "last" else 0
)
return partitions
@_dask_cudf_nvtx_annotate
def _quantile(a, q):
n = len(a)
if not len(a):
return None, n
return (
a.quantile(q=q.tolist(), interpolation="nearest", method="table"),
n,
)
@_dask_cudf_nvtx_annotate
def merge_quantiles(finalq, qs, vals):
"""Combine several quantile calculations of different data.
[NOTE: Same logic as dask.array merge_percentiles]
"""
if isinstance(finalq, Iterator):
finalq = list(finalq)
finalq = np.array(finalq)
qs = list(map(list, qs))
vals = list(vals)
vals, Ns = zip(*vals)
Ns = list(Ns)
L = list(zip(*[(q, val, N) for q, val, N in zip(qs, vals, Ns) if N]))
if not L:
raise ValueError("No non-trivial arrays found")
qs, vals, Ns = L
if len(vals) != len(qs) or len(Ns) != len(qs):
raise ValueError("qs, vals, and Ns parameters must be the same length")
# transform qs and Ns into number of observations between quantiles
counts = []
for q, N in zip(qs, Ns):
count = np.empty(len(q))
count[1:] = np.diff(q)
count[0] = q[0]
count *= N
counts.append(count)
def _append_counts(val, count):
val["_counts"] = count
return val
# Sort by calculated quantile values, then number of observations.
combined_vals_counts = gd.core.reshape._merge_sorted(
[*map(_append_counts, vals, counts)]
)
combined_counts = cupy.asnumpy(combined_vals_counts["_counts"].values)
combined_vals = combined_vals_counts.drop(columns=["_counts"])
# quantile-like, but scaled by total number of observations
combined_q = np.cumsum(combined_counts)
# rescale finalq quantiles to match combined_q
desired_q = finalq * sum(Ns)
# TODO: Support other interpolation methods
# For now - Always use "nearest" for interpolation
left = np.searchsorted(combined_q, desired_q, side="left")
right = np.searchsorted(combined_q, desired_q, side="right") - 1
np.minimum(left, len(combined_vals) - 1, left) # don't exceed max index
lower = np.minimum(left, right)
upper = np.maximum(left, right)
lower_residual = np.abs(combined_q[lower] - desired_q)
upper_residual = np.abs(combined_q[upper] - desired_q)
mask = lower_residual > upper_residual
index = lower # alias; we no longer need lower
index[mask] = upper[mask]
rv = combined_vals.iloc[index]
return rv.reset_index(drop=True)
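# Worked sketch of the (q, N) -> counts transformation used above (values are
# illustrative): one input reporting quantiles q = [0.0, 0.5, 1.0] over N = 4
# observations contributes per-quantile weights [0, 2, 2], so larger partitions
# pull the merged "nearest" quantile toward their values.
#   >>> q, N = np.array([0.0, 0.5, 1.0]), 4
#   >>> count = np.empty(len(q))
#   >>> count[1:] = np.diff(q)
#   >>> count[0] = q[0]
#   >>> count * N
#   array([0., 2., 2.])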
@_dask_cudf_nvtx_annotate
def _approximate_quantile(df, q):
"""Approximate quantiles of DataFrame or Series.
[NOTE: Same logic as dask.dataframe Series quantile]
"""
# current implementation needs q to be sorted so
# sort if array-like, otherwise leave it alone
q_ndarray = np.array(q)
if q_ndarray.ndim > 0:
q_ndarray.sort(kind="mergesort")
q = q_ndarray
# Lets assume we are dealing with a DataFrame throughout
if isinstance(df, (Series, Index)):
df = df.to_frame()
assert isinstance(df, DataFrame)
final_type = df._meta._constructor
# Create metadata
meta = df._meta_nonempty.quantile(q=q, method="table")
# Define final action (create df with quantiles as index)
def finalize_tsk(tsk):
return (final_type, tsk)
return_type = df.__class__
# pandas/cudf uses quantile in [0, 1]
# numpy / cupy uses [0, 100]
qs = np.asarray(q)
token = tokenize(df, qs)
if len(qs) == 0:
name = "quantiles-" + token
empty_index = gd.Index([], dtype=float)
return Series(
{
(name, 0): final_type(
{col: [] for col in df.columns},
name=df.name,
index=empty_index,
)
},
name,
df._meta,
[None, None],
)
else:
new_divisions = [np.min(q), np.max(q)]
name = "quantiles-1-" + token
val_dsk = {
(name, i): (_quantile, key, qs)
for i, key in enumerate(df.__dask_keys__())
}
name2 = "quantiles-2-" + token
merge_dsk = {
(name2, 0): finalize_tsk(
(merge_quantiles, qs, [qs] * df.npartitions, sorted(val_dsk))
)
}
dsk = toolz.merge(val_dsk, merge_dsk)
graph = HighLevelGraph.from_collections(name2, dsk, dependencies=[df])
df = return_type(graph, name2, meta, new_divisions)
def set_quantile_index(df):
df.index = q
return df
df = df.map_partitions(set_quantile_index, meta=meta)
return df
@_dask_cudf_nvtx_annotate
def quantile_divisions(df, by, npartitions):
qn = np.linspace(0.0, 1.0, npartitions + 1).tolist()
divisions = _approximate_quantile(df[by], qn).compute()
columns = divisions.columns
# TODO: Make sure divisions are correct for all dtypes..
if (
len(columns) == 1
and df[columns[0]].dtype != "object"
and not is_categorical_dtype(df[columns[0]].dtype)
):
dtype = df[columns[0]].dtype
divisions = divisions[columns[0]].astype("int64")
divisions.iloc[-1] += 1
divisions = sorted(
divisions.drop_duplicates().astype(dtype).to_arrow().tolist(),
key=lambda x: (x is None, x),
)
else:
for col in columns:
dtype = df[col].dtype
if dtype != "object":
divisions[col] = divisions[col].astype("int64")
divisions[col].iloc[-1] += 1
divisions[col] = divisions[col].astype(dtype)
else:
if last := divisions[col].iloc[-1]:
val = chr(ord(last[0]) + 1)
else:
val = "this string intentionally left empty" # any but ""
divisions[col].iloc[-1] = val
divisions = divisions.drop_duplicates().sort_index()
return divisions
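# Sketch of the boundary probabilities computed above for npartitions=3
# (illustrative only): the divisions are the sort key's values at these evenly
# spaced quantiles.
#   >>> np.linspace(0.0, 1.0, 3 + 1).tolist()
#   [0.0, 0.3333333333333333, 0.6666666666666666, 1.0]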
@_dask_cudf_nvtx_annotate
def sort_values(
df,
by,
max_branch=None,
divisions=None,
set_divisions=False,
ignore_index=False,
ascending=True,
na_position="last",
shuffle=None,
sort_function=None,
sort_function_kwargs=None,
):
"""Sort by the given list/tuple of column names."""
if not isinstance(ascending, bool):
raise ValueError("ascending must be either True or False")
if na_position not in ("first", "last"):
raise ValueError("na_position must be either 'first' or 'last'")
npartitions = df.npartitions
if isinstance(by, tuple):
by = list(by)
elif not isinstance(by, list):
by = [by]
# parse custom sort function / kwargs if provided
sort_kwargs = {
"by": by,
"ascending": ascending,
"na_position": na_position,
}
if sort_function is None:
sort_function = M.sort_values
if sort_function_kwargs is not None:
sort_kwargs.update(sort_function_kwargs)
# handle single partition case
if npartitions == 1:
return df.map_partitions(sort_function, **sort_kwargs)
# Step 1 - Calculate new divisions (if necessary)
if divisions is None:
divisions = quantile_divisions(df, by, npartitions)
# Step 2 - Perform repartitioning shuffle
meta = df._meta._constructor_sliced([0])
if not isinstance(divisions, (gd.Series, gd.DataFrame)):
dtype = df[by[0]].dtype
divisions = df._meta._constructor_sliced(divisions, dtype=dtype)
partitions = df[by].map_partitions(
_set_partitions_pre,
divisions=divisions,
ascending=ascending,
na_position=na_position,
meta=meta,
)
df2 = df.assign(_partitions=partitions)
df3 = rearrange_by_column(
df2,
"_partitions",
max_branch=max_branch,
npartitions=len(divisions) - 1,
shuffle=_get_shuffle_type(shuffle),
ignore_index=ignore_index,
).drop(columns=["_partitions"])
df3.divisions = (None,) * (df3.npartitions + 1)
# Step 3 - Return final sorted df
df4 = df3.map_partitions(sort_function, **sort_kwargs)
if not isinstance(divisions, gd.DataFrame) and set_divisions:
# Can't have multi-column divisions elsewhere in dask (yet)
df4.divisions = tuple(methods.tolist(divisions))
return df4
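# Illustrative usage sketch (not an excerpt from the library): shuffle-sort a
# small dask_cudf collection by one column using the function above.
#   >>> import cudf, dask_cudf
#   >>> ddf = dask_cudf.from_cudf(
#   ...     cudf.DataFrame({"a": [3, 1, 2, 5, 4], "b": list(range(5))}),
#   ...     npartitions=2,
#   ... )
#   >>> sort_values(ddf, by="a").compute()  # rows ordered by "a" across partitions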
def get_default_shuffle_method():
# Note that `dask.utils.get_default_shuffle_method`
# will return "p2p" by default when a distributed
# client is present. Dask-cudf supports "p2p", but
# will not use it by default (yet)
default = config.get("dataframe.shuffle.method", "tasks")
if default not in _SHUFFLE_SUPPORT:
default = "tasks"
return default
def _get_shuffle_type(shuffle):
# Utility to set the shuffle-kwarg default
# and to validate user-specified options
shuffle = shuffle or get_default_shuffle_method()
if shuffle not in _SHUFFLE_SUPPORT:
raise ValueError(
"Dask-cudf only supports the following shuffle "
f"methods: {_SHUFFLE_SUPPORT}. Got shuffle={shuffle}"
)
return shuffle
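# Sketch of how the two helpers above resolve the shuffle method (return
# values shown as comments are assumptions based on the logic above):
#   >>> _get_shuffle_type(None)    # -> default from dask config ("tasks" unless "p2p" is set)
#   >>> _get_shuffle_type("p2p")   # -> "p2p" (explicitly supported)
#   >>> _get_shuffle_type("disk")  # raises ValueError: not supported by dask_cudf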
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/VERSION
|
24.02.00
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/DASK_LICENSE.txt
|
This library contains modified code from the Dask library
(https://github.com/dask/dask). The original Dask license is below.
Copyright (c) 2014-2017, Continuum Analytics, Inc. and contributors
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
Neither the name of Continuum Analytics nor the names of any contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_accessor.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
from pandas.testing import assert_series_equal
from dask import dataframe as dd
from cudf import DataFrame, Series, date_range
from cudf.testing._utils import assert_eq, does_not_raise
import dask_cudf as dgd
#############################################################################
# Datetime Accessor #
#############################################################################
def data_dt_1():
return pd.date_range("20010101", "20020215", freq="400h")
def data_dt_2():
return np.random.randn(100)
dt_fields = ["year", "month", "day", "hour", "minute", "second"]
@pytest.mark.parametrize("data", [data_dt_2()])
def test_datetime_accessor_initialization(data):
pdsr = pd.Series(data.copy())
sr = Series(pdsr)
dsr = dgd.from_cudf(sr, npartitions=5)
with pytest.raises(AttributeError):
dsr.dt
@pytest.mark.parametrize("data", [data_dt_1()])
def test_series(data):
pdsr = pd.Series(data.copy())
sr = Series(pdsr)
dsr = dgd.from_cudf(sr, npartitions=5)
np.testing.assert_equal(np.array(pdsr), dsr.compute().values_host)
@pytest.mark.parametrize("data", [data_dt_1()])
@pytest.mark.parametrize("field", dt_fields)
def test_dt_series(data, field):
pdsr = pd.Series(data.copy())
sr = Series(pdsr)
dsr = dgd.from_cudf(sr, npartitions=5)
base = getattr(pdsr.dt, field)
test = getattr(dsr.dt, field).compute().to_pandas().astype("int64")
assert_series_equal(base, test)
@pytest.mark.parametrize("data", [data_dt_1()])
def test_dt_accessor(data):
df = DataFrame({"dt_col": data.copy()})
ddf = dgd.from_cudf(df, npartitions=5)
for i in ["year", "month", "day", "hour", "minute", "second", "weekday"]:
assert i in dir(ddf.dt_col.dt)
assert_series_equal(
getattr(ddf.dt_col.dt, i).compute().to_pandas(),
getattr(df.dt_col.dt, i).to_pandas(),
)
#############################################################################
# Categorical Accessor #
#############################################################################
def data_cat_1():
cat = pd.Categorical(["a", "a", "b", "c", "a"], categories=["a", "b", "c"])
return cat
def data_cat_2():
return pd.Series([1, 2, 3])
def data_cat_3():
cat1 = pd.Categorical(
["a", "a", "b", "c", "a"], categories=["a", "b", "c"], ordered=True
)
cat2 = pd.Categorical(
["a", "b", "a", "c", "b"], categories=["a", "b", "c"], ordered=True
)
return cat1, cat2
@pytest.mark.parametrize("data", [data_cat_1()])
def test_categorical_accessor_initialization1(data):
sr = Series(data.copy())
dsr = dgd.from_cudf(sr, npartitions=5)
dsr.cat
@pytest.mark.parametrize("data", [data_cat_2()])
def test_categorical_accessor_initialization2(data):
sr = Series(data.copy())
dsr = dgd.from_cudf(sr, npartitions=5)
with pytest.raises(AttributeError):
dsr.cat
@pytest.mark.parametrize("data", [data_cat_1()])
def test_categorical_basic(data):
cat = data.copy()
pdsr = pd.Series(cat)
sr = Series(cat)
dsr = dgd.from_cudf(sr, npartitions=2)
result = dsr.compute()
np.testing.assert_array_equal(cat.codes, result.cat.codes.values_host)
assert dsr.dtype.to_pandas() == pdsr.dtype
# Test attributes
assert pdsr.cat.ordered == dsr.cat.ordered
assert_eq(pdsr.cat.categories, dsr.cat.categories)
np.testing.assert_array_equal(
pdsr.cat.codes.values, result.cat.codes.values_host
)
string = str(result)
expect_str = """
0 a
1 a
2 b
3 c
4 a
"""
assert all(x == y for x, y in zip(string.split(), expect_str.split()))
df = DataFrame()
df["a"] = ["xyz", "abc", "def"] * 10
pdf = df.to_pandas()
cddf = dgd.from_cudf(df, 1)
cddf["b"] = cddf["a"].astype("category")
ddf = dd.from_pandas(pdf, 1)
ddf["b"] = ddf["a"].astype("category")
assert_eq(ddf._meta_nonempty["b"], cddf._meta_nonempty["b"])
with pytest.raises(NotImplementedError):
cddf["b"].cat.categories
with pytest.raises(NotImplementedError):
ddf["b"].cat.categories
cddf = cddf.categorize()
ddf = ddf.categorize()
assert_eq(ddf["b"].cat.categories, cddf["b"].cat.categories)
assert_eq(ddf["b"].cat.ordered, cddf["b"].cat.ordered)
@pytest.mark.parametrize("data", [data_cat_1()])
def test_categorical_compare_unordered(data):
cat = data.copy()
pdsr = pd.Series(cat)
sr = Series(cat)
dsr = dgd.from_cudf(sr, npartitions=2)
# Test equality
out = dsr == dsr
assert out.dtype == np.bool_
assert np.all(out.compute())
assert np.all(pdsr == pdsr)
# Test inequality
out = dsr != dsr
assert not np.any(out.compute())
assert not np.any(pdsr != pdsr)
assert not dsr.cat.ordered
assert not pdsr.cat.ordered
with pytest.raises(
(TypeError, ValueError),
match="Unordered Categoricals can only compare equality or not",
):
pdsr < pdsr
with pytest.raises(
(TypeError, ValueError),
match=(
"The only binary operations supported by unordered categorical "
"columns are equality and inequality."
),
):
dsr < dsr
@pytest.mark.parametrize("data", [data_cat_3()])
def test_categorical_compare_ordered(data):
cat1 = data[0]
cat2 = data[1]
pdsr1 = pd.Series(cat1)
pdsr2 = pd.Series(cat2)
sr1 = Series(cat1)
sr2 = Series(cat2)
dsr1 = dgd.from_cudf(sr1, npartitions=2)
dsr2 = dgd.from_cudf(sr2, npartitions=2)
# Test equality
out = dsr1 == dsr1
assert out.dtype == np.bool_
assert np.all(out.compute().values_host)
assert np.all(pdsr1 == pdsr1)
# Test inequality
out = dsr1 != dsr1
assert not np.any(out.compute().values_host)
assert not np.any(pdsr1 != pdsr1)
assert dsr1.cat.ordered
assert pdsr1.cat.ordered
# Test ordered operators
np.testing.assert_array_equal(
pdsr1 < pdsr2, (dsr1 < dsr2).compute().values_host
)
np.testing.assert_array_equal(
pdsr1 > pdsr2, (dsr1 > dsr2).compute().values_host
)
#############################################################################
# String Accessor #
#############################################################################
def data_str_1():
return pd.Series(["20190101", "20190102", "20190103"])
@pytest.mark.parametrize("data", [data_str_1()])
def test_string_slicing(data):
pdsr = pd.Series(data.copy())
sr = Series(pdsr)
dsr = dgd.from_cudf(sr, npartitions=2)
base = pdsr.str.slice(0, 4)
test = dsr.str.slice(0, 4).compute()
assert_eq(base, test)
def test_categorical_categories():
df = DataFrame(
{"a": ["a", "b", "c", "d", "e", "e", "a", "d"], "b": range(8)}
)
df["a"] = df["a"].astype("category")
pdf = df.to_pandas(nullable_pd_dtype=False)
ddf = dgd.from_cudf(df, 2)
dpdf = dd.from_pandas(pdf, 2)
dd.assert_eq(
ddf.a.cat.categories.to_series().to_pandas(nullable_pd_dtype=False),
dpdf.a.cat.categories.to_series(),
check_index=False,
)
def test_categorical_as_known():
df = dgd.from_cudf(DataFrame({"col_1": [0, 1, 2, 3]}), npartitions=2)
df["col_1"] = df["col_1"].astype("category")
actual = df["col_1"].cat.as_known()
pdf = dd.from_pandas(pd.DataFrame({"col_1": [0, 1, 2, 3]}), npartitions=2)
pdf["col_1"] = pdf["col_1"].astype("category")
expected = pdf["col_1"].cat.as_known()
dd.assert_eq(expected, actual)
def test_str_slice():
df = DataFrame({"a": ["abc,def,123", "xyz,hi,bye"]})
ddf = dgd.from_cudf(df, 1)
pdf = df.to_pandas()
dd.assert_eq(
pdf.a.str.split(",", expand=True, n=1),
ddf.a.str.split(",", expand=True, n=1),
)
dd.assert_eq(
pdf.a.str.split(",", expand=True, n=2),
ddf.a.str.split(",", expand=True, n=2),
)
#############################################################################
# List Accessor #
#############################################################################
def data_test_1():
return [list(range(100)) for _ in range(100)]
def data_test_2():
return [list(i for _ in range(i)) for i in range(500)]
def data_test_non_numeric():
return [list(chr(97 + i % 20) for _ in range(i)) for i in range(500)]
def data_test_nested():
return [
list(list(y for y in range(x % 5)) for x in range(i))
for i in range(40)
]
def data_test_sort():
return [[1, 2, 3, 1, 2, 5] for _ in range(20)]
@pytest.mark.parametrize(
"data",
[
[[]],
[[[]]],
[[0]],
[[0, 1]],
[[0, 1], [2, 3]],
[[[0, 1], [2]], [[3, 4]]],
[[None]],
[[[None]]],
[[None], None],
[[1, None], [1]],
[[1, None], None],
[[[1, None], None], None],
],
)
def test_create_list_series(data):
expect = pd.Series(data)
ds_got = dgd.from_cudf(Series(data), 4)
assert_eq(expect, ds_got.compute())
@pytest.mark.parametrize(
"data",
[data_test_1(), data_test_2(), data_test_non_numeric()],
)
def test_unique(data):
expect = Series(data).list.unique()
ds = dgd.from_cudf(Series(data), 5)
assert_eq(expect, ds.list.unique().compute())
@pytest.mark.parametrize(
"data",
[data_test_2(), data_test_non_numeric()],
)
def test_len(data):
expect = Series(data).list.len()
ds = dgd.from_cudf(Series(data), 5)
assert_eq(expect, ds.list.len().compute())
@pytest.mark.parametrize(
"data, search_key",
[(data_test_2(), 1)],
)
def test_contains(data, search_key):
expect = Series(data).list.contains(search_key)
ds = dgd.from_cudf(Series(data), 5)
assert_eq(expect, ds.list.contains(search_key).compute())
@pytest.mark.parametrize(
"data, index",
[
(data_test_1(), 1),
(data_test_2(), 2),
],
)
def test_get(data, index):
expect = Series(data).list.get(index)
ds = dgd.from_cudf(Series(data), 5)
assert_eq(expect, ds.list.get(index).compute())
@pytest.mark.parametrize(
"data",
[data_test_1(), data_test_2(), data_test_nested()],
)
def test_leaves(data):
expect = Series(data).list.leaves
ds = dgd.from_cudf(Series(data), 5)
got = ds.list.leaves.compute().reset_index(drop=True)
assert_eq(expect, got)
@pytest.mark.parametrize(
"data, list_indices, expectation",
[
(
data_test_1(),
[[0, 1] for _ in range(len(data_test_1()))],
does_not_raise(),
),
(data_test_2(), [[0]], pytest.raises(ValueError)),
],
)
def test_take(data, list_indices, expectation):
with expectation:
expect = Series(data).list.take(list_indices)
if expectation == does_not_raise():
ds = dgd.from_cudf(Series(data), 5)
assert_eq(expect, ds.list.take(list_indices).compute())
@pytest.mark.parametrize(
"data, ascending, na_position, ignore_index",
[
(data_test_sort(), True, "first", False),
(data_test_sort(), False, "last", True),
],
)
def test_sorting(data, ascending, na_position, ignore_index):
expect = Series(data).list.sort_values(
ascending=ascending, na_position=na_position, ignore_index=ignore_index
)
got = (
dgd.from_cudf(Series(data), 5)
.list.sort_values(
ascending=ascending,
na_position=na_position,
ignore_index=ignore_index,
)
.compute()
.reset_index(drop=True)
)
assert_eq(expect, got)
#############################################################################
# Struct Accessor #
#############################################################################
struct_accessor_data_params = [
[{"a": 5, "b": 10}, {"a": 3, "b": 7}, {"a": -3, "b": 11}],
[{"a": None, "b": 1}, {"a": None, "b": 0}, {"a": -3, "b": None}],
[{"a": 1, "b": 2}],
[{"a": 1, "b": 3, "c": 4}],
]
@pytest.mark.parametrize(
"data",
struct_accessor_data_params,
)
def test_create_struct_series(data):
expect = pd.Series(data)
ds_got = dgd.from_cudf(Series(data), 2)
assert_eq(expect, ds_got.compute())
@pytest.mark.parametrize(
"data",
struct_accessor_data_params,
)
def test_struct_field_str(data):
for test_key in ["a", "b"]:
expect = Series(data).struct.field(test_key)
ds_got = dgd.from_cudf(Series(data), 2).struct.field(test_key)
assert_eq(expect, ds_got.compute())
@pytest.mark.parametrize(
"data",
struct_accessor_data_params,
)
def test_struct_field_integer(data):
for test_key in [0, 1]:
expect = Series(data).struct.field(test_key)
ds_got = dgd.from_cudf(Series(data), 2).struct.field(test_key)
assert_eq(expect, ds_got.compute())
@pytest.mark.parametrize(
"data",
struct_accessor_data_params,
)
def test_dask_struct_field_Key_Error(data):
got = dgd.from_cudf(Series(data), 2)
with pytest.raises(KeyError):
got.struct.field("notakey").compute()
@pytest.mark.parametrize(
"data",
struct_accessor_data_params,
)
def test_dask_struct_field_Int_Error(data):
got = dgd.from_cudf(Series(data), 2)
with pytest.raises(IndexError):
got.struct.field(1000).compute()
@pytest.mark.parametrize(
"data",
[
[{}, {}, {}],
[{"a": 100, "b": "abc"}, {"a": 42, "b": "def"}, {"a": -87, "b": ""}],
[{"a": [1, 2, 3], "b": {"c": 101}}, {"a": [4, 5], "b": {"c": 102}}],
],
)
def test_struct_explode(data):
expect = Series(data).struct.explode()
got = dgd.from_cudf(Series(data), 2).struct.explode()
# Output index will not agree for >1 partitions
assert_eq(expect, got.compute().reset_index(drop=True))
def test_tz_localize():
data = Series(date_range("2000-04-01", "2000-04-03", freq="H"))
expect = data.dt.tz_localize(
"US/Eastern", ambiguous="NaT", nonexistent="NaT"
)
got = dgd.from_cudf(data, 2).dt.tz_localize(
"US/Eastern", ambiguous="NaT", nonexistent="NaT"
)
dd.assert_eq(expect, got)
expect = expect.dt.tz_localize(None)
got = got.dt.tz_localize(None)
dd.assert_eq(expect, got)
@pytest.mark.parametrize(
"data",
[
date_range("2000-04-01", "2000-04-03", freq="H").tz_localize("UTC"),
date_range("2000-04-01", "2000-04-03", freq="H").tz_localize(
"US/Eastern"
),
],
)
def test_tz_convert(data):
expect = Series(data).dt.tz_convert("US/Pacific")
got = dgd.from_cudf(Series(data), 2).dt.tz_convert("US/Pacific")
dd.assert_eq(expect, got)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_distributed.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import numba.cuda
import pytest
import dask
from dask import dataframe as dd
from dask.distributed import Client
from distributed.utils_test import cleanup, loop, loop_in_thread # noqa: F401
import cudf
from cudf.testing._utils import assert_eq
import dask_cudf
dask_cuda = pytest.importorskip("dask_cuda")
def more_than_two_gpus():
ngpus = len(numba.cuda.gpus)
return ngpus >= 2
@pytest.mark.parametrize("delayed", [True, False])
def test_basic(loop, delayed): # noqa: F811
with dask_cuda.LocalCUDACluster(loop=loop) as cluster:
with Client(cluster):
pdf = dask.datasets.timeseries(dtypes={"x": int}).reset_index()
gdf = pdf.map_partitions(cudf.DataFrame.from_pandas)
if delayed:
gdf = dd.from_delayed(gdf.to_delayed())
assert_eq(pdf.head(), gdf.head())
def test_merge():
# Repro Issue#3366
with dask_cuda.LocalCUDACluster(n_workers=1) as cluster:
with Client(cluster):
r1 = cudf.DataFrame()
r1["a1"] = range(4)
r1["a2"] = range(4, 8)
r1["a3"] = range(4)
r2 = cudf.DataFrame()
r2["b0"] = range(4)
r2["b1"] = range(4)
r2["b1"] = r2.b1.astype("str")
d1 = dask_cudf.from_cudf(r1, 2)
d2 = dask_cudf.from_cudf(r2, 2)
res = d1.merge(d2, left_on=["a3"], right_on=["b0"])
assert len(res) == 4
@pytest.mark.skipif(
not more_than_two_gpus(), reason="Machine does not have more than two GPUs"
)
def test_ucx_seriesgroupby():
pytest.importorskip("ucp")
# Repro Issue#3913
with dask_cuda.LocalCUDACluster(n_workers=2, protocol="ucx") as cluster:
with Client(cluster):
df = cudf.DataFrame({"a": [1, 2, 3, 4], "b": [5, 1, 2, 5]})
dask_df = dask_cudf.from_cudf(df, npartitions=2)
dask_df_g = dask_df.groupby(["a"]).b.sum().compute()
assert dask_df_g.name == "b"
def test_str_series_roundtrip():
with dask_cuda.LocalCUDACluster(n_workers=1) as cluster:
with Client(cluster):
expected = cudf.Series(["hi", "hello", None])
dask_series = dask_cudf.from_cudf(expected, npartitions=2)
actual = dask_series.compute()
assert_eq(actual, expected)
def test_p2p_shuffle():
# Check that we can use `shuffle="p2p"`
with dask_cuda.LocalCUDACluster(n_workers=1) as cluster:
with Client(cluster):
ddf = (
dask.datasets.timeseries(
start="2000-01-01",
end="2000-01-08",
dtypes={"x": int},
)
.reset_index(drop=True)
.to_backend("cudf")
)
dd.assert_eq(
ddf.sort_values("x", shuffle="p2p").compute(),
ddf.compute().sort_values("x"),
check_index=False,
)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_struct.py
|
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
import pytest
import cudf
import dask_cudf
@pytest.mark.parametrize(
"data, column",
[
(
{
"a": [{"a": [1, 2, 3, 4], "b": "Hello world"}, {}, {"a": []}],
"b": [1, 2, 3],
"c": ["rapids", "cudf", "hi"],
},
"a",
),
(
{"a": [{}, {}, {}], "b": [1, 2, 3], "c": ["rapids", "cudf", "hi"]},
"a",
),
(
{
"a": [{}, {}, {}],
"b": [{"a": 1}, {"b": 5}, {"c": "Hello"}],
"c": ["rapids", "cudf", "hi"],
},
"b",
),
(
{
"a": [{}, {}, {}, None],
"b": [{"a": 1}, {"b": 5}, {"c": "Hello"}, None],
"c": ["rapids", "cudf", "hi", "cool"],
},
"b",
),
(
{
"a": [{}, {}, {}, None, {}, {"a": 5}],
"b": [
{"a": 1},
{"b": 5},
{"c": "Hello"},
None,
{"a": 10, "b": 5},
{},
],
"c": ["rapids", "cudf", "hi", "cool", "hello", "world"],
},
"b",
),
],
)
def test_select_struct(data, column):
df = cudf.DataFrame(data)
ddf = dask_cudf.from_cudf(df, 2)
assert df[column].to_arrow() == ddf[column].compute().to_arrow()
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_binops.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
import operator
import numpy as np
import pandas as pd
import pytest
from dask import dataframe as dd
import cudf
from dask_cudf.tests.utils import _make_random_frame
def _make_empty_frame(npartitions=2):
df = pd.DataFrame({"x": [], "y": []})
gdf = cudf.DataFrame.from_pandas(df)
dgf = dd.from_pandas(gdf, npartitions=npartitions)
return dgf
def _make_random_frame_float(nelem, npartitions=2):
df = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=nelem),
"y": np.random.normal(size=nelem) + 1,
}
)
gdf = cudf.from_pandas(df)
dgf = dd.from_pandas(gdf, npartitions=npartitions)
return df, dgf
_binops = [
operator.add,
operator.sub,
operator.mul,
operator.truediv,
operator.floordiv,
operator.mod,
operator.pow,
operator.eq,
operator.ne,
operator.gt,
operator.ge,
operator.lt,
operator.le,
]
@pytest.mark.parametrize("binop", _binops)
def test_series_binops_integer(binop):
np.random.seed(0)
size = 1000
lhs_df, lhs_gdf = _make_random_frame(size)
rhs_df, rhs_gdf = _make_random_frame(size)
got = binop(lhs_gdf.x, rhs_gdf.y)
exp = binop(lhs_df.x, rhs_df.y)
dd.assert_eq(got, exp)
@pytest.mark.parametrize("binop", _binops)
def test_series_binops_float(binop):
np.random.seed(0)
size = 1000
lhs_df, lhs_gdf = _make_random_frame_float(size)
rhs_df, rhs_gdf = _make_random_frame_float(size)
got = binop(lhs_gdf.x, rhs_gdf.y)
exp = binop(lhs_df.x, rhs_df.y)
dd.assert_eq(got, exp)
@pytest.mark.parametrize("operator", _binops)
def test_df_series_bind_ops(operator):
np.random.seed(0)
size = 1000
lhs_df, lhs_gdf = _make_random_frame_float(size)
rhs = np.random.rand()
for col in lhs_gdf.columns:
got = getattr(lhs_gdf[col], operator.__name__)(rhs)
exp = getattr(lhs_df[col], operator.__name__)(rhs)
dd.assert_eq(got, exp)
if operator.__name__ not in ["eq", "ne", "lt", "gt", "le", "ge"]:
got = getattr(lhs_gdf, operator.__name__)(rhs)
exp = getattr(lhs_df, operator.__name__)(rhs)
dd.assert_eq(got, exp)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_reductions.py
|
# Copyright (c) 2021, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
from dask import dataframe as dd
import cudf
import dask_cudf as dgd
def _make_random_frame(nelem, npartitions=2):
df = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=nelem),
"y": np.random.normal(size=nelem) + 1,
}
)
gdf = cudf.DataFrame.from_pandas(df)
dgf = dgd.from_cudf(gdf, npartitions=npartitions)
return df, dgf
_reducers = ["sum", "count", "mean", "var", "std", "min", "max"]
def _get_reduce_fn(name):
def wrapped(series):
fn = getattr(series, name)
return fn()
return wrapped
@pytest.mark.parametrize("reducer", _reducers)
def test_series_reduce(reducer):
reducer = _get_reduce_fn(reducer)
np.random.seed(0)
size = 10
df, gdf = _make_random_frame(size)
got = reducer(gdf.x)
exp = reducer(df.x)
dd.assert_eq(got, exp)
@pytest.mark.parametrize(
"data",
[
cudf.datasets.randomdata(
nrows=10000,
dtypes={"a": "category", "b": int, "c": float, "d": int},
),
cudf.datasets.randomdata(
nrows=10000,
dtypes={"a": "category", "b": int, "c": float, "d": str},
),
cudf.datasets.randomdata(
nrows=10000, dtypes={"a": bool, "b": int, "c": float, "d": str}
),
],
)
@pytest.mark.parametrize(
"op", ["max", "min", "sum", "prod", "mean", "var", "std"]
)
def test_rowwise_reductions(data, op):
gddf = dgd.from_cudf(data, npartitions=10)
pddf = gddf.to_dask_dataframe()
if op in ("var", "std"):
expected = getattr(pddf, op)(axis=1, ddof=0)
got = getattr(gddf, op)(axis=1, ddof=0)
else:
expected = getattr(pddf, op)(axis=1)
        got = getattr(gddf, op)(axis=1)
dd.assert_eq(expected.compute(), got.compute(), check_exact=False)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_delayed_io.py
|
# Copyright (c) 2019-2022, NVIDIA CORPORATION.
"""
Test IO with dask.delayed API
"""
import numpy as np
import pytest
from pandas.testing import assert_frame_equal
from dask.delayed import delayed
import cudf as gd
import dask_cudf as dgd
@delayed
def load_data(nelem, ident):
df = gd.DataFrame()
df["x"] = np.arange(nelem)
df["ident"] = np.asarray([ident] * nelem)
return df
@delayed
def get_combined_column(df):
return df.x * df.ident
def test_dataframe_from_delayed():
delays = [load_data(10 * i, i) for i in range(1, 3)]
out = dgd.from_delayed(delays)
res = out.compute()
assert isinstance(res, gd.DataFrame)
expected = gd.concat([d.compute() for d in delays])
assert_frame_equal(res.to_pandas(), expected.to_pandas())
def test_series_from_delayed():
delays = [get_combined_column(load_data(10 * i, i)) for i in range(1, 3)]
out = dgd.from_delayed(delays)
res = out.compute()
assert isinstance(res, gd.Series)
expected = gd.concat([d.compute() for d in delays])
np.testing.assert_array_equal(res.to_pandas(), expected.to_pandas())
def test_dataframe_to_delayed():
nelem = 100
df = gd.DataFrame()
df["x"] = np.arange(nelem)
df["y"] = np.random.randint(nelem, size=nelem)
ddf = dgd.from_cudf(df, npartitions=5)
delays = ddf.to_delayed()
assert len(delays) == 5
# Concat the delayed partitions
got = gd.concat([d.compute() for d in delays])
assert_frame_equal(got.to_pandas(), df.to_pandas())
# Check individual partitions
divs = ddf.divisions
assert len(divs) == len(delays) + 1
for i, part in enumerate(delays):
s = divs[i]
        # The last partition's end boundary is open-ended (None)
e = None if i + 1 == len(delays) else divs[i + 1]
expect = df[s:e].to_pandas()
got = part.compute().to_pandas()
assert_frame_equal(got, expect)
def test_series_to_delayed():
nelem = 100
sr = gd.Series(np.random.randint(nelem, size=nelem))
dsr = dgd.from_cudf(sr, npartitions=5)
delays = dsr.to_delayed()
assert len(delays) == 5
# Concat the delayed partitions
got = gd.concat([d.compute() for d in delays])
assert isinstance(got, gd.Series)
np.testing.assert_array_equal(got.to_pandas(), sr.to_pandas())
# Check individual partitions
divs = dsr.divisions
assert len(divs) == len(delays) + 1
for i, part in enumerate(delays):
s = divs[i]
        # The last partition's end boundary is open-ended (None)
e = None if i + 1 == len(delays) else divs[i + 1]
expect = sr[s:e].to_pandas()
got = part.compute().to_pandas()
np.testing.assert_array_equal(got, expect)
def test_mixing_series_frame_error():
nelem = 20
df = gd.DataFrame()
df["x"] = np.arange(nelem)
df["y"] = np.random.randint(nelem, size=nelem)
ddf = dgd.from_cudf(df, npartitions=5)
delay_frame = ddf.to_delayed()
delay_series = ddf.x.to_delayed()
combined = dgd.from_delayed(delay_frame + delay_series)
with pytest.raises(ValueError) as raises:
combined.compute()
raises.match(r"^Metadata mismatch found in `from_delayed`.")
def test_frame_extra_columns_error():
nelem = 20
df = gd.DataFrame()
df["x"] = np.arange(nelem)
df["y"] = np.random.randint(nelem, size=nelem)
ddf1 = dgd.from_cudf(df, npartitions=5)
df["z"] = np.arange(nelem)
ddf2 = dgd.from_cudf(df, npartitions=5)
combined = dgd.from_delayed(ddf1.to_delayed() + ddf2.to_delayed())
with pytest.raises(ValueError) as raises:
combined.compute()
raises.match(r"^Metadata mismatch found in `from_delayed`.")
raises.match(r"z")
@pytest.mark.xfail(reason="")
def test_frame_dtype_error():
nelem = 20
df1 = gd.DataFrame()
df1["bad"] = np.arange(nelem)
df1["bad"] = np.arange(nelem, dtype=np.float64)
df2 = gd.DataFrame()
df2["bad"] = np.arange(nelem)
df2["bad"] = np.arange(nelem, dtype=np.float32)
ddf1 = dgd.from_cudf(df1, npartitions=5)
ddf2 = dgd.from_cudf(df2, npartitions=5)
combined = dgd.from_delayed(ddf1.to_delayed() + ddf2.to_delayed())
with pytest.raises(ValueError) as raises:
combined.compute()
raises.match(r"same type")
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_core.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
import random
import cupy as cp
import numpy as np
import pandas as pd
import pytest
from packaging import version
import dask
from dask import dataframe as dd
from dask.dataframe.core import make_meta as dask_make_meta, meta_nonempty
from dask.utils import M
import cudf
import dask_cudf as dgd
def test_from_dict_backend_dispatch():
# Test ddf.from_dict cudf-backend dispatch
np.random.seed(0)
data = {
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
expect = cudf.DataFrame(data)
with dask.config.set({"dataframe.backend": "cudf"}):
ddf = dd.from_dict(data, npartitions=2)
assert isinstance(ddf, dgd.DataFrame)
dd.assert_eq(expect, ddf)
def test_to_backend():
np.random.seed(0)
data = {
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
with dask.config.set({"dataframe.backend": "pandas"}):
ddf = dd.from_dict(data, npartitions=2)
assert isinstance(ddf._meta, pd.DataFrame)
gdf = ddf.to_backend("cudf")
assert isinstance(gdf, dgd.DataFrame)
dd.assert_eq(cudf.DataFrame(data), ddf)
assert isinstance(gdf.to_backend()._meta, pd.DataFrame)
def test_to_backend_kwargs():
data = {"x": [0, 2, np.nan, 3, 4, 5]}
with dask.config.set({"dataframe.backend": "pandas"}):
dser = dd.from_dict(data, npartitions=2)["x"]
assert isinstance(dser._meta, pd.Series)
# Using `nan_as_null=False` will result in a cudf-backed
    # Series with a NaN element (rather than <NA>)
gser_nan = dser.to_backend("cudf", nan_as_null=False)
assert isinstance(gser_nan, dgd.Series)
assert np.isnan(gser_nan.compute()).sum() == 1
# Using `nan_as_null=True` will result in a cudf-backed
    # Series with a <NA> element (rather than NaN)
gser_null = dser.to_backend("cudf", nan_as_null=True)
assert isinstance(gser_null, dgd.Series)
assert np.isnan(gser_null.compute()).sum() == 0
# Check `nullable` argument for `cudf.Series.to_pandas`
dser_null = gser_null.to_backend("pandas", nullable=False)
assert dser_null.compute().dtype == "float"
dser_null = gser_null.to_backend("pandas", nullable=True)
assert isinstance(dser_null.compute().dtype, pd.Float64Dtype)
# Check unsupported arguments
with pytest.raises(ValueError, match="pandas-to-cudf"):
dser.to_backend("cudf", bad_arg=True)
with pytest.raises(ValueError, match="cudf-to-cudf"):
gser_null.to_backend("cudf", bad_arg=True)
with pytest.raises(ValueError, match="cudf-to-pandas"):
gser_null.to_backend("pandas", bad_arg=True)
def test_from_cudf():
np.random.seed(0)
df = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
)
gdf = cudf.DataFrame.from_pandas(df)
# Test simple around to/from dask
ingested = dd.from_pandas(gdf, npartitions=2)
dd.assert_eq(ingested, df)
# Test conversion to dask.dataframe
ddf = ingested.to_dask_dataframe()
dd.assert_eq(ddf, df)
def test_from_cudf_multiindex_raises():
df = cudf.DataFrame({"x": list("abc"), "y": [1, 2, 3], "z": [1, 2, 3]})
with pytest.raises(NotImplementedError):
# dask_cudf does not support MultiIndex yet
dgd.from_cudf(df.set_index(["x", "y"]))
def test_from_cudf_with_generic_idx():
cdf = cudf.DataFrame(
{
"a": list(range(20)),
"b": list(reversed(range(20))),
"c": list(range(20)),
}
)
ddf = dgd.from_cudf(cdf, npartitions=2)
assert isinstance(ddf.index.compute(), cudf.RangeIndex)
dd.assert_eq(ddf.loc[1:2, ["a"]], cdf.loc[1:2, ["a"]])
def _fragmented_gdf(df, nsplit):
n = len(df)
# Split dataframe in *nsplit*
subdivsize = n // nsplit
starts = [i * subdivsize for i in range(nsplit)]
ends = starts[1:] + [None]
frags = [df[s:e] for s, e in zip(starts, ends)]
return frags
def test_query():
np.random.seed(0)
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=10), "y": np.random.normal(size=10)}
)
gdf = cudf.DataFrame.from_pandas(df)
expr = "x > 2"
dd.assert_eq(gdf.query(expr), df.query(expr))
queried = dd.from_pandas(gdf, npartitions=2).query(expr)
got = queried
expect = gdf.query(expr)
dd.assert_eq(got, expect)
def test_query_local_dict():
np.random.seed(0)
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=10), "y": np.random.normal(size=10)}
)
gdf = cudf.DataFrame.from_pandas(df)
ddf = dgd.from_cudf(gdf, npartitions=2)
val = 2
gdf_queried = gdf.query("x > @val")
ddf_queried = ddf.query("x > @val", local_dict={"val": val})
dd.assert_eq(gdf_queried, ddf_queried)
def test_head():
np.random.seed(0)
df = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=100),
"y": np.random.normal(size=100),
}
)
gdf = cudf.DataFrame.from_pandas(df)
dgf = dd.from_pandas(gdf, npartitions=2)
dd.assert_eq(dgf.head(), df.head())
def test_from_dask_dataframe():
np.random.seed(0)
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=20), "y": np.random.normal(size=20)}
)
ddf = dd.from_pandas(df, npartitions=2)
dgdf = ddf.map_partitions(cudf.from_pandas)
got = dgdf.compute().to_pandas()
expect = df
dd.assert_eq(got, expect)
@pytest.mark.parametrize("nelem", [10, 200, 1333])
@pytest.mark.parametrize("divisions", [None, "quantile"])
def test_set_index(nelem, divisions):
with dask.config.set(scheduler="single-threaded"):
np.random.seed(0)
        # Use a unique index range, as the sort may not be stable
x = np.arange(nelem)
np.random.shuffle(x)
df = pd.DataFrame(
{"x": x, "y": np.random.randint(0, nelem, size=nelem)}
)
ddf = dd.from_pandas(df, npartitions=2)
dgdf = ddf.map_partitions(cudf.from_pandas)
expect = ddf.set_index("x")
got = dgdf.set_index("x", divisions=divisions)
dd.assert_eq(expect, got, check_index=False, check_divisions=False)
@pytest.mark.parametrize("by", ["a", "b"])
@pytest.mark.parametrize("nelem", [10, 500])
@pytest.mark.parametrize("nparts", [1, 10])
def test_set_index_quantile(nelem, nparts, by):
df = cudf.DataFrame()
df["a"] = np.ascontiguousarray(np.arange(nelem)[::-1])
df["b"] = np.random.choice(cudf.datasets.names, size=nelem)
ddf = dd.from_pandas(df, npartitions=nparts)
got = ddf.set_index(by, divisions="quantile")
expect = df.sort_values(by=by).set_index(by)
dd.assert_eq(got, expect)
def assert_frame_equal_by_index_group(expect, got):
assert sorted(expect.columns) == sorted(got.columns)
assert sorted(set(got.index)) == sorted(set(expect.index))
    # Note: the set_index sort is not stable, so compare each index group separately
unique_values = sorted(set(got.index))
for iv in unique_values:
sr_expect = expect.loc[[iv]]
sr_got = got.loc[[iv]]
for k in expect.columns:
# Sort each column before we compare them
sorted_expect = sr_expect.sort_values(k)[k]
sorted_got = sr_got.sort_values(k)[k]
np.testing.assert_array_equal(sorted_expect, sorted_got)
@pytest.mark.parametrize("nelem", [10, 200, 1333])
def test_set_index_2(nelem):
with dask.config.set(scheduler="single-threaded"):
np.random.seed(0)
df = pd.DataFrame(
{
"x": 100 + np.random.randint(0, nelem // 2, size=nelem),
"y": np.random.normal(size=nelem),
}
)
expect = df.set_index("x").sort_index()
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=4)
res = dgf.set_index("x") # sort by default
got = res.compute().to_pandas()
assert_frame_equal_by_index_group(expect, got)
@pytest.mark.xfail(reason="dask's index name '__dask_cudf.index' is correct")
def test_set_index_w_series():
with dask.config.set(scheduler="single-threaded"):
nelem = 20
np.random.seed(0)
df = pd.DataFrame(
{
"x": 100 + np.random.randint(0, nelem // 2, size=nelem),
"y": np.random.normal(size=nelem),
}
)
expect = df.set_index(df.x).sort_index()
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=4)
res = dgf.set_index(dgf.x) # sort by default
got = res.compute().to_pandas()
dd.assert_eq(expect, got)
def test_set_index_sorted():
with dask.config.set(scheduler="single-threaded"):
df1 = pd.DataFrame({"val": [4, 3, 2, 1, 0], "id": [0, 1, 3, 5, 7]})
ddf1 = dd.from_pandas(df1, npartitions=2)
gdf1 = cudf.from_pandas(df1)
gddf1 = dgd.from_cudf(gdf1, npartitions=2)
expect = ddf1.set_index("id", sorted=True)
got = gddf1.set_index("id", sorted=True)
dd.assert_eq(expect, got)
with pytest.raises(ValueError):
# Cannot set `sorted=True` for non-sorted column
gddf1.set_index("val", sorted=True)
@pytest.mark.parametrize("nelem", [10, 200, 1333])
@pytest.mark.parametrize("index", [None, "myindex"])
def test_rearrange_by_divisions(nelem, index):
with dask.config.set(scheduler="single-threaded"):
np.random.seed(0)
df = pd.DataFrame(
{
"x": np.random.randint(0, 20, size=nelem),
"y": np.random.normal(size=nelem),
"z": np.random.choice(["dog", "cat", "bird"], nelem),
}
)
df["z"] = df["z"].astype("category")
ddf1 = dd.from_pandas(df, npartitions=4)
gdf1 = dgd.from_cudf(cudf.DataFrame.from_pandas(df), npartitions=4)
ddf1.index.name = index
gdf1.index.name = index
divisions = (0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
expect = dd.shuffle.rearrange_by_divisions(
ddf1, "x", divisions=divisions, shuffle="tasks"
)
result = dd.shuffle.rearrange_by_divisions(
gdf1, "x", divisions=divisions, shuffle="tasks"
)
dd.assert_eq(expect, result)
def test_assign():
np.random.seed(0)
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=20), "y": np.random.normal(size=20)}
)
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=2)
pdcol = pd.Series(np.arange(20) + 1000)
newcol = dd.from_pandas(cudf.Series(pdcol), npartitions=dgf.npartitions)
got = dgf.assign(z=newcol)
dd.assert_eq(got.loc[:, ["x", "y"]], df)
np.testing.assert_array_equal(got["z"].compute().values_host, pdcol)
@pytest.mark.parametrize("data_type", ["int8", "int16", "int32", "int64"])
def test_setitem_scalar_integer(data_type):
np.random.seed(0)
scalar = np.random.randint(0, 100, dtype=data_type)
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=20), "y": np.random.normal(size=20)}
)
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=2)
df["z"] = scalar
dgf["z"] = scalar
got = dgf.compute().to_pandas()
np.testing.assert_array_equal(got["z"], df["z"])
@pytest.mark.parametrize("data_type", ["float32", "float64"])
def test_setitem_scalar_float(data_type):
np.random.seed(0)
scalar = np.random.randn(1).astype(data_type)[0]
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=20), "y": np.random.normal(size=20)}
)
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=2)
df["z"] = scalar
dgf["z"] = scalar
got = dgf.compute().to_pandas()
np.testing.assert_array_equal(got["z"], df["z"])
def test_setitem_scalar_datetime():
np.random.seed(0)
scalar = np.int64(np.random.randint(0, 100)).astype("datetime64[ms]")
df = pd.DataFrame(
{"x": np.random.randint(0, 5, size=20), "y": np.random.normal(size=20)}
)
dgf = dd.from_pandas(cudf.DataFrame.from_pandas(df), npartitions=2)
df["z"] = scalar
dgf["z"] = scalar
got = dgf.compute().to_pandas()
np.testing.assert_array_equal(got["z"], df["z"])
@pytest.mark.parametrize(
"func",
[
lambda: pd.DataFrame(
{"A": np.random.rand(10), "B": np.random.rand(10)},
index=list("abcdefghij"),
),
lambda: pd.DataFrame(
{
"A": np.random.rand(10),
"B": list("a" * 10),
"C": pd.Series(
[str(20090101 + i) for i in range(10)],
dtype="datetime64[ns]",
),
},
index=list("abcdefghij"),
),
lambda: pd.Series(list("abcdefghijklmnop")),
lambda: pd.Series(
np.random.rand(10),
index=pd.Index(
[str(20090101 + i) for i in range(10)], dtype="datetime64[ns]"
),
),
],
)
def test_repr(func):
pdf = func()
gdf = cudf.from_pandas(pdf)
gddf = dd.from_pandas(gdf, npartitions=3, sort=False)
assert repr(gddf)
if hasattr(pdf, "_repr_html_"):
assert gddf._repr_html_()
@pytest.mark.skip(reason="datetime indexes not fully supported in cudf")
@pytest.mark.parametrize("start", ["1d", "5d", "1w", "12h"])
@pytest.mark.parametrize("stop", ["1d", "3d", "8h"])
def test_repartition_timeseries(start, stop):
# This test is currently absurdly slow. It should not be unskipped without
# slimming it down.
pdf = dask.datasets.timeseries(
"2000-01-01",
"2000-01-31",
freq="1s",
partition_freq=start,
dtypes={"x": int, "y": float},
)
gdf = pdf.map_partitions(cudf.DataFrame.from_pandas)
a = pdf.repartition(freq=stop)
b = gdf.repartition(freq=stop)
assert a.divisions == b.divisions
dd.utils.assert_eq(a, b)
@pytest.mark.parametrize("start", [1, 2, 5])
@pytest.mark.parametrize("stop", [1, 3, 7])
def test_repartition_simple_divisions(start, stop):
pdf = pd.DataFrame({"x": range(100)})
pdf = dd.from_pandas(pdf, npartitions=start)
gdf = pdf.map_partitions(cudf.DataFrame.from_pandas)
a = pdf.repartition(npartitions=stop)
b = gdf.repartition(npartitions=stop)
assert a.divisions == b.divisions
dd.assert_eq(a, b)
@pytest.mark.parametrize("npartitions", [2, 17, 20])
def test_repartition_hash_staged(npartitions):
by = ["b"]
datarange = 35
size = 100
gdf = cudf.DataFrame(
{
"a": np.arange(size, dtype="int64"),
"b": np.random.randint(datarange, size=size),
}
)
# WARNING: Specific npartitions-max_branch combination
# was specifically chosen to cover changes in #4676
npartitions_initial = 17
ddf = dgd.from_cudf(gdf, npartitions=npartitions_initial)
ddf_new = ddf.shuffle(
on=by, ignore_index=True, npartitions=npartitions, max_branch=4
)
# Make sure we are getting a dask_cudf dataframe
assert type(ddf_new) == type(ddf)
# Check that the length was preserved
assert len(ddf_new) == len(ddf)
# Check that the partitions have unique keys,
# and that the key values are preserved
expect_unique = gdf[by].drop_duplicates().sort_values(by)
got_unique = cudf.concat(
[
part[by].compute().drop_duplicates()
for part in ddf_new[by].partitions
],
ignore_index=True,
).sort_values(by)
dd.assert_eq(got_unique, expect_unique, check_index=False)
@pytest.mark.parametrize("by", [["b"], ["c"], ["d"], ["b", "c"]])
@pytest.mark.parametrize("npartitions", [3, 4, 5])
@pytest.mark.parametrize("max_branch", [3, 32])
def test_repartition_hash(by, npartitions, max_branch):
npartitions_i = 4
datarange = 26
size = 100
gdf = cudf.DataFrame(
{
"a": np.arange(0, stop=size, dtype="int64"),
"b": np.random.randint(datarange, size=size),
"c": np.random.choice(list("abcdefgh"), size=size),
"d": np.random.choice(np.arange(26), size=size),
}
)
gdf.d = gdf.d.astype("datetime64[ms]")
ddf = dgd.from_cudf(gdf, npartitions=npartitions_i)
ddf_new = ddf.shuffle(
on=by,
ignore_index=True,
npartitions=npartitions,
max_branch=max_branch,
)
# Check that the length was preserved
assert len(ddf_new) == len(ddf)
# Check that the partitions have unique keys,
# and that the key values are preserved
expect_unique = gdf[by].drop_duplicates().sort_values(by)
got_unique = cudf.concat(
[
part[by].compute().drop_duplicates()
for part in ddf_new[by].partitions
],
ignore_index=True,
).sort_values(by)
dd.assert_eq(got_unique, expect_unique, check_index=False)
def test_repartition_no_extra_row():
# see https://github.com/rapidsai/cudf/issues/11930
gdf = cudf.DataFrame({"a": [10, 20, 30], "b": [1, 2, 3]}).set_index("a")
ddf = dgd.from_cudf(gdf, npartitions=1)
ddf_new = ddf.repartition([0, 5, 10, 30], force=True)
dd.assert_eq(ddf, ddf_new)
dd.assert_eq(gdf, ddf_new)
@pytest.fixture
def pdf():
return pd.DataFrame(
{"x": [1, 2, 3, 4, 5, 6], "y": [11.0, 12.0, 13.0, 14.0, 15.0, 16.0]}
)
@pytest.fixture
def gdf(pdf):
return cudf.from_pandas(pdf)
@pytest.fixture
def ddf(pdf):
return dd.from_pandas(pdf, npartitions=3)
@pytest.fixture
def gddf(gdf):
return dd.from_pandas(gdf, npartitions=3)
@pytest.mark.parametrize(
"func",
[
lambda df: df + 1,
lambda df: df.index,
lambda df: df.x.sum(),
lambda df: df.x.astype(float),
lambda df: df.assign(z=df.x.astype("int")),
],
)
def test_unary_ops(func, gdf, gddf):
p = func(gdf)
g = func(gddf)
# Fixed in https://github.com/dask/dask/pull/4657
if isinstance(p, cudf.Index):
if version.parse(dask.__version__) < version.parse("1.1.6"):
pytest.skip(
"dask.dataframe assert_eq index check hardcoded to "
"pandas prior to 1.1.6 release"
)
dd.assert_eq(p, g, check_names=False)
@pytest.mark.parametrize("series", [True, False])
def test_concat(gdf, gddf, series):
if series:
gdf = gdf.x
gddf = gddf.x
a = (
cudf.concat([gdf, gdf + 1, gdf + 2])
.sort_values()
.reset_index(drop=True)
)
b = (
dd.concat([gddf, gddf + 1, gddf + 2], interleave_partitions=True)
.compute()
.sort_values()
.reset_index(drop=True)
)
else:
a = (
cudf.concat([gdf, gdf + 1, gdf + 2])
.sort_values("x")
.reset_index(drop=True)
)
b = (
dd.concat([gddf, gddf + 1, gddf + 2], interleave_partitions=True)
.compute()
.sort_values("x")
.reset_index(drop=True)
)
dd.assert_eq(a, b)
def test_boolean_index(gdf, gddf):
gdf2 = gdf[gdf.x > 2]
gddf2 = gddf[gddf.x > 2]
dd.assert_eq(gdf2, gddf2)
def test_drop(gdf, gddf):
gdf2 = gdf.drop(columns="x")
gddf2 = gddf.drop(columns="x").compute()
dd.assert_eq(gdf2, gddf2)
@pytest.mark.parametrize("deep", [True, False])
@pytest.mark.parametrize("index", [True, False])
def test_memory_usage(gdf, gddf, index, deep):
dd.assert_eq(
gdf.memory_usage(deep=deep, index=index),
gddf.memory_usage(deep=deep, index=index),
)
@pytest.mark.parametrize("index", [True, False])
def test_hash_object_dispatch(index):
obj = cudf.DataFrame(
{"x": ["a", "b", "c"], "y": [1, 2, 3], "z": [1, 1, 0]}, index=[2, 4, 6]
)
# DataFrame
result = dd.core.hash_object_dispatch(obj, index=index)
expected = dgd.backends.hash_object_cudf(obj, index=index)
assert isinstance(result, cudf.Series)
dd.assert_eq(result, expected)
# Series
result = dd.core.hash_object_dispatch(obj["x"], index=index)
expected = dgd.backends.hash_object_cudf(obj["x"], index=index)
assert isinstance(result, cudf.Series)
dd.assert_eq(result, expected)
# DataFrame with MultiIndex
obj_multi = obj.set_index(["x", "z"], drop=True)
result = dd.core.hash_object_dispatch(obj_multi, index=index)
expected = dgd.backends.hash_object_cudf(obj_multi, index=index)
assert isinstance(result, cudf.Series)
dd.assert_eq(result, expected)
@pytest.mark.parametrize(
"index",
[
"int8",
"int32",
"int64",
"float64",
"strings",
"cats",
"time_s",
"time_ms",
"time_ns",
["int32", "int64"],
["int8", "float64", "strings"],
["cats", "int8", "float64"],
["time_ms", "cats"],
],
)
def test_make_meta_backends(index):
dtypes = ["int8", "int32", "int64", "float64"]
df = cudf.DataFrame(
{dt: np.arange(start=0, stop=3, dtype=dt) for dt in dtypes}
)
df["strings"] = ["cat", "dog", "fish"]
df["cats"] = df["strings"].astype("category")
df["time_s"] = np.array(
["2018-10-07", "2018-10-08", "2018-10-09"], dtype="datetime64[s]"
)
df["time_ms"] = df["time_s"].astype("datetime64[ms]")
df["time_ns"] = df["time_s"].astype("datetime64[ns]")
df = df.set_index(index)
# Check "empty" metadata types
chk_meta = dask_make_meta(df)
dd.assert_eq(chk_meta.dtypes, df.dtypes)
# Check "non-empty" metadata types
chk_meta_nonempty = meta_nonempty(df)
dd.assert_eq(chk_meta.dtypes, chk_meta_nonempty.dtypes)
# Check dask code path if not MultiIndex
if not isinstance(df.index, cudf.MultiIndex):
ddf = dgd.from_cudf(df, npartitions=1)
# Check "empty" metadata types
dd.assert_eq(ddf._meta.dtypes, df.dtypes)
# Check "non-empty" metadata types
dd.assert_eq(ddf._meta.dtypes, ddf._meta_nonempty.dtypes)
@pytest.mark.parametrize(
"data",
[
pd.Series([], dtype="float64"),
pd.DataFrame({"abc": [], "xyz": []}),
pd.Series([1, 2, 10, 11]),
pd.DataFrame({"abc": [1, 2, 10, 11], "xyz": [100, 12, 120, 1]}),
],
)
def test_dataframe_series_replace(data):
pdf = data.copy()
gdf = cudf.from_pandas(pdf)
ddf = dgd.from_cudf(gdf, npartitions=5)
dd.assert_eq(ddf.replace(1, 2), pdf.replace(1, 2))
def test_dataframe_assign_col():
df = cudf.DataFrame(list(range(100)))
pdf = pd.DataFrame(list(range(100)))
ddf = dgd.from_cudf(df, npartitions=4)
ddf["fold"] = 0
ddf["fold"] = ddf["fold"].map_partitions(
lambda cudf_df: cp.random.randint(0, 4, len(cudf_df))
)
pddf = dd.from_pandas(pdf, npartitions=4)
pddf["fold"] = 0
pddf["fold"] = pddf["fold"].map_partitions(
lambda p_df: np.random.randint(0, 4, len(p_df))
)
dd.assert_eq(ddf[0], pddf[0])
dd.assert_eq(len(ddf["fold"]), len(pddf["fold"]))
def test_dataframe_set_index():
random.seed(0)
df = cudf.datasets.randomdata(26, dtypes={"a": float, "b": int})
df["str"] = list("abcdefghijklmnopqrstuvwxyz")
pdf = df.to_pandas()
ddf = dgd.from_cudf(df, npartitions=4)
ddf = ddf.set_index("str")
pddf = dd.from_pandas(pdf, npartitions=4)
pddf = pddf.set_index("str")
from cudf.testing._utils import assert_eq
assert_eq(ddf.compute(), pddf.compute())
def test_series_describe():
random.seed(0)
sr = cudf.datasets.randomdata(20)["x"]
psr = sr.to_pandas()
dsr = dgd.from_cudf(sr, npartitions=4)
pdsr = dd.from_pandas(psr, npartitions=4)
dd.assert_eq(
dsr.describe(),
pdsr.describe(),
check_less_precise=3,
)
def test_dataframe_describe():
random.seed(0)
df = cudf.datasets.randomdata(20)
pdf = df.to_pandas()
ddf = dgd.from_cudf(df, npartitions=4)
pddf = dd.from_pandas(pdf, npartitions=4)
dd.assert_eq(
ddf.describe(), pddf.describe(), check_exact=False, atol=0.0001
)
def test_zero_std_describe():
num = 84886781
df = cudf.DataFrame(
{
"x": np.full((20,), num, dtype=np.float64),
"y": np.full((20,), num, dtype=np.float64),
}
)
pdf = df.to_pandas()
ddf = dgd.from_cudf(df, npartitions=4)
pddf = dd.from_pandas(pdf, npartitions=4)
dd.assert_eq(ddf.describe(), pddf.describe(), check_less_precise=3)
def test_large_numbers_var():
num = 8488678001
df = cudf.DataFrame(
{
"x": np.arange(num, num + 1000, dtype=np.float64),
"y": np.arange(num, num + 1000, dtype=np.float64),
}
)
pdf = df.to_pandas()
ddf = dgd.from_cudf(df, npartitions=4)
pddf = dd.from_pandas(pdf, npartitions=4)
dd.assert_eq(ddf.var(), pddf.var(), check_less_precise=3)
def test_index_map_partitions():
# https://github.com/rapidsai/cudf/issues/6738
ddf = dd.from_pandas(pd.DataFrame({"a": range(10)}), npartitions=2)
mins_pd = ddf.index.map_partitions(M.min, meta=ddf.index).compute()
gddf = dgd.from_cudf(cudf.DataFrame({"a": range(10)}), npartitions=2)
mins_gd = gddf.index.map_partitions(M.min, meta=gddf.index).compute()
dd.assert_eq(mins_pd, mins_gd)
def test_merging_categorical_columns():
try:
from dask.dataframe.dispatch import ( # noqa: F401
union_categoricals_dispatch,
)
except ImportError:
pytest.skip(
"need a version of dask that has union_categoricals_dispatch"
)
df_1 = cudf.DataFrame(
{"id_1": [0, 1, 2, 3], "cat_col": ["a", "b", "f", "f"]}
)
ddf_1 = dgd.from_cudf(df_1, npartitions=2)
ddf_1 = dd.categorical.categorize(ddf_1, columns=["cat_col"])
df_2 = cudf.DataFrame(
{"id_2": [111, 112, 113], "cat_col": ["g", "h", "f"]}
)
ddf_2 = dgd.from_cudf(df_2, npartitions=2)
ddf_2 = dd.categorical.categorize(ddf_2, columns=["cat_col"])
expected = cudf.DataFrame(
{
"id_1": [2, 3],
"cat_col": cudf.Series(
["f", "f"],
dtype=cudf.CategoricalDtype(
categories=["a", "b", "f", "g", "h"], ordered=False
),
),
"id_2": [113, 113],
}
)
dd.assert_eq(ddf_1.merge(ddf_2), expected)
def test_correct_meta():
try:
from dask.dataframe.dispatch import make_meta_obj # noqa: F401
except ImportError:
pytest.skip("need make_meta_obj to be present")
# Need these local imports in this specific order.
# For context: https://github.com/rapidsai/cudf/issues/7946
import pandas as pd
from dask import dataframe as dd
import dask_cudf # noqa: F401
df = pd.DataFrame({"a": [3, 4], "b": [1, 2]})
ddf = dd.from_pandas(df, npartitions=1)
emb = ddf["a"].apply(pd.Series, meta={"c0": "int64", "c1": "int64"})
assert isinstance(emb, dd.DataFrame)
assert isinstance(emb._meta, pd.DataFrame)
def test_categorical_dtype_round_trip():
s = cudf.Series(4 * ["foo"], dtype="category")
assert s.dtype.ordered is False
ds = dgd.from_cudf(s, npartitions=2)
pds = dd.from_pandas(s.to_pandas(), npartitions=2)
dd.assert_eq(ds, pds)
assert ds.dtype.ordered is False
# Below validations are required, see:
# https://github.com/rapidsai/cudf/issues/11487#issuecomment-1208912383
actual = ds.compute()
expected = pds.compute()
assert actual.dtype.ordered == expected.dtype.ordered
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_sort.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import cupy as cp
import numpy as np
import pytest
import dask
from dask import dataframe as dd
import cudf
import dask_cudf
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("by", ["a", "b", "c", "d", ["a", "b"], ["c", "d"]])
@pytest.mark.parametrize("nelem", [10, 500])
@pytest.mark.parametrize("nparts", [1, 10])
def test_sort_values(nelem, nparts, by, ascending):
np.random.seed(0)
df = cudf.DataFrame()
df["a"] = np.ascontiguousarray(np.arange(nelem)[::-1])
df["b"] = np.arange(100, nelem + 100)
df["c"] = df["b"].astype("str")
df["d"] = df["a"].astype("category")
ddf = dd.from_pandas(df, npartitions=nparts)
with dask.config.set(scheduler="single-threaded"):
got = ddf.sort_values(by=by, ascending=ascending)
expect = df.sort_values(by=by, ascending=ascending)
dd.assert_eq(got, expect, check_index=False)
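def _example_sort_values_sketch():
    # A minimal, illustrative sketch of sorting a dask_cudf DataFrame by a
    # column, mirroring ``test_sort_values`` above. The column names and
    # values are assumptions chosen for brevity.
    import cudf
    import dask_cudf

    df = cudf.DataFrame({"a": [3, 1, 2], "b": ["z", "x", "y"]})
    ddf = dask_cudf.from_cudf(df, npartitions=2)
    # sort_values performs a global sort across all partitions.
    return ddf.sort_values(by="a", ascending=True).compute()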
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("by", ["a", "b", ["a", "b"]])
def test_sort_values_single_partition(by, ascending):
df = cudf.DataFrame()
nelem = 1000
df["a"] = np.ascontiguousarray(np.arange(nelem)[::-1])
df["b"] = np.arange(100, nelem + 100)
ddf = dd.from_pandas(df, npartitions=1)
with dask.config.set(scheduler="single-threaded"):
got = ddf.sort_values(by=by, ascending=ascending)
expect = df.sort_values(by=by, ascending=ascending)
dd.assert_eq(got, expect)
def test_sort_repartition():
ddf = dask_cudf.from_cudf(
cudf.DataFrame({"a": [0, 0, 1, 2, 3, 4, 2]}), npartitions=2
)
new_ddf = ddf.shuffle(on="a", ignore_index=True, npartitions=3)
dd.assert_eq(len(new_ddf), len(ddf))
@pytest.mark.parametrize("na_position", ["first", "last"])
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("by", ["a", "b", ["a", "b"]])
@pytest.mark.parametrize(
"data",
[
{
"a": [None] * 100 + list(range(100, 150)),
"b": list(range(50)) + [None] * 50 + list(range(50, 100)),
},
{"a": list(range(15)) + [None] * 5, "b": list(reversed(range(20)))},
],
)
def test_sort_values_with_nulls(data, by, ascending, na_position):
np.random.seed(0)
cp.random.seed(0)
df = cudf.DataFrame(data)
ddf = dd.from_pandas(df, npartitions=5)
with dask.config.set(scheduler="single-threaded"):
got = ddf.sort_values(
by=by, ascending=ascending, na_position=na_position
)
expect = df.sort_values(
by=by, ascending=ascending, na_position=na_position
)
# cudf ordering for nulls is non-deterministic
dd.assert_eq(got[by], expect[by], check_index=False)
@pytest.mark.parametrize("by", [["a", "b"], ["b", "a"]])
@pytest.mark.parametrize("nparts", [1, 10])
def test_sort_values_custom_function(by, nparts):
df = cudf.DataFrame({"a": [1, 2, 3] * 20, "b": [4, 5, 6, 7] * 15})
ddf = dd.from_pandas(df, npartitions=nparts)
def f(partition, by_columns, ascending, na_position, **kwargs):
return partition.sort_values(
by_columns, ascending=ascending, na_position=na_position
)
with dask.config.set(scheduler="single-threaded"):
got = ddf.sort_values(
by=by[0], sort_function=f, sort_function_kwargs={"by_columns": by}
)
expect = df.sort_values(by=by)
dd.assert_eq(got, expect, check_index=False)
@pytest.mark.parametrize("by", ["a", "b", ["a", "b"], ["b", "a"]])
def test_sort_values_empty_string(by):
df = cudf.DataFrame({"a": [3, 2, 1, 4], "b": [""] * 4})
ddf = dd.from_pandas(df, npartitions=2)
got = ddf.sort_values(by)
if "a" in by:
expect = df.sort_values(by)
assert dd.assert_eq(got, expect, check_index=False)
def test_disk_shuffle():
try:
from dask.dataframe.dispatch import partd_encode_dispatch # noqa: F401
except ImportError:
pytest.skip("need a version of dask that has partd_encode_dispatch")
df = cudf.DataFrame({"a": [1, 2, 3] * 20, "b": [4, 5, 6, 7] * 15})
ddf = dd.from_pandas(df, npartitions=4)
got = dd.DataFrame.shuffle(ddf, "a", shuffle="disk")
dd.assert_eq(got, df)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_onehot.py
|
# Copyright (c) 2019-2022, NVIDIA CORPORATION.
import pandas as pd
import pytest
from dask import dataframe as dd
import cudf
import dask_cudf
def test_get_dummies_cat():
df = pd.DataFrame({"C": [], "A": []})
ddf = dd.from_pandas(df, npartitions=10)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, npartitions=10)
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
df = pd.DataFrame({"A": ["a", "b", "c", "a", "z"], "C": [1, 2, 3, 4, 5]})
df["B"] = df["A"].astype("category")
df["A"] = df["A"].astype("category")
ddf = dd.from_pandas(df, npartitions=10)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, npartitions=10)
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
df = pd.DataFrame(
{
"A": ["a", "b", "c", "a", "z"],
"C": pd.Series([1, 2, 3, 4, 5], dtype="category"),
}
)
df["B"] = df["A"].astype("category")
df["A"] = df["A"].astype("category")
ddf = dd.from_pandas(df, npartitions=10)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, npartitions=10)
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
def test_get_dummies_non_cat():
df = pd.DataFrame({"C": pd.Series([1, 2, 3, 4, 5])})
ddf = dd.from_pandas(df, npartitions=10)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
with pytest.raises(NotImplementedError):
dd.get_dummies(ddf, columns=["C"]).compute()
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, npartitions=10)
with pytest.raises(NotImplementedError):
dd.get_dummies(gddf, columns=["C"]).compute()
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
def test_get_dummies_cat_index():
df = pd.DataFrame({"C": pd.CategoricalIndex([1, 2, 3, 4, 5])})
ddf = dd.from_pandas(df, npartitions=10)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, npartitions=10)
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
def test_get_dummies_large():
gdf = cudf.datasets.randomdata(
nrows=200000,
dtypes={
"C": int,
"first": "category",
"b": float,
"second": "category",
},
)
df = gdf.to_pandas()
ddf = dd.from_pandas(df, npartitions=25)
dd.assert_eq(dd.get_dummies(ddf).compute(), pd.get_dummies(df))
gddf = dask_cudf.from_cudf(gdf, npartitions=25)
dd.assert_eq(
dd.get_dummies(ddf).compute(),
dd.get_dummies(gddf).compute(),
check_dtype=False,
)
def test_get_dummies_categorical():
# https://github.com/rapidsai/cudf/issues/7111
gdf = cudf.DataFrame({"A": ["a", "b", "b"], "B": [1, 2, 3]})
pdf = gdf.to_pandas()
gddf = dask_cudf.from_cudf(gdf, npartitions=1)
gddf = gddf.categorize(columns=["B"])
pddf = dd.from_pandas(pdf, npartitions=1)
pddf = pddf.categorize(columns=["B"])
expect = dd.get_dummies(pddf, columns=["B"])
got = dd.get_dummies(gddf, columns=["B"])
dd.assert_eq(
expect,
got,
)
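def _example_get_dummies_sketch():
    # A minimal, illustrative sketch of one-hot encoding a column of a
    # dask_cudf DataFrame. ``categorize`` makes the categories known, which
    # ``dd.get_dummies`` needs for lazy collections. The column names are
    # assumptions chosen for brevity.
    import cudf
    import dask_cudf
    from dask import dataframe as dd

    gdf = cudf.DataFrame({"A": ["a", "b", "a"], "B": [1, 2, 3]})
    gddf = dask_cudf.from_cudf(gdf, npartitions=2)
    gddf = gddf.categorize(columns=["A"])
    return dd.get_dummies(gddf, columns=["A"]).compute()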
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_dispatch.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
from dask.base import tokenize
from dask.dataframe import assert_eq
from dask.dataframe.methods import is_categorical_dtype
import cudf
def test_is_categorical_dispatch():
assert is_categorical_dtype(pd.CategoricalDtype([1, 2, 3]))
assert is_categorical_dtype(cudf.CategoricalDtype([1, 2, 3]))
assert is_categorical_dtype(cudf.Series([1, 2, 3], dtype="category"))
assert is_categorical_dtype(pd.Series([1, 2, 3], dtype="category"))
assert is_categorical_dtype(pd.Index([1, 2, 3], dtype="category"))
assert is_categorical_dtype(cudf.Index([1, 2, 3], dtype="category"))
@pytest.mark.parametrize("preserve_index", [True, False])
def test_pyarrow_conversion_dispatch(preserve_index):
from dask.dataframe.dispatch import (
from_pyarrow_table_dispatch,
to_pyarrow_table_dispatch,
)
df1 = cudf.DataFrame(np.random.randn(10, 3), columns=list("abc"))
df2 = from_pyarrow_table_dispatch(
df1, to_pyarrow_table_dispatch(df1, preserve_index=preserve_index)
)
assert type(df1) == type(df2)
assert_eq(df1, df2)
# Check that preserve_index does not produce a RangeIndex
if preserve_index:
assert not isinstance(df2.index, cudf.RangeIndex)
@pytest.mark.parametrize("index", [None, [1, 2] * 5])
def test_deterministic_tokenize(index):
# Checks that `dask.base.normalize_token` correctly
# dispatches to the logic defined in `backends.py`
# (making `tokenize(<cudf-data>)` deterministic).
df = cudf.DataFrame(
{"A": range(10), "B": ["dog", "cat"] * 5, "C": range(10, 0, -1)},
index=index,
)
# Matching data should produce the same token
assert tokenize(df) == tokenize(df)
assert tokenize(df.A) == tokenize(df.A)
assert tokenize(df.index) == tokenize(df.index)
assert tokenize(df) == tokenize(df.copy(deep=True))
assert tokenize(df.A) == tokenize(df.A.copy(deep=True))
assert tokenize(df.index) == tokenize(df.index.copy(deep=True))
# Modifying a column element should change the token
original_token = tokenize(df)
original_token_a = tokenize(df.A)
df.A.iloc[2] = 10
assert original_token != tokenize(df)
assert original_token_a != tokenize(df.A)
# Modifying an index element should change the token
original_token = tokenize(df)
original_token_index = tokenize(df.index)
new_index = df.index.values
new_index[2] = 10
df.index = new_index
assert original_token != tokenize(df)
assert original_token_index != tokenize(df.index)
# Check MultiIndex case
df2 = df.set_index(["B", "C"], drop=False)
assert tokenize(df) != tokenize(df2)
assert tokenize(df2) == tokenize(df2)
@pytest.mark.parametrize("preserve_index", [True, False])
def test_pyarrow_schema_dispatch(preserve_index):
from dask.dataframe.dispatch import (
pyarrow_schema_dispatch,
to_pyarrow_table_dispatch,
)
df = cudf.DataFrame(np.random.randn(10, 3), columns=list("abc"))
df["d"] = cudf.Series(["cat", "dog"] * 5)
table = to_pyarrow_table_dispatch(df, preserve_index=preserve_index)
schema = pyarrow_schema_dispatch(df, preserve_index=preserve_index)
assert schema.equals(table.schema)
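def _example_pyarrow_dispatch_sketch():
    # A minimal, illustrative sketch of round-tripping a cudf DataFrame
    # through pyarrow via Dask's dispatch mechanism, as in
    # ``test_pyarrow_conversion_dispatch`` above. Assumes a Dask version that
    # exposes these dispatch functions.
    import cudf
    from dask.dataframe import assert_eq
    from dask.dataframe.dispatch import (
        from_pyarrow_table_dispatch,
        to_pyarrow_table_dispatch,
    )

    df = cudf.DataFrame({"a": [1, 2, 3], "b": [0.1, 0.2, 0.3]})
    table = to_pyarrow_table_dispatch(df, preserve_index=False)
    roundtrip = from_pyarrow_table_dispatch(df, table)
    assert_eq(df, roundtrip)
    return roundtrip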
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_groupby.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
import dask
from dask import dataframe as dd
from dask.utils_test import hlg_layer
import cudf
import dask_cudf
from dask_cudf.groupby import OPTIMIZED_AGGS, _aggs_optimized
def assert_cudf_groupby_layers(ddf):
for prefix in ("cudf-aggregate-chunk", "cudf-aggregate-agg"):
try:
hlg_layer(ddf.dask, prefix)
except KeyError:
raise AssertionError(
"Expected Dask dataframe to contain groupby layer with "
f"prefix {prefix}"
)
@pytest.fixture(params=["non_null", "null"])
def pdf(request):
np.random.seed(0)
# note that column name "x" is a substring of the groupby key;
# this gives us coverage for cudf#10829
pdf = pd.DataFrame(
{
"xx": np.random.randint(0, 5, size=10000),
"x": np.random.normal(size=10000),
"y": np.random.normal(size=10000),
}
)
# insert nulls into dataframe at random
if request.param == "null":
pdf = pdf.mask(np.random.choice([True, False], size=pdf.shape))
return pdf
@pytest.mark.parametrize("aggregation", OPTIMIZED_AGGS)
@pytest.mark.parametrize("series", [False, True])
def test_groupby_basic(series, aggregation, pdf):
gdf = cudf.DataFrame.from_pandas(pdf)
gdf_grouped = gdf.groupby("xx")
ddf_grouped = dask_cudf.from_cudf(gdf, npartitions=5).groupby("xx")
if series:
gdf_grouped = gdf_grouped.xx
ddf_grouped = ddf_grouped.xx
check_dtype = aggregation != "count"
expect = getattr(gdf_grouped, aggregation)()
actual = getattr(ddf_grouped, aggregation)()
assert_cudf_groupby_layers(actual)
dd.assert_eq(expect, actual, check_dtype=check_dtype)
expect = gdf_grouped.agg({"xx": aggregation})
actual = ddf_grouped.agg({"xx": aggregation})
assert_cudf_groupby_layers(actual)
dd.assert_eq(expect, actual, check_dtype=check_dtype)
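def _example_groupby_agg_sketch():
    # A minimal, illustrative sketch of a groupby aggregation on a dask_cudf
    # DataFrame, which takes the optimized cudf aggregation path checked by
    # ``assert_cudf_groupby_layers`` above. The column names and values are
    # assumptions chosen for brevity.
    import cudf
    import dask_cudf

    gdf = cudf.DataFrame(
        {"key": [1, 1, 2, 2, 2], "val": [1.0, 2.0, 3.0, 4.0, 5.0]}
    )
    gddf = dask_cudf.from_cudf(gdf, npartitions=2)
    # "mean" and "max" are both optimized aggregations.
    return gddf.groupby("key").agg({"val": ["mean", "max"]}).compute()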
# TODO: explore adding support with `.agg()`
@pytest.mark.parametrize("series", [True, False])
@pytest.mark.parametrize(
"aggregation",
[
"cumsum",
pytest.param(
"cumcount",
marks=pytest.mark.xfail(
reason="https://github.com/rapidsai/cudf/issues/13390"
),
),
],
)
def test_groupby_cumulative(aggregation, pdf, series):
gdf = cudf.DataFrame.from_pandas(pdf)
ddf = dask_cudf.from_cudf(gdf, npartitions=5)
gdf_grouped = gdf.groupby("xx")
ddf_grouped = ddf.groupby("xx")
if series:
gdf_grouped = gdf_grouped.xx
ddf_grouped = ddf_grouped.xx
a = getattr(gdf_grouped, aggregation)()
b = getattr(ddf_grouped, aggregation)()
dd.assert_eq(a, b)
@pytest.mark.parametrize("aggregation", OPTIMIZED_AGGS)
@pytest.mark.parametrize(
"func",
[
lambda df, agg: df.groupby("xx").agg({"y": agg}),
lambda df, agg: df.groupby("xx").y.agg({"y": agg}),
lambda df, agg: df.groupby("xx").agg([agg]),
lambda df, agg: df.groupby("xx").y.agg([agg]),
lambda df, agg: df.groupby("xx").agg(agg),
lambda df, agg: df.groupby("xx").y.agg(agg),
],
)
def test_groupby_agg(func, aggregation, pdf):
gdf = cudf.DataFrame.from_pandas(pdf)
ddf = dask_cudf.from_cudf(gdf, npartitions=5)
actual = func(ddf, aggregation)
expect = func(gdf, aggregation)
check_dtype = aggregation != "count"
assert_cudf_groupby_layers(actual)
# groupby.agg should add an explicit getitem layer
# to improve/enable column projection
assert hlg_layer(actual.dask, "getitem")
dd.assert_eq(expect, actual, check_names=False, check_dtype=check_dtype)
@pytest.mark.parametrize("split_out", [1, 3])
def test_groupby_agg_empty_partition(tmpdir, split_out):
# Write random and empty cudf DataFrames
# to two distinct files.
df = cudf.datasets.randomdata()
df.to_parquet(str(tmpdir.join("f0.parquet")))
cudf.DataFrame(
columns=["id", "x", "y"],
dtype={"id": "int64", "x": "float64", "y": "float64"},
).to_parquet(str(tmpdir.join("f1.parquet")))
# Read back our two partitions as a single
# dask_cudf DataFrame (one partition is now empty)
ddf = dask_cudf.read_parquet(str(tmpdir))
gb = ddf.groupby(["id"]).agg({"x": ["sum"]}, split_out=split_out)
expect = df.groupby(["id"]).agg({"x": ["sum"]}).sort_index()
dd.assert_eq(gb.compute().sort_index(), expect)
# reason: getattr in cudf
@pytest.mark.parametrize(
"func",
[
lambda df: df.groupby(["a", "b"]).x.sum(),
lambda df: df.groupby(["a", "b"]).sum(),
pytest.param(
lambda df: df.groupby(["a", "b"]).agg({"x", "sum"}),
marks=pytest.mark.xfail,
),
],
)
def test_groupby_multi_column(func):
pdf = pd.DataFrame(
{
"a": np.random.randint(0, 20, size=1000),
"b": np.random.randint(0, 5, size=1000),
"x": np.random.normal(size=1000),
}
)
gdf = cudf.DataFrame.from_pandas(pdf)
ddf = dask_cudf.from_cudf(gdf, npartitions=5)
a = func(gdf).to_pandas()
b = func(ddf).compute().to_pandas()
dd.assert_eq(a, b)
def test_reset_index_multiindex():
df = cudf.DataFrame()
df["id_1"] = ["a", "a", "b"]
df["id_2"] = [0, 0, 1]
df["val"] = [1, 2, 3]
df_lookup = cudf.DataFrame()
df_lookup["id_1"] = ["a", "b"]
df_lookup["metadata"] = [0, 1]
gddf = dask_cudf.from_cudf(df, npartitions=2)
gddf_lookup = dask_cudf.from_cudf(df_lookup, npartitions=2)
ddf = dd.from_pandas(df.to_pandas(), npartitions=2)
ddf_lookup = dd.from_pandas(df_lookup.to_pandas(), npartitions=2)
# Note: 'id_2' has wrong type (object) until after compute
dd.assert_eq(
gddf.groupby(by=["id_1", "id_2"])
.val.sum()
.reset_index()
.merge(gddf_lookup, on="id_1")
.compute(),
ddf.groupby(by=["id_1", "id_2"])
.val.sum()
.reset_index()
.merge(ddf_lookup, on="id_1"),
)
@pytest.mark.parametrize("split_out", [1, 2, 3])
@pytest.mark.parametrize(
"column", ["c", "d", "e", ["b", "c"], ["b", "d"], ["b", "e"]]
)
def test_groupby_split_out(split_out, column):
df = pd.DataFrame(
{
"a": np.arange(8),
"b": [1, 0, 0, 2, 1, 1, 2, 0],
"c": [0, 1] * 4,
"d": ["dog", "cat", "cat", "dog", "dog", "dog", "cat", "bird"],
}
).fillna(0)
df["e"] = df["d"].astype("category")
gdf = cudf.from_pandas(df)
ddf = dd.from_pandas(df, npartitions=3)
gddf = dask_cudf.from_cudf(gdf, npartitions=3)
ddf_result = (
ddf.groupby(column)
.a.mean(split_out=split_out)
.compute()
.sort_values()
.dropna()
)
gddf_result = (
gddf.groupby(column)
.a.mean(split_out=split_out)
.compute()
.sort_values()
)
dd.assert_eq(gddf_result, ddf_result, check_index=False)
@pytest.mark.parametrize("dropna", [False, True, None])
@pytest.mark.parametrize(
"by", ["a", "b", "c", "d", ["a", "b"], ["a", "c"], ["a", "d"]]
)
def test_groupby_dropna_cudf(dropna, by):
# NOTE: This test is borrowed from upstream dask
# (dask/dask/dataframe/tests/test_groupby.py)
df = cudf.DataFrame(
{
"a": [1, 2, 3, 4, None, None, 7, 8],
"b": [1, None, 1, 3, None, 3, 1, 3],
"c": ["a", "b", None, None, "e", "f", "g", "h"],
"e": [4, 5, 6, 3, 2, 1, 0, 0],
}
)
df["b"] = df["b"].astype("datetime64[ns]")
df["d"] = df["c"].astype("category")
ddf = dask_cudf.from_cudf(df, npartitions=3)
if dropna is None:
dask_result = ddf.groupby(by).e.sum()
cudf_result = df.groupby(by).e.sum()
else:
dask_result = ddf.groupby(by, dropna=dropna).e.sum()
cudf_result = df.groupby(by, dropna=dropna).e.sum()
if by in ["c", "d"]:
# Lose string/category index name in cudf...
dask_result = dask_result.compute()
dask_result.index.name = cudf_result.index.name
dd.assert_eq(dask_result, cudf_result)
@pytest.mark.parametrize(
"dropna,by",
[
(False, "a"),
(False, "b"),
(False, "c"),
pytest.param(
False,
"d",
marks=pytest.mark.xfail(
reason="dropna=False is broken in Dask CPU for groupbys on "
"categorical columns"
),
),
pytest.param(
False,
["a", "b"],
marks=pytest.mark.xfail(
reason="https://github.com/dask/dask/issues/8817"
),
),
pytest.param(
False,
["a", "c"],
marks=pytest.mark.xfail(
reason="https://github.com/dask/dask/issues/8817"
),
),
pytest.param(
False,
["a", "d"],
marks=pytest.mark.xfail(
reason="multi-col groupbys on categorical columns are broken "
"in Dask CPU"
),
),
(True, "a"),
(True, "b"),
(True, "c"),
(True, "d"),
(True, ["a", "b"]),
(True, ["a", "c"]),
pytest.param(
True,
["a", "d"],
marks=pytest.mark.xfail(
reason="multi-col groupbys on categorical columns are broken "
"in Dask CPU"
),
),
(None, "a"),
(None, "b"),
(None, "c"),
(None, "d"),
(None, ["a", "b"]),
(None, ["a", "c"]),
pytest.param(
None,
["a", "d"],
marks=pytest.mark.xfail(
reason="multi-col groupbys on categorical columns are broken "
"in Dask CPU"
),
),
],
)
def test_groupby_dropna_dask(dropna, by):
# NOTE: This test is borrowed from upstream dask
# (dask/dask/dataframe/tests/test_groupby.py)
df = pd.DataFrame(
{
"a": [1, 2, 3, 4, None, None, 7, 8],
"b": [1, None, 1, 3, None, 3, 1, 3],
"c": ["a", "b", None, None, "e", "f", "g", "h"],
"e": [4, 5, 6, 3, 2, 1, 0, 0],
}
)
df["b"] = df["b"].astype("datetime64[ns]")
df["d"] = df["c"].astype("category")
gdf = cudf.from_pandas(df)
ddf = dd.from_pandas(df, npartitions=3)
gddf = dask_cudf.from_cudf(gdf, npartitions=3)
if dropna is None:
dask_cudf_result = gddf.groupby(by).e.sum()
dask_result = ddf.groupby(by).e.sum()
else:
dask_cudf_result = gddf.groupby(by, dropna=dropna).e.sum()
dask_result = ddf.groupby(by, dropna=dropna).e.sum()
dd.assert_eq(dask_cudf_result, dask_result)
@pytest.mark.parametrize("myindex", [[1, 2] * 4, ["s1", "s2"] * 4])
def test_groupby_string_index_name(myindex):
# GH-Issue #3420
data = {"index": myindex, "data": [0, 1] * 4}
df = cudf.DataFrame(data=data)
ddf = dask_cudf.from_cudf(df, npartitions=2)
gdf = ddf.groupby("index").agg({"data": "count"})
assert gdf.compute().index.name == gdf.index.name
@pytest.mark.parametrize(
"agg_func",
[
lambda gb: gb.agg({"c": ["count"]}, split_out=2),
lambda gb: gb.agg({"c": "count"}, split_out=2),
lambda gb: gb.agg({"c": ["count", "sum"]}, split_out=2),
lambda gb: gb.count(split_out=2),
lambda gb: gb.c.count(split_out=2),
],
)
def test_groupby_split_out_multiindex(agg_func):
df = cudf.DataFrame(
{
"a": np.random.randint(0, 10, 100),
"b": np.random.randint(0, 5, 100),
"c": np.random.random(100),
}
)
ddf = dask_cudf.from_cudf(df, 5)
pddf = dd.from_pandas(df.to_pandas(), 5)
gr = agg_func(ddf.groupby(["a", "b"]))
pr = agg_func(pddf.groupby(["a", "b"]))
dd.assert_eq(gr.compute(), pr.compute())
@pytest.mark.parametrize("npartitions", [1, 2])
def test_groupby_multiindex_reset_index(npartitions):
df = cudf.DataFrame(
{"a": [1, 1, 2, 3, 4], "b": [5, 2, 1, 2, 5], "c": [1, 2, 2, 3, 5]}
)
ddf = dask_cudf.from_cudf(df, npartitions=npartitions)
pddf = dd.from_pandas(df.to_pandas(), npartitions=npartitions)
gr = ddf.groupby(["a", "c"]).agg({"b": ["count"]}).reset_index()
pr = pddf.groupby(["a", "c"]).agg({"b": ["count"]}).reset_index()
# CuDF uses "int32" for count. Pandas uses "int64"
gr_out = gr.compute().sort_values(by=["a", "c"]).reset_index(drop=True)
gr_out[("b", "count")] = gr_out[("b", "count")].astype("int64")
dd.assert_eq(
gr_out,
pr.compute().sort_values(by=["a", "c"]).reset_index(drop=True),
)
@pytest.mark.parametrize(
"groupby_keys", [["a"], ["a", "b"], ["a", "b", "dd"], ["a", "dd", "b"]]
)
@pytest.mark.parametrize(
"agg_func",
[
lambda gb: gb.agg({"c": ["count"]}),
lambda gb: gb.agg({"c": "count"}),
lambda gb: gb.agg({"c": ["count", "sum"]}),
lambda gb: gb.count(),
lambda gb: gb.c.count(),
],
)
def test_groupby_reset_index_multiindex(groupby_keys, agg_func):
df = cudf.DataFrame(
{
"a": np.random.randint(0, 10, 10),
"b": np.random.randint(0, 5, 10),
"c": np.random.randint(0, 5, 10),
"dd": np.random.randint(0, 5, 10),
}
)
ddf = dask_cudf.from_cudf(df, 5)
pddf = dd.from_pandas(df.to_pandas(), 5)
gr = agg_func(ddf.groupby(groupby_keys)).reset_index()
pr = agg_func(pddf.groupby(groupby_keys)).reset_index()
gf = gr.compute().sort_values(groupby_keys).reset_index(drop=True)
pf = pr.compute().sort_values(groupby_keys).reset_index(drop=True)
dd.assert_eq(gf, pf)
def test_groupby_reset_index_drop_True():
df = cudf.DataFrame(
{"a": np.random.randint(0, 10, 10), "b": np.random.randint(0, 5, 10)}
)
ddf = dask_cudf.from_cudf(df, 5)
pddf = dd.from_pandas(df.to_pandas(), 5)
gr = ddf.groupby(["a"]).agg({"b": ["count"]}).reset_index(drop=True)
pr = pddf.groupby(["a"]).agg({"b": ["count"]}).reset_index(drop=True)
gf = gr.compute().sort_values(by=["b"]).reset_index(drop=True)
pf = pr.compute().sort_values(by=[("b", "count")]).reset_index(drop=True)
dd.assert_eq(gf, pf)
def test_groupby_mean_sort_false():
df = cudf.datasets.randomdata(nrows=150, dtypes={"a": int, "b": int})
ddf = dask_cudf.from_cudf(df, 1)
pddf = dd.from_pandas(df.to_pandas(), 1)
gr = ddf.groupby(["a"]).agg({"b": "mean"})
pr = pddf.groupby(["a"]).agg({"b": "mean"})
assert pr.index.name == gr.index.name
assert pr.head(0).index.name == gr.head(0).index.name
gf = gr.compute().sort_values(by=["b"]).reset_index(drop=True)
pf = pr.compute().sort_values(by=["b"]).reset_index(drop=True)
dd.assert_eq(gf, pf)
def test_groupby_reset_index_dtype():
# Make sure int8 dtype is properly preserved
# Through various cudf/dask_cudf ops
#
# Note: GitHub Issue#4090 reproducer
df = cudf.DataFrame()
df["a"] = np.arange(10, dtype="int8")
df["b"] = np.arange(10, dtype="int8")
df = dask_cudf.from_cudf(df, 1)
a = df.groupby("a").agg({"b": ["count"]})
assert a.index.dtype == "int8"
assert a.reset_index().dtypes[0] == "int8"
def test_groupby_reset_index_names():
df = cudf.datasets.randomdata(
nrows=10, dtypes={"a": str, "b": int, "c": int}
)
pdf = df.to_pandas()
gddf = dask_cudf.from_cudf(df, 2)
pddf = dd.from_pandas(pdf, 2)
g_res = gddf.groupby("a", sort=True).sum()
p_res = pddf.groupby("a", sort=True).sum()
got = g_res.reset_index().compute().sort_values(["a", "b", "c"])
expect = p_res.reset_index().compute().sort_values(["a", "b", "c"])
dd.assert_eq(got, expect)
def test_groupby_reset_index_string_name():
df = cudf.DataFrame({"value": range(5), "key": ["a", "a", "b", "a", "c"]})
pdf = df.to_pandas()
gddf = dask_cudf.from_cudf(df, npartitions=1)
pddf = dd.from_pandas(pdf, npartitions=1)
g_res = (
gddf.groupby(["key"]).agg({"value": "mean"}).reset_index(drop=False)
)
p_res = (
pddf.groupby(["key"]).agg({"value": "mean"}).reset_index(drop=False)
)
got = g_res.compute().sort_values(["key", "value"]).reset_index(drop=True)
expect = (
p_res.compute().sort_values(["key", "value"]).reset_index(drop=True)
)
dd.assert_eq(got, expect)
assert len(g_res) == len(p_res)
def test_groupby_categorical_key():
# See https://github.com/rapidsai/cudf/issues/4608
df = dask.datasets.timeseries()
gddf = dask_cudf.from_dask_dataframe(df)
gddf["name"] = gddf["name"].astype("category")
ddf = gddf.to_dask_dataframe()
got = gddf.groupby("name", sort=True).agg(
{"x": ["mean", "max"], "y": ["mean", "count"]}
)
# Use `compute` to avoid upstream issue for now
# (See: https://github.com/dask/dask/issues/9515)
expect = (
ddf.compute()
.groupby("name", sort=True)
.agg({"x": ["mean", "max"], "y": ["mean", "count"]})
)
dd.assert_eq(expect, got)
@pytest.mark.parametrize("as_index", [True, False])
@pytest.mark.parametrize("split_out", ["use_dask_default", 1, 2])
@pytest.mark.parametrize("split_every", [False, 4])
@pytest.mark.parametrize("npartitions", [1, 10])
def test_groupby_agg_params(npartitions, split_every, split_out, as_index):
df = cudf.datasets.randomdata(
nrows=150,
dtypes={"name": str, "a": int, "b": int, "c": float},
)
df["a"] = [0, 1, 2] * 50
ddf = dask_cudf.from_cudf(df, npartitions)
pddf = dd.from_pandas(df.to_pandas(), npartitions)
agg_dict = {
"a": "sum",
"b": ["min", "max", "mean"],
"c": ["mean", "std", "var"],
}
split_kwargs = {"split_every": split_every, "split_out": split_out}
if split_out == "use_dask_default":
split_kwargs.pop("split_out")
# Check `sort=True` behavior
if split_out == 1:
gf = (
ddf.groupby(["name", "a"], sort=True, as_index=as_index)
.aggregate(
agg_dict,
**split_kwargs,
)
.compute()
)
if as_index:
# Groupby columns became the index.
# Sorting the index should not change anything.
dd.assert_eq(gf.index, gf.sort_index().index)
else:
# Groupby columns did NOT become the index.
# Sorting by these columns should not change anything.
sort_cols = [("name", ""), ("a", "")]
dd.assert_eq(
gf[sort_cols],
gf[sort_cols].sort_values(sort_cols),
check_index=False,
)
# Full check (`sort=False`)
gr = ddf.groupby(["name", "a"], sort=False, as_index=as_index).aggregate(
agg_dict,
**split_kwargs,
)
pr = pddf.groupby(["name", "a"], sort=False).agg(
agg_dict,
**split_kwargs,
)
# Test `as_index` argument
if as_index:
# Groupby columns should NOT be in columns
assert ("name", "") not in gr.columns and ("a", "") not in gr.columns
else:
# Groupby columns SHOULD be in columns
assert ("name", "") in gr.columns and ("a", "") in gr.columns
# Check `split_out` argument
assert gr.npartitions == (
1 if split_out == "use_dask_default" else split_out
)
# Compute for easier multiindex handling
gf = gr.compute()
pf = pr.compute()
# Reset index and sort by groupby columns
if as_index:
gf = gf.reset_index(drop=False)
sort_cols = [("name", ""), ("a", ""), ("c", "mean")]
gf = gf.sort_values(sort_cols).reset_index(drop=True)
pf = (
pf.reset_index(drop=False)
.sort_values(sort_cols)
.reset_index(drop=True)
)
dd.assert_eq(gf, pf)
@pytest.mark.parametrize(
"aggregations", [(sum, "sum"), (max, "max"), (min, "min")]
)
def test_groupby_agg_redirect(aggregations):
pdf = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
)
gdf = cudf.DataFrame.from_pandas(pdf)
ddf = dask_cudf.from_cudf(gdf, npartitions=5)
a = ddf.groupby("x").agg({"x": aggregations[0]}).compute()
b = ddf.groupby("x").agg({"x": aggregations[1]}).compute()
dd.assert_eq(a, b)
@pytest.mark.parametrize(
"arg,supported",
[
("sum", True),
(["sum"], True),
({"a": "sum"}, True),
({"a": ["sum"]}, True),
("not_supported", False),
(["not_supported"], False),
({"a": "not_supported"}, False),
({"a": ["not_supported"]}, False),
],
)
def test_is_supported(arg, supported):
assert _aggs_optimized(arg, OPTIMIZED_AGGS) is supported
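def _example_aggs_optimized_sketch():
    # A minimal, illustrative sketch of checking whether an aggregation spec
    # is fully covered by dask_cudf's optimized groupby aggregations, as
    # exercised by ``test_is_supported`` above. The spec below is an
    # assumption chosen for brevity.
    assert _aggs_optimized({"a": ["sum"], "b": "count"}, OPTIMIZED_AGGS)
    assert not _aggs_optimized({"a": "not_a_real_agg"}, OPTIMIZED_AGGS)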
def test_groupby_unique_lists():
df = pd.DataFrame({"a": [0, 0, 0, 1, 1, 1], "b": [10, 10, 10, 7, 8, 9]})
ddf = dd.from_pandas(df, 2)
gdf = cudf.from_pandas(df)
gddf = dask_cudf.from_cudf(gdf, 2)
dd.assert_eq(
ddf.groupby("a").b.unique().compute(),
gddf.groupby("a").b.unique().compute(),
)
dd.assert_eq(
gdf.groupby("a").b.unique(),
gddf.groupby("a").b.unique().compute(),
)
@pytest.mark.parametrize(
"data",
[
{"a": [], "b": []},
{"a": [2, 1, 2, 1, 1, 3], "b": [None, 1, 2, None, 2, None]},
{"a": [None], "b": [None]},
{"a": [2, 1, 1], "b": [None, 1, 0], "c": [None, 0, 1]},
],
)
@pytest.mark.parametrize("agg", ["first", "last"])
def test_groupby_first_last(data, agg):
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame.from_pandas(pdf)
ddf = dd.from_pandas(pdf, npartitions=2)
gddf = dask_cudf.from_cudf(gdf, npartitions=2)
dd.assert_eq(
ddf.groupby("a").agg(agg),
gddf.groupby("a").agg(agg),
)
dd.assert_eq(
getattr(ddf.groupby("a"), agg)(),
getattr(gddf.groupby("a"), agg)(),
)
dd.assert_eq(gdf.groupby("a").agg(agg), gddf.groupby("a").agg(agg))
dd.assert_eq(
getattr(gdf.groupby("a"), agg)(),
getattr(gddf.groupby("a"), agg)(),
)
def test_groupby_with_list_of_series():
df = cudf.DataFrame({"a": [1, 2, 3, 4, 5]})
gdf = dask_cudf.from_cudf(df, npartitions=2)
gs = cudf.Series([1, 1, 1, 2, 2], name="id")
ggs = dask_cudf.from_cudf(gs, npartitions=2)
ddf = dd.from_pandas(df.to_pandas(), npartitions=2)
pgs = dd.from_pandas(gs.to_pandas(), npartitions=2)
dd.assert_eq(
gdf.groupby([ggs]).agg(["sum"]), ddf.groupby([pgs]).agg(["sum"])
)
@pytest.mark.parametrize(
"func",
[
lambda df: df.groupby("x").agg({"y": {"foo": "sum"}}),
lambda df: df.groupby("x").agg({"y": {"foo": "sum", "bar": "count"}}),
],
)
def test_groupby_nested_dict(func):
pdf = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
)
ddf = dd.from_pandas(pdf, npartitions=5)
c_ddf = ddf.map_partitions(cudf.from_pandas)
a = func(ddf).compute()
b = func(c_ddf).compute().to_pandas()
a.index.name = None
a.name = None
b.index.name = None
b.name = None
dd.assert_eq(a, b)
@pytest.mark.parametrize(
"func",
[
lambda df: df.groupby(["x", "y"]).min(),
pytest.param(
lambda df: df.groupby(["x", "y"]).agg("min"),
marks=pytest.mark.skip(
reason="https://github.com/dask/dask/issues/9093"
),
),
lambda df: df.groupby(["x", "y"]).y.min(),
lambda df: df.groupby(["x", "y"]).y.agg("min"),
],
)
def test_groupby_all_columns(func):
pdf = pd.DataFrame(
{
"x": np.random.randint(0, 5, size=10000),
"y": np.random.normal(size=10000),
}
)
ddf = dd.from_pandas(pdf, npartitions=5)
gddf = ddf.map_partitions(cudf.from_pandas)
expect = func(ddf)
actual = func(gddf)
dd.assert_eq(expect, actual)
def test_groupby_shuffle():
df = cudf.datasets.randomdata(
nrows=640, dtypes={"a": str, "b": int, "c": int}
)
gddf = dask_cudf.from_cudf(df, 8)
spec = {"b": "mean", "c": "max"}
expect = df.groupby("a", sort=True).agg(spec)
# Sorted aggregation, single-partition output
# (sort=True, split_out=1)
got = gddf.groupby("a", sort=True).agg(spec, shuffle=True, split_out=1)
dd.assert_eq(expect, got)
# Sorted aggregation, multi-partition output
# (sort=True, split_out=2)
got = gddf.groupby("a", sort=True).agg(spec, shuffle=True, split_out=2)
dd.assert_eq(expect, got)
# Un-sorted aggregation, single-partition output
# (sort=False, split_out=1)
got = gddf.groupby("a", sort=False).agg(spec, shuffle=True, split_out=1)
dd.assert_eq(expect.sort_index(), got.compute().sort_index())
# Un-sorted aggregation, multi-partition output
# (sort=False, split_out=2)
# NOTE: `shuffle=True` should be the default
got = gddf.groupby("a", sort=False).agg(spec, split_out=2)
dd.assert_eq(expect, got.compute().sort_index())
# Sorted aggregation fails with split_out>1 when shuffle is False
# (sort=True, split_out=2, shuffle=False)
with pytest.raises(ValueError):
gddf.groupby("a", sort=True).agg(spec, shuffle=False, split_out=2)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_applymap.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
import pytest
from pandas import NA
from dask import dataframe as dd
from dask_cudf.tests.utils import _make_random_frame
@pytest.mark.parametrize(
"func",
[
lambda x: x + 1,
lambda x: x - 0.5,
lambda x: 2 if x is NA else 2 + (x + 1) / 4.1,
lambda x: 42,
],
)
@pytest.mark.parametrize("has_na", [True, False])
def test_applymap_basic(func, has_na):
size = 2000
pdf, dgdf = _make_random_frame(size, include_na=False)
dpdf = dd.from_pandas(pdf, npartitions=dgdf.npartitions)
expect = dpdf.applymap(func)
got = dgdf.applymap(func)
dd.assert_eq(expect, got, check_dtype=False)
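def _example_applymap_sketch():
    # A minimal, illustrative sketch of applying an elementwise function to
    # every value of a dask_cudf DataFrame, mirroring ``test_applymap_basic``
    # above. The column name and values are assumptions chosen for brevity.
    import cudf
    import dask_cudf

    ddf = dask_cudf.from_cudf(
        cudf.DataFrame({"x": [1.0, 2.0, 3.0]}), npartitions=2
    )
    # The function is applied per element within each cudf partition.
    return ddf.applymap(lambda v: v + 1).compute()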
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/test_join.py
|
# Copyright (c) 2019-2022, NVIDIA CORPORATION.
from functools import partial
import numpy as np
import pandas as pd
import pytest
from dask import dataframe as dd
import cudf
import dask_cudf as dgd
param_nrows = [5, 10, 50, 100]
@pytest.mark.parametrize("left_nrows", param_nrows)
@pytest.mark.parametrize("right_nrows", param_nrows)
@pytest.mark.parametrize("left_nkeys", [4, 5])
@pytest.mark.parametrize("right_nkeys", [4, 5])
def test_join_inner(left_nrows, right_nrows, left_nkeys, right_nkeys):
chunksize = 50
np.random.seed(0)
# cuDF
left = cudf.DataFrame(
{
"x": np.random.randint(0, left_nkeys, size=left_nrows),
"a": np.arange(left_nrows),
}
)
right = cudf.DataFrame(
{
"x": np.random.randint(0, right_nkeys, size=right_nrows),
"a": 1000 * np.arange(right_nrows),
}
)
expect = left.set_index("x").join(
right.set_index("x"), how="inner", sort=True, lsuffix="l", rsuffix="r"
)
expect = expect.to_pandas()
# dask_cudf
left = dgd.from_cudf(left, chunksize=chunksize)
right = dgd.from_cudf(right, chunksize=chunksize)
joined = left.set_index("x").join(
right.set_index("x"), how="inner", lsuffix="l", rsuffix="r"
)
got = joined.compute().to_pandas()
if len(got.columns):
got = got.sort_values(list(got.columns))
expect = expect.sort_values(list(expect.columns))
# Check index
np.testing.assert_array_equal(expect.index.values, got.index.values)
# Check rows in each group
expect_rows = {}
got_rows = {}
def gather(df, grows):
grows[df["x"].values[0]] = (set(df.al), set(df.ar))
expect.reset_index().groupby("x").apply(partial(gather, grows=expect_rows))
got.reset_index().groupby("x").apply(partial(gather, grows=got_rows))
assert got_rows == expect_rows
@pytest.mark.parametrize("left_nrows", param_nrows)
@pytest.mark.parametrize("right_nrows", param_nrows)
@pytest.mark.parametrize("left_nkeys", [4, 5])
@pytest.mark.parametrize("right_nkeys", [4, 5])
@pytest.mark.parametrize("how", ["left", "right"])
def test_join_left(left_nrows, right_nrows, left_nkeys, right_nkeys, how):
chunksize = 50
np.random.seed(0)
# cuDF
left = cudf.DataFrame(
{
"x": np.random.randint(0, left_nkeys, size=left_nrows),
"a": np.arange(left_nrows, dtype=np.float64),
}
)
right = cudf.DataFrame(
{
"x": np.random.randint(0, right_nkeys, size=right_nrows),
"a": 1000 * np.arange(right_nrows, dtype=np.float64),
}
)
expect = left.set_index("x").join(
right.set_index("x"), how=how, sort=True, lsuffix="l", rsuffix="r"
)
expect = expect.to_pandas()
# dask_cudf
left = dgd.from_cudf(left, chunksize=chunksize)
right = dgd.from_cudf(right, chunksize=chunksize)
joined = left.set_index("x").join(
right.set_index("x"), how=how, lsuffix="l", rsuffix="r"
)
got = joined.compute().to_pandas()
if len(got.columns):
got = got.sort_values(list(got.columns))
expect = expect.sort_values(list(expect.columns))
# Check index
np.testing.assert_array_equal(expect.index.values, got.index.values)
# Check rows in each group
expect_rows = {}
got_rows = {}
def gather(df, grows):
cola = np.sort(np.asarray(df.al))
colb = np.sort(np.asarray(df.ar))
grows[df["x"].values[0]] = (cola, colb)
expect.reset_index().groupby("x").apply(partial(gather, grows=expect_rows))
got.reset_index().groupby("x").apply(partial(gather, grows=got_rows))
for k in expect_rows:
np.testing.assert_array_equal(expect_rows[k][0], got_rows[k][0])
np.testing.assert_array_equal(expect_rows[k][1], got_rows[k][1])
@pytest.mark.parametrize("left_nrows", param_nrows)
@pytest.mark.parametrize("right_nrows", param_nrows)
@pytest.mark.parametrize("left_nkeys", [4, 5])
@pytest.mark.parametrize("right_nkeys", [4, 5])
def test_merge_left(
left_nrows, right_nrows, left_nkeys, right_nkeys, how="left"
):
chunksize = 3
np.random.seed(0)
# cuDF
left = cudf.DataFrame(
{
"x": np.random.randint(0, left_nkeys, size=left_nrows),
"y": np.random.randint(0, left_nkeys, size=left_nrows),
"a": np.arange(left_nrows, dtype=np.float64),
}
)
right = cudf.DataFrame(
{
"x": np.random.randint(0, right_nkeys, size=right_nrows),
"y": np.random.randint(0, right_nkeys, size=right_nrows),
"a": 1000 * np.arange(right_nrows, dtype=np.float64),
}
)
expect = left.merge(right, on=("x", "y"), how=how)
def normalize(df):
return (
df.to_pandas()
.sort_values(["x", "y", "a_x", "a_y"])
.reset_index(drop=True)
)
# dask_cudf
left = dgd.from_cudf(left, chunksize=chunksize)
right = dgd.from_cudf(right, chunksize=chunksize)
result = left.merge(right, on=("x", "y"), how=how).compute(
scheduler="single-threaded"
)
dd.assert_eq(normalize(expect), normalize(result))
@pytest.mark.parametrize("left_nrows", [2, 5])
@pytest.mark.parametrize("right_nrows", [5, 10])
@pytest.mark.parametrize("left_nkeys", [4])
@pytest.mark.parametrize("right_nkeys", [4])
def test_merge_1col_left(
left_nrows, right_nrows, left_nkeys, right_nkeys, how="left"
):
chunksize = 3
np.random.seed(0)
# cuDF
left = cudf.DataFrame(
{
"x": np.random.randint(0, left_nkeys, size=left_nrows),
"a": np.arange(left_nrows, dtype=np.float64),
}
)
right = cudf.DataFrame(
{
"x": np.random.randint(0, right_nkeys, size=right_nrows),
"a": 1000 * np.arange(right_nrows, dtype=np.float64),
}
)
expect = left.merge(right, on=["x"], how=how)
expect = (
expect.to_pandas()
.sort_values(["x", "a_x", "a_y"])
.reset_index(drop=True)
)
# dask_cudf
left = dgd.from_cudf(left, chunksize=chunksize)
right = dgd.from_cudf(right, chunksize=chunksize)
joined = left.merge(right, on=["x"], how=how)
got = joined.compute().to_pandas()
got = got.sort_values(["x", "a_x", "a_y"]).reset_index(drop=True)
dd.assert_eq(expect, got)
def test_merge_should_fail():
# Expected failure cases described in #2694
df1 = cudf.DataFrame()
df1["a"] = [1, 2, 3, 4, 5, 6] * 2
df1["b"] = np.random.randint(0, 12, 12)
df2 = cudf.DataFrame()
df2["a"] = [7, 2, 3, 8, 5, 9] * 2
df2["c"] = np.random.randint(0, 12, 12)
left = dgd.from_cudf(df1, 1).groupby("a").b.min().to_frame()
right = dgd.from_cudf(df2, 1).groupby("a").c.min().to_frame()
with pytest.raises(KeyError):
left.merge(right, how="left", on=["nonCol"])
with pytest.raises(KeyError):
left.merge(right, how="left", on=["b"])
with pytest.raises(KeyError):
left.merge(right, how="left", on=["c"])
# Same column names
df2["b"] = np.random.randint(0, 12, 12)
right = dgd.from_cudf(df2, 1).groupby("a").b.min().to_frame()
with pytest.raises(KeyError):
left.merge(right, how="left", on="NonCol")
@pytest.mark.parametrize("how", ["inner", "left"])
def test_indexed_join(how):
p_left = pd.DataFrame({"x": np.arange(10)}, index=np.arange(10) * 2)
p_right = pd.DataFrame({"y": 1}, index=np.arange(15))
g_left = cudf.from_pandas(p_left)
g_right = cudf.from_pandas(p_right)
dg_left = dd.from_pandas(g_left, npartitions=4)
dg_right = dd.from_pandas(g_right, npartitions=5)
d = g_left.merge(g_right, left_index=True, right_index=True, how=how)
dg = dg_left.merge(dg_right, left_index=True, right_index=True, how=how)
# occasionally order is not correct (possibly due to hashing in the merge)
d = d.sort_values("x") # index is preserved
dg = dg.sort_values(
"x"
)  # index is reset -- sort_values will slow the test down
dd.assert_eq(d, dg, check_index=False)
@pytest.mark.parametrize("how", ["left", "inner"])
def test_how(how):
left = cudf.DataFrame(
{"x": [1, 2, 3, 4, None], "y": [1.0, 2.0, 3.0, 4.0, 0.0]}
)
right = cudf.DataFrame({"x": [2, 3, None, 2], "y": [20, 30, 0, 20]})
dleft = dd.from_pandas(left, npartitions=2)
dright = dd.from_pandas(right, npartitions=3)
expected = left.merge(right, how=how, on="x")
result = dleft.merge(dright, how=how, on="x")
dd.assert_eq(
result.compute().to_pandas().sort_values("x"),
expected.to_pandas().sort_values("x"),
check_index=False,
)
@pytest.mark.parametrize("daskify", [True, False])
def test_single_dataframe_merge(daskify):
right = cudf.DataFrame({"x": [1, 2, 1, 2], "y": [1, 2, 3, 4]})
left = cudf.DataFrame({"x": np.arange(100) % 10, "z": np.arange(100)})
dleft = dd.from_pandas(left, npartitions=10)
if daskify:
dright = dd.from_pandas(right, npartitions=1)
else:
dright = right
expected = left.merge(right, how="inner")
result = dd.merge(dleft, dright, how="inner")
assert len(result.dask) < 25
dd.assert_eq(
result.compute().to_pandas().sort_values(["z", "y"]),
expected.to_pandas().sort_values(["z", "y"]),
check_index=False,
)
@pytest.mark.parametrize("how", ["inner", "left"])
@pytest.mark.parametrize("on", ["id_1", ["id_1"], ["id_1", "id_2"]])
def test_on(how, on):
left = cudf.DataFrame(
{"id_1": [1, 2, 3, 4, 5], "id_2": [1.0, 2.0, 3.0, 4.0, 0.0]}
)
right = cudf.DataFrame(
{"id_1": [2, 3, None, 2], "id_2": [2.0, 3.0, 4.0, 20]}
)
dleft = dd.from_pandas(left, npartitions=2)
dright = dd.from_pandas(right, npartitions=3)
expected = left.merge(right, how=how, on=on)
result = dleft.merge(dright, how=how, on=on)
dd.assert_eq(
result.compute().to_pandas().sort_values(on),
expected.to_pandas().sort_values(on),
check_index=False,
)
def test_single_partition():
left = cudf.DataFrame({"x": range(200), "y": range(200)})
right = cudf.DataFrame({"x": range(100), "z": range(100)})
dleft = dd.from_pandas(left, npartitions=1)
dright = dd.from_pandas(right, npartitions=10)
m = dleft.merge(dright, how="inner")
assert len(m.dask) < len(dleft.dask) + len(dright.dask) * 3
dleft = dd.from_pandas(left, npartitions=5)
m2 = dleft.merge(right, how="inner")
assert len(m2.dask) < len(dleft.dask) * 3
assert len(m2) == 100
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/tests/utils.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import dask.dataframe as dd
import cudf
def _make_random_frame(nelem, npartitions=2, include_na=False):
df = pd.DataFrame(
{"x": np.random.random(size=nelem), "y": np.random.random(size=nelem)}
)
if include_na:
df["x"][::2] = pd.NA
gdf = cudf.DataFrame.from_pandas(df)
dgf = dd.from_pandas(gdf, npartitions=npartitions)
return df, dgf
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/orc.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from io import BufferedWriter, IOBase
from fsspec.core import get_fs_token_paths
from fsspec.utils import stringify_path
from pyarrow import orc as orc
from dask import dataframe as dd
from dask.base import tokenize
from dask.dataframe.io.utils import _get_pyarrow_dtypes
import cudf
def _read_orc_stripe(fs, path, stripe, columns, kwargs=None):
"""Pull out specific columns from specific stripe"""
if kwargs is None:
kwargs = {}
with fs.open(path, "rb") as f:
df_stripe = cudf.read_orc(
f, stripes=[stripe], columns=columns, **kwargs
)
return df_stripe
def read_orc(path, columns=None, filters=None, storage_options=None, **kwargs):
"""Read ORC files into a :class:`.DataFrame`.
Note that this function is mostly borrowed from upstream Dask.
Parameters
----------
path : str or list[str]
Location of file(s), which can be a full URL with protocol specifier,
and may include glob character if a single string.
columns : None or list[str]
Columns to load. If None, loads all.
filters : None or list of tuple or list of lists of tuples
If not None, specifies a filter predicate used to filter out
row groups using statistics stored for each row group as
Parquet metadata. Row groups that do not match the given
filter predicate are not read. The predicate is expressed in
`disjunctive normal form (DNF)
<https://en.wikipedia.org/wiki/Disjunctive_normal_form>`__
like ``[[('x', '=', 0), ...], ...]``. DNF allows arbitrary
boolean logical combinations of single column predicates. The
innermost tuples each describe a single column predicate. The
list of inner predicates is interpreted as a conjunction
(AND), forming a more selective and multiple column predicate.
Finally, the outermost list combines these filters as a
disjunction (OR). Predicates may also be passed as a list of
tuples. This form is interpreted as a single conjunction. To
express OR in predicates, one must use the (preferred)
notation of list of lists of tuples.
storage_options : None or dict
Further parameters to pass to the bytes backend.
See Also
--------
dask.dataframe.read_orc
Returns
-------
dask_cudf.DataFrame
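Examples
--------
An illustrative, hedged sketch; the glob below is a placeholder path,
not shipped sample data.

>>> import dask_cudf
>>> ddf = dask_cudf.read_orc("myfiles.*.orc", columns=["a", "b"])
>>> df = ddf.compute()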
"""
storage_options = storage_options or {}
fs, fs_token, paths = get_fs_token_paths(
path, mode="rb", storage_options=storage_options
)
schema = None
nstripes_per_file = []
for path in paths:
with fs.open(path, "rb") as f:
o = orc.ORCFile(f)
if schema is None:
schema = o.schema
elif schema != o.schema:
raise ValueError(
"Incompatible schemas while parsing ORC files"
)
nstripes_per_file.append(o.nstripes)
schema = _get_pyarrow_dtypes(schema, categories=None)
if columns is not None:
ex = set(columns) - set(schema)
if ex:
raise ValueError(
f"Requested columns ({ex}) not in schema ({set(schema)})"
)
else:
columns = list(schema)
with fs.open(paths[0], "rb") as f:
meta = cudf.read_orc(
f,
stripes=[0] if nstripes_per_file[0] else None,
columns=columns,
**kwargs,
)
name = "read-orc-" + tokenize(fs_token, path, columns, **kwargs)
dsk = {}
N = 0
for path, n in zip(paths, nstripes_per_file):
for stripe in (
range(n)
if filters is None
else cudf.io.orc._filter_stripes(filters, path)
):
dsk[(name, N)] = (
_read_orc_stripe,
fs,
path,
stripe,
columns,
kwargs,
)
N += 1
divisions = [None] * (len(dsk) + 1)
return dd.core.new_dd_object(dsk, name, meta, divisions)
def write_orc_partition(df, path, fs, filename, compression="snappy"):
full_path = fs.sep.join([path, filename])
with fs.open(full_path, mode="wb") as out_file:
if not isinstance(out_file, IOBase):
out_file = BufferedWriter(out_file)
cudf.io.to_orc(df, out_file, compression=compression)
return full_path
def to_orc(
df,
path,
write_index=True,
storage_options=None,
compression="snappy",
compute=True,
**kwargs,
):
"""
Write a :class:`.DataFrame` to ORC file(s) (one file per partition).
Parameters
----------
df : DataFrame
path : str or pathlib.Path
Destination directory for data. Prepend with protocol like ``s3://``
or ``hdfs://`` for remote data.
write_index : boolean, optional
Whether or not to write the index. Defaults to True.
storage_options : None or dict
Further parameters to pass to the bytes backend.
compression : string or dict, optional
compute : bool, optional
If True (default) then the result is computed immediately. If
False then a :class:`~dask.delayed.Delayed` object is returned
for future computation.
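Examples
--------
An illustrative, hedged sketch; the paths are placeholders, and the
``DataFrame.to_orc`` method is assumed to forward to this function.

>>> import dask_cudf
>>> ddf = dask_cudf.read_orc("myfiles.*.orc")
>>> ddf.to_orc("output_directory", compression="snappy")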
"""
from dask import compute as dask_compute, delayed
# TODO: Use upstream dask implementation once available
# (see: Dask Issue#5596)
if hasattr(path, "name"):
path = stringify_path(path)
fs, _, _ = get_fs_token_paths(
path, mode="wb", storage_options=storage_options
)
# Trim any protocol information from the path before forwarding
path = fs._strip_protocol(path)
if write_index:
df = df.reset_index()
else:
# Not writing index - might as well drop it
df = df.reset_index(drop=True)
fs.mkdirs(path, exist_ok=True)
# Use df.npartitions to define the file-name list
filenames = ["part.%i.orc" % i for i in range(df.npartitions)]
# write parts
dwrite = delayed(write_orc_partition)
parts = [
dwrite(d, path, fs, filename, compression=compression)
for d, filename in zip(df.to_delayed(), filenames)
]
if compute:
return dask_compute(*parts)
return delayed(list)(parts)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/csv.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import os
from glob import glob
from warnings import warn
from fsspec.utils import infer_compression
from dask import dataframe as dd
from dask.base import tokenize
from dask.dataframe.io.csv import make_reader
from dask.utils import apply, parse_bytes
import cudf
def read_csv(path, blocksize="default", **kwargs):
"""
Read CSV files into a :class:`.DataFrame`.
This API parallelizes the :func:`cudf:cudf.read_csv` function in
the following ways:
It supports loading many files at once using globstrings:
>>> import dask_cudf
>>> df = dask_cudf.read_csv("myfiles.*.csv")
In some cases it can break up large files:
>>> df = dask_cudf.read_csv("largefile.csv", blocksize="256 MiB")
It can read CSV files from external resources (e.g. S3, HTTP, FTP)
>>> df = dask_cudf.read_csv("s3://bucket/myfiles.*.csv")
>>> df = dask_cudf.read_csv("https://www.mycloud.com/sample.csv")
Internally ``read_csv`` uses :func:`cudf:cudf.read_csv` and
supports many of the same keyword arguments with the same
performance guarantees. See the docstring for
:func:`cudf:cudf.read_csv` for more information on available
keyword arguments.
Parameters
----------
path : str, path object, or file-like object
Either a path to a file (a str, :py:class:`pathlib.Path`, or
py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as
builtin :py:func:`open` file handler function or
:py:class:`~io.StringIO`).
blocksize : int or str, default "256 MiB"
The target task partition size. If ``None``, a single block
is used for each file.
**kwargs : dict
Passthrough key-word arguments that are sent to
:func:`cudf:cudf.read_csv`.
Notes
-----
If any of `skipfooter`/`skiprows`/`nrows` are passed,
`blocksize` will default to None.
Examples
--------
>>> import dask_cudf
>>> ddf = dask_cudf.read_csv("sample.csv", usecols=["a", "b"])
>>> ddf.compute()
a b
0 1 hi
1 2 hello
2 3 ai
"""
# Handle `chunksize` deprecation
if "chunksize" in kwargs:
chunksize = kwargs.pop("chunksize", "default")
warn(
"`chunksize` is deprecated and will be removed in the future. "
"Please use `blocksize` instead.",
FutureWarning,
)
if blocksize == "default":
blocksize = chunksize
# Set default `blocksize`
if blocksize == "default":
if (
kwargs.get("skipfooter", 0) != 0
or kwargs.get("skiprows", 0) != 0
or kwargs.get("nrows", None) is not None
):
# Cannot read in blocks if skipfooter,
# skiprows or nrows is passed.
blocksize = None
else:
blocksize = "256 MiB"
if "://" in str(path):
func = make_reader(cudf.read_csv, "read_csv", "CSV")
return func(path, blocksize=blocksize, **kwargs)
else:
return _internal_read_csv(path=path, blocksize=blocksize, **kwargs)
def _internal_read_csv(path, blocksize="256 MiB", **kwargs):
if isinstance(blocksize, str):
blocksize = parse_bytes(blocksize)
if isinstance(path, list):
filenames = path
elif isinstance(path, str):
filenames = sorted(glob(path))
elif hasattr(path, "__fspath__"):
filenames = sorted(glob(path.__fspath__()))
else:
raise TypeError(f"Path type not understood:{type(path)}")
if not filenames:
msg = f"No files found matching path: {path}"
raise FileNotFoundError(msg)
name = "read-csv-" + tokenize(
path, tokenize, **kwargs
) # TODO: get last modified time
compression = kwargs.get("compression", "infer")
if compression == "infer":
# Infer compression from first path by default
compression = infer_compression(filenames[0])
if compression and blocksize:
# compressed CSVs reading must read the entire file
kwargs.pop("byte_range", None)
warn(
"Warning %s compression does not support breaking apart files\n"
"Please ensure that each individual file can fit in memory and\n"
"use the keyword ``blocksize=None to remove this message``\n"
"Setting ``blocksize=(size of file)``" % compression
)
blocksize = None
if blocksize is None:
return read_csv_without_blocksize(path, **kwargs)
# Let dask.dataframe generate meta
dask_reader = make_reader(cudf.read_csv, "read_csv", "CSV")
kwargs1 = kwargs.copy()
usecols = kwargs1.pop("usecols", None)
dtype = kwargs1.pop("dtype", None)
meta = dask_reader(filenames[0], **kwargs1)._meta
names = meta.columns
if usecols or dtype:
# Regenerate meta with original kwargs if
# `usecols` or `dtype` was specified
meta = dask_reader(filenames[0], **kwargs)._meta
dsk = {}
i = 0
dtypes = meta.dtypes.values
for fn in filenames:
size = os.path.getsize(fn)
for start in range(0, size, blocksize):
kwargs2 = kwargs.copy()
kwargs2["byte_range"] = (
start,
blocksize,
) # specify which chunk of the file we care about
if start != 0:
kwargs2["names"] = names # no header in the middle of the file
kwargs2["header"] = None
dsk[(name, i)] = (apply, _read_csv, [fn, dtypes], kwargs2)
i += 1
divisions = [None] * (len(dsk) + 1)
return dd.core.new_dd_object(dsk, name, meta, divisions)
def _read_csv(fn, dtypes=None, **kwargs):
return cudf.read_csv(fn, **kwargs)
def read_csv_without_blocksize(path, **kwargs):
"""Read entire CSV with optional compression (gzip/zip)
Parameters
----------
path : str
path to files (support for glob)
"""
if isinstance(path, list):
filenames = path
elif isinstance(path, str):
filenames = sorted(glob(path))
elif hasattr(path, "__fspath__"):
filenames = sorted(glob(path.__fspath__()))
else:
raise TypeError(f"Path type not understood:{type(path)}")
name = "read-csv-" + tokenize(path, **kwargs)
meta_kwargs = kwargs.copy()
if "skipfooter" in meta_kwargs:
meta_kwargs.pop("skipfooter")
if "nrows" in meta_kwargs:
meta_kwargs.pop("nrows")
# Read "head" of first file (first 5 rows).
# Convert to empty df for metadata.
meta = cudf.read_csv(filenames[0], nrows=5, **meta_kwargs).iloc[:0]
graph = {
(name, i): (apply, cudf.read_csv, [fn], kwargs)
for i, fn in enumerate(filenames)
}
divisions = [None] * (len(filenames) + 1)
return dd.core.new_dd_object(graph, name, meta, divisions)
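# Minimal illustrative sketch of how ``blocksize`` drives partitioning in the
# reader above (not part of the library code): an uncompressed file is split
# into byte ranges of roughly ``blocksize`` bytes, while ``blocksize=None``
# maps one whole file to one partition. The file name "example.csv" is an
# assumption chosen for the example only.
if __name__ == "__main__":
    import cudf
    import dask_cudf

    cudf.DataFrame({"x": range(100_000), "y": range(100_000)}).to_csv(
        "example.csv", index=False
    )
    # Byte-range partitioning: several partitions for a single file
    chunked = dask_cudf.read_csv("example.csv", blocksize="256 kB")
    # Whole-file partitioning: exactly one partition per file
    whole = dask_cudf.read_csv("example.csv", blocksize=None)
    print(chunked.npartitions, whole.npartitions)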
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/parquet.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import itertools
import warnings
from contextlib import ExitStack
from functools import partial
from io import BufferedWriter, BytesIO, IOBase
import numpy as np
from pyarrow import dataset as pa_ds, parquet as pq
from dask import dataframe as dd
from dask.dataframe.io.parquet.arrow import ArrowDatasetEngine
try:
from dask.dataframe.io.parquet import (
create_metadata_file as create_metadata_file_dd,
)
except ImportError:
create_metadata_file_dd = None
import cudf
from cudf.core.column import as_column, build_categorical_column
from cudf.io import write_to_dataset
from cudf.io.parquet import (
_apply_post_filters,
_default_open_file_options,
_normalize_filters,
)
from cudf.utils.dtypes import cudf_dtype_from_pa_type
from cudf.utils.ioutils import (
_ROW_GROUP_SIZE_BYTES_DEFAULT,
_is_local_filesystem,
_open_remote_files,
)
class CudfEngine(ArrowDatasetEngine):
@classmethod
def _create_dd_meta(cls, dataset_info, **kwargs):
# Start with pandas-version of meta
meta_pd = super()._create_dd_meta(dataset_info, **kwargs)
# Convert to cudf
meta_cudf = cudf.from_pandas(meta_pd)
# Re-set "object" dtypes to align with pa schema
kwargs = dataset_info.get("kwargs", {})
set_object_dtypes_from_pa_schema(
meta_cudf,
kwargs.get("schema", None),
)
return meta_cudf
@classmethod
def multi_support(cls):
# Assert that this class is CudfEngine
# and that multi-part reading is supported
return cls == CudfEngine
@classmethod
def _read_paths(
cls,
paths,
fs,
columns=None,
row_groups=None,
filters=None,
partitions=None,
partitioning=None,
partition_keys=None,
open_file_options=None,
dataset_kwargs=None,
**kwargs,
):
# Simplify row_groups if all None
if row_groups == [None for path in paths]:
row_groups = None
# Make sure we read in the columns needed for row-wise
# filtering after IO. This means that one or more columns
# will be dropped almost immediately after IO. However,
# we do NEED these columns for accurate filtering.
filters = _normalize_filters(filters)
projected_columns = None
if columns and filters:
projected_columns = [c for c in columns if c is not None]
columns = sorted(
set(v[0] for v in itertools.chain.from_iterable(filters))
| set(projected_columns)
)
dataset_kwargs = dataset_kwargs or {}
dataset_kwargs["partitioning"] = partitioning or "hive"
with ExitStack() as stack:
# Non-local filesystem handling
paths_or_fobs = paths
if not _is_local_filesystem(fs):
paths_or_fobs = _open_remote_files(
paths_or_fobs,
fs,
context_stack=stack,
**_default_open_file_options(
open_file_options, columns, row_groups
),
)
# Use cudf to read in data
try:
df = cudf.read_parquet(
paths_or_fobs,
engine="cudf",
columns=columns,
row_groups=row_groups if row_groups else None,
dataset_kwargs=dataset_kwargs,
categorical_partitions=False,
**kwargs,
)
except RuntimeError as err:
# TODO: Remove try/except after null-schema issue is resolved
# (See: https://github.com/rapidsai/cudf/issues/12702)
if len(paths_or_fobs) > 1:
df = cudf.concat(
[
cudf.read_parquet(
pof,
engine="cudf",
columns=columns,
row_groups=row_groups[i]
if row_groups
else None,
dataset_kwargs=dataset_kwargs,
categorical_partitions=False,
**kwargs,
)
for i, pof in enumerate(paths_or_fobs)
]
)
else:
raise err
# Apply filters (if any are defined)
df = _apply_post_filters(df, filters)
if projected_columns:
# Elements of `projected_columns` may now be in the index.
# We must filter these names from our projection
projected_columns = [
col for col in projected_columns if col in df._column_names
]
df = df[projected_columns]
if partitions and partition_keys is None:
# Use `HivePartitioning` by default
ds = pa_ds.dataset(
paths,
filesystem=fs,
**dataset_kwargs,
)
frag = next(ds.get_fragments())
if frag:
# Extract hive-partition keys, and make sure they
# are ordered the same as they are in `partitions`
raw_keys = pa_ds._get_partition_keys(frag.partition_expression)
partition_keys = [
(hive_part.name, raw_keys[hive_part.name])
for hive_part in partitions
]
if partition_keys:
if partitions is None:
raise ValueError("Must pass partition sets")
for i, (name, index2) in enumerate(partition_keys):
if len(partitions[i].keys):
# Build a categorical column from `codes` directly
# (since the category is often a larger dtype)
codes = as_column(
partitions[i].keys.get_loc(index2),
length=len(df),
)
df[name] = build_categorical_column(
categories=partitions[i].keys,
codes=codes,
size=codes.size,
offset=codes.offset,
ordered=False,
)
elif name not in df.columns:
# Add non-categorical partition column
df[name] = as_column(index2, length=len(df))
return df
@classmethod
def read_partition(
cls,
fs,
pieces,
columns,
index,
categories=(),
partitions=(),
filters=None,
partitioning=None,
schema=None,
open_file_options=None,
**kwargs,
):
if columns is not None:
columns = [c for c in columns]
if isinstance(index, list):
columns += index
dataset_kwargs = kwargs.get("dataset", {})
partitioning = partitioning or dataset_kwargs.get("partitioning", None)
if isinstance(partitioning, dict):
partitioning = pa_ds.partitioning(**partitioning)
# Check if we are actually selecting any columns
read_columns = columns
if schema and columns:
ignored = set(schema.names) - set(columns)
if not ignored:
read_columns = None
if not isinstance(pieces, list):
pieces = [pieces]
# Extract supported kwargs from `kwargs`
read_kwargs = kwargs.get("read", {})
read_kwargs.update(open_file_options or {})
check_file_size = read_kwargs.pop("check_file_size", None)
# Wrap reading logic in a `try` block so that we can
# inform the user that the `read_parquet` partition
# size is too large for the available memory
try:
# Assume multi-piece read
paths = []
rgs = []
last_partition_keys = None
dfs = []
for i, piece in enumerate(pieces):
(path, row_group, partition_keys) = piece
row_group = None if row_group == [None] else row_group
# File-size check to help "protect" users from the change
# to the upstream `split_row_groups` default. We only
# check the file size if this partition corresponds
# to a full file, and `check_file_size` is defined
if check_file_size and len(pieces) == 1 and row_group is None:
file_size = fs.size(path)
if file_size > check_file_size:
warnings.warn(
f"A large parquet file ({file_size}B) is being "
f"used to create a DataFrame partition in "
f"read_parquet. This may cause out of memory "
f"exceptions in operations downstream. See the "
f"notes on split_row_groups in the read_parquet "
f"documentation. Setting split_row_groups "
f"explicitly will silence this warning."
)
if i > 0 and partition_keys != last_partition_keys:
dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
filters=filters,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
dataset_kwargs=dataset_kwargs,
**read_kwargs,
)
)
paths = []
rgs = []
last_partition_keys = None
paths.append(path)
rgs.append(
[row_group]
if not isinstance(row_group, list)
and row_group is not None
else row_group
)
last_partition_keys = partition_keys
dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
filters=filters,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
dataset_kwargs=dataset_kwargs,
**read_kwargs,
)
)
df = cudf.concat(dfs) if len(dfs) > 1 else dfs[0]
# Re-set "object" dtypes to align with pa schema
set_object_dtypes_from_pa_schema(df, schema)
if index and (index[0] in df.columns):
df = df.set_index(index[0])
elif index is False and df.index.names != (None,):
# If index=False, we shouldn't have a named index
df.reset_index(inplace=True)
except MemoryError as err:
raise MemoryError(
"Parquet data was larger than the available GPU memory!\n\n"
"See the notes on split_row_groups in the read_parquet "
"documentation.\n\n"
"Original Error: " + str(err)
)
return df
@staticmethod
def write_partition(
df,
path,
fs,
filename,
partition_on,
return_metadata,
fmd=None,
compression="snappy",
index_cols=None,
**kwargs,
):
preserve_index = False
if len(index_cols) and set(index_cols).issubset(set(df.columns)):
df.set_index(index_cols, drop=True, inplace=True)
preserve_index = True
if partition_on:
md = write_to_dataset(
df=df,
root_path=path,
compression=compression,
filename=filename,
partition_cols=partition_on,
fs=fs,
preserve_index=preserve_index,
return_metadata=return_metadata,
statistics=kwargs.get("statistics", "ROWGROUP"),
int96_timestamps=kwargs.get("int96_timestamps", False),
row_group_size_bytes=kwargs.get(
"row_group_size_bytes", _ROW_GROUP_SIZE_BYTES_DEFAULT
),
row_group_size_rows=kwargs.get("row_group_size_rows", None),
max_page_size_bytes=kwargs.get("max_page_size_bytes", None),
max_page_size_rows=kwargs.get("max_page_size_rows", None),
storage_options=kwargs.get("storage_options", None),
)
else:
with fs.open(fs.sep.join([path, filename]), mode="wb") as out_file:
if not isinstance(out_file, IOBase):
out_file = BufferedWriter(out_file)
md = df.to_parquet(
path=out_file,
engine=kwargs.get("engine", "cudf"),
index=kwargs.get("index", None),
partition_cols=kwargs.get("partition_cols", None),
partition_file_name=kwargs.get(
"partition_file_name", None
),
partition_offsets=kwargs.get("partition_offsets", None),
statistics=kwargs.get("statistics", "ROWGROUP"),
int96_timestamps=kwargs.get("int96_timestamps", False),
row_group_size_bytes=kwargs.get(
"row_group_size_bytes", _ROW_GROUP_SIZE_BYTES_DEFAULT
),
row_group_size_rows=kwargs.get(
"row_group_size_rows", None
),
storage_options=kwargs.get("storage_options", None),
metadata_file_path=filename if return_metadata else None,
)
# Return the schema needed to write the metadata
if return_metadata:
return [{"meta": md}]
else:
return []
@staticmethod
def write_metadata(parts, fmd, fs, path, append=False, **kwargs):
if parts:
# Aggregate metadata and write to _metadata file
metadata_path = fs.sep.join([path, "_metadata"])
_meta = []
if append and fmd is not None:
_meta = [fmd]
_meta.extend([parts[i][0]["meta"] for i in range(len(parts))])
_meta = (
cudf.io.merge_parquet_filemetadata(_meta)
if len(_meta) > 1
else _meta[0]
)
with fs.open(metadata_path, "wb") as fil:
fil.write(memoryview(_meta))
@classmethod
def collect_file_metadata(cls, path, fs, file_path):
with fs.open(path, "rb") as f:
meta = pq.ParquetFile(f).metadata
if file_path:
meta.set_file_path(file_path)
with BytesIO() as myio:
meta.write_metadata_file(myio)
myio.seek(0)
meta = np.frombuffer(myio.read(), dtype="uint8")
return meta
@classmethod
def aggregate_metadata(cls, meta_list, fs, out_path):
meta = (
cudf.io.merge_parquet_filemetadata(meta_list)
if len(meta_list) > 1
else meta_list[0]
)
if out_path:
metadata_path = fs.sep.join([out_path, "_metadata"])
with fs.open(metadata_path, "wb") as fil:
fil.write(memoryview(meta))
return None
else:
return meta
def set_object_dtypes_from_pa_schema(df, schema):
# Simple utility to modify cudf DataFrame
# "object" dtypes to agree with a specific
# pyarrow schema.
if schema:
for col_name, col in df._data.items():
if col_name is None:
# Pyarrow cannot handle `None` as a field name.
# However, this should be a simple range index that
# we can ignore anyway
continue
typ = cudf_dtype_from_pa_type(schema.field(col_name).type)
if (
col_name in schema.names
and not isinstance(typ, (cudf.ListDtype, cudf.StructDtype))
and isinstance(col, cudf.core.column.StringColumn)
):
df._data[col_name] = col.astype(typ)
def read_parquet(path, columns=None, **kwargs):
"""
Read parquet files into a :class:`.DataFrame`.
Calls :func:`dask.dataframe.read_parquet` with ``engine=CudfEngine``
to coordinate the execution of :func:`cudf.read_parquet`, and to
ultimately create a :class:`.DataFrame` collection.
See the :func:`dask.dataframe.read_parquet` documentation for
all available options.
Examples
--------
>>> from dask_cudf import read_parquet
>>> df = read_parquet("/path/to/dataset/") # doctest: +SKIP
When dealing with one or more large parquet files having an
in-memory footprint >15% device memory, the ``split_row_groups``
argument should be used to map Parquet **row-groups** to DataFrame
partitions (instead of **files** to partitions). For example, the
following code will map each row-group to a distinct partition:
>>> df = read_parquet(..., split_row_groups=True) # doctest: +SKIP
To map **multiple** row-groups to each partition, an integer can be
passed to ``split_row_groups`` to specify the **maximum** number of
row-groups allowed in each output partition:
>>> df = read_parquet(..., split_row_groups=10) # doctest: +SKIP
See Also
--------
cudf.read_parquet
dask.dataframe.read_parquet
"""
if isinstance(columns, str):
columns = [columns]
# Set "check_file_size" option to determine whether we
# should check the parquet-file size. This check is meant
# to "protect" users from `split_row_groups` default changes
check_file_size = kwargs.pop("check_file_size", 500_000_000)
if (
check_file_size
and ("split_row_groups" not in kwargs)
and ("chunksize" not in kwargs)
):
# User is not specifying `split_row_groups` or `chunksize`,
# so we should warn them if/when a file is ~>0.5GB on disk.
# They can set `split_row_groups` explicitly to silence/skip
# this check
if "read" not in kwargs:
kwargs["read"] = {}
kwargs["read"]["check_file_size"] = check_file_size
return dd.read_parquet(path, columns=columns, engine=CudfEngine, **kwargs)
to_parquet = partial(dd.to_parquet, engine=CudfEngine)
if create_metadata_file_dd is None:
create_metadata_file = create_metadata_file_dd
else:
create_metadata_file = partial(create_metadata_file_dd, engine=CudfEngine)
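# Minimal illustrative sketch of the file-size "protection" handled in
# read_parquet above (not part of the library code): passing
# ``split_row_groups`` explicitly maps row-groups (rather than whole files)
# to output partitions and also skips the large-file warning, while
# ``check_file_size=None`` disables the size check entirely. The path
# "example_dataset.parquet" is an assumption chosen for the example only.
if __name__ == "__main__":
    cudf.DataFrame({"a": range(1_000)}).to_parquet("example_dataset.parquet")
    # Row-group based partitioning; no file-size warning is emitted
    ddf_rg = read_parquet("example_dataset.parquet", split_row_groups=True)
    # Whole-file partitioning with the ~500 MB size check disabled
    ddf_nocheck = read_parquet("example_dataset.parquet", check_file_size=None)
    print(ddf_rg.npartitions, ddf_nocheck.npartitions)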
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/json.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
from functools import partial
import dask
import cudf
from dask_cudf.backends import _default_backend
def read_json(url_path, engine="auto", **kwargs):
"""Read JSON data into a :class:`.DataFrame`.
This function wraps :func:`dask.dataframe.read_json`, and passes
``engine=partial(cudf.read_json, engine="auto")`` by default.
Parameters
----------
url_path : str, list of str
Location to read from. If a string, can include a glob character to
find a set of file names.
Supports protocol specifications such as ``"s3://"``.
engine : str or Callable, default "auto"
If str, this value will be used as the ``engine`` argument
when :func:`cudf.read_json` is used to create each partition.
If a :obj:`~typing.Callable`, this value will be used as the
underlying function used to create each partition from JSON
data. The default value is "auto", so that
``engine=partial(cudf.read_json, engine="auto")`` will be
passed to :func:`dask.dataframe.read_json` by default.
**kwargs :
Key-word arguments to pass through to :func:`dask.dataframe.read_json`.
Returns
-------
:class:`.DataFrame`
Examples
--------
Load single file
>>> from dask_cudf import read_json
>>> read_json('myfile.json') # doctest: +SKIP
Load large line-delimited JSON files using partitions of approx
256MB size
>>> read_json('data/file*.json', blocksize=2**28) # doctest: +SKIP
Load nested JSON data
>>> read_json('myfile.json') # doctest: +SKIP
See Also
--------
dask.dataframe.read_json
"""
# TODO: Add optimized code path to leverage the
# `byte_range` argument in `cudf.read_json` for
# local storage (see `dask_cudf.read_csv`)
return _default_backend(
dask.dataframe.read_json,
url_path,
engine=(
partial(cudf.read_json, engine=engine)
if isinstance(engine, str)
else engine
),
**kwargs,
)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/text.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
import os
from glob import glob
import dask.dataframe as dd
from dask.base import tokenize
from dask.utils import apply, parse_bytes
import cudf
def read_text(path, chunksize="256 MiB", **kwargs):
if isinstance(chunksize, str):
chunksize = parse_bytes(chunksize)
if isinstance(path, list):
filenames = path
elif isinstance(path, str):
filenames = sorted(glob(path))
elif hasattr(path, "__fspath__"):
filenames = sorted(glob(path.__fspath__()))
else:
raise TypeError(f"Path type not understood:{type(path)}")
if not filenames:
msg = f"No files found matching path: {path}"
raise FileNotFoundError(msg)
name = "read-text-" + tokenize(path, tokenize, **kwargs)
if chunksize:
dsk = {}
i = 0
for fn in filenames:
size = os.path.getsize(fn)
for start in range(0, size, chunksize):
kwargs1 = kwargs.copy()
kwargs1["byte_range"] = (
start,
chunksize,
) # specify which chunk of the file we care about
dsk[(name, i)] = (apply, cudf.read_text, [fn], kwargs1)
i += 1
else:
dsk = {
(name, i): (apply, cudf.read_text, [fn], kwargs)
for i, fn in enumerate(filenames)
}
meta = cudf.Series([], dtype="O")
divisions = [None] * (len(dsk) + 1)
return dd.core.new_dd_object(dsk, name, meta, divisions)
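# Minimal illustrative sketch of the reader above (not part of the library
# code), since ``read_text`` has no docstring: each partition is built from
# cudf.read_text over a ``chunksize``-sized byte range, and
# ``chunksize=None`` maps each whole file to a single partition. The file
# name "records.txt" and the ";" delimiter are assumptions for the example.
if __name__ == "__main__":
    with open("records.txt", "w") as f:
        f.write("alpha;beta;gamma;delta;")

    # One task per ``chunksize`` bytes of the file
    chunked = read_text("records.txt", chunksize="10 B", delimiter=";")
    print(chunked.compute())

    # One task per whole file
    whole = read_text("records.txt", chunksize=None, delimiter=";")
    print(whole.compute())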
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/__init__.py
|
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
from .csv import read_csv
from .json import read_json
from .orc import read_orc, to_orc
from .text import read_text
try:
from .parquet import read_parquet, to_parquet
except ImportError:
pass
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_parquet.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import glob
import math
import os
import numpy as np
import pandas as pd
import pytest
import dask
from dask import dataframe as dd
from dask.utils import natural_sort_key
import cudf
import dask_cudf
# Check if create_metadata_file is supported by
# the current dask.dataframe version
need_create_meta = pytest.mark.skipif(
dask_cudf.io.parquet.create_metadata_file is None,
reason="Need create_metadata_file support in dask.dataframe.",
)
nrows = 40
npartitions = 15
df = pd.DataFrame(
{
"x": [i * 7 % 5 for i in range(nrows)], # Not sorted
"y": [i * 2.5 for i in range(nrows)],
},
index=pd.Index(range(nrows), name="index"),
) # Sorted
ddf = dd.from_pandas(df, npartitions=npartitions)
def test_roundtrip_backend_dispatch(tmpdir):
# Test ddf.read_parquet cudf-backend dispatch
tmpdir = str(tmpdir)
ddf.to_parquet(tmpdir, engine="pyarrow")
with dask.config.set({"dataframe.backend": "cudf"}):
ddf2 = dd.read_parquet(tmpdir, index=False)
assert isinstance(ddf2, dask_cudf.DataFrame)
dd.assert_eq(ddf.reset_index(drop=False), ddf2)
@pytest.mark.parametrize("write_metadata_file", [True, False])
@pytest.mark.parametrize("divisions", [True, False])
def test_roundtrip_from_dask(tmpdir, divisions, write_metadata_file):
tmpdir = str(tmpdir)
ddf.to_parquet(
tmpdir, write_metadata_file=write_metadata_file, engine="pyarrow"
)
files = sorted(
(os.path.join(tmpdir, f) for f in os.listdir(tmpdir)),
key=natural_sort_key,
)
# Read list of parquet files
ddf2 = dask_cudf.read_parquet(files, calculate_divisions=divisions)
dd.assert_eq(ddf, ddf2, check_divisions=divisions)
# Specify columns=['x']
ddf2 = dask_cudf.read_parquet(
files, columns=["x"], calculate_divisions=divisions
)
dd.assert_eq(ddf[["x"]], ddf2, check_divisions=divisions)
# Specify columns='y'
ddf2 = dask_cudf.read_parquet(
files, columns="y", calculate_divisions=divisions
)
dd.assert_eq(ddf[["y"]], ddf2, check_divisions=divisions)
# Now include metadata
ddf2 = dask_cudf.read_parquet(tmpdir, calculate_divisions=divisions)
dd.assert_eq(ddf, ddf2, check_divisions=divisions)
# Specify columns=['x'] (with metadata)
ddf2 = dask_cudf.read_parquet(
tmpdir, columns=["x"], calculate_divisions=divisions
)
dd.assert_eq(ddf[["x"]], ddf2, check_divisions=divisions)
# Specify columns='y' (with metadata)
ddf2 = dask_cudf.read_parquet(
tmpdir, columns="y", calculate_divisions=divisions
)
dd.assert_eq(ddf[["y"]], ddf2, check_divisions=divisions)
def test_roundtrip_from_dask_index_false(tmpdir):
tmpdir = str(tmpdir)
ddf.to_parquet(tmpdir, engine="pyarrow")
ddf2 = dask_cudf.read_parquet(tmpdir, index=False)
dd.assert_eq(ddf.reset_index(drop=False), ddf2)
def test_roundtrip_from_dask_none_index_false(tmpdir):
tmpdir = str(tmpdir)
path = os.path.join(tmpdir, "test.parquet")
df2 = ddf.reset_index(drop=True).compute()
df2.to_parquet(path, engine="pyarrow")
ddf3 = dask_cudf.read_parquet(path, index=False)
dd.assert_eq(df2, ddf3)
@pytest.mark.parametrize("write_meta", [True, False])
def test_roundtrip_from_dask_cudf(tmpdir, write_meta):
tmpdir = str(tmpdir)
gddf = dask_cudf.from_dask_dataframe(ddf)
gddf.to_parquet(tmpdir, write_metadata_file=write_meta)
gddf2 = dask_cudf.read_parquet(tmpdir, calculate_divisions=True)
dd.assert_eq(gddf, gddf2)
def test_roundtrip_none_rangeindex(tmpdir):
fn = str(tmpdir.join("test.parquet"))
gdf = cudf.DataFrame(
{"id": [0, 1, 2, 3], "val": [None, None, 0, 1]},
index=pd.RangeIndex(start=5, stop=9),
)
dask_cudf.from_cudf(gdf, npartitions=2).to_parquet(fn)
ddf2 = dask_cudf.read_parquet(fn)
dd.assert_eq(gdf, ddf2, check_index=True)
def test_roundtrip_from_pandas(tmpdir):
fn = str(tmpdir.join("test.parquet"))
# First without specifying an index
dfp = df.copy()
dfp.to_parquet(fn, engine="pyarrow", index=False)
dfp = dfp.reset_index(drop=True)
ddf2 = dask_cudf.read_parquet(fn)
dd.assert_eq(dfp, ddf2, check_index=True)
# Now, specifying an index
dfp = df.copy()
dfp.to_parquet(fn, engine="pyarrow", index=True)
ddf2 = dask_cudf.read_parquet(fn, index=["index"])
dd.assert_eq(dfp, ddf2, check_index=True)
def test_strings(tmpdir):
fn = str(tmpdir)
dfp = pd.DataFrame(
{"a": ["aa", "bbb", "cccc"], "b": ["hello", "dog", "man"]}
)
dfp.set_index("a", inplace=True, drop=True)
ddf2 = dd.from_pandas(dfp, npartitions=2)
ddf2.to_parquet(fn, engine="pyarrow")
read_df = dask_cudf.read_parquet(fn, index=["a"])
dd.assert_eq(ddf2, read_df.compute().to_pandas())
def test_dask_timeseries_from_pandas(tmpdir):
fn = str(tmpdir.join("test.parquet"))
ddf2 = dask.datasets.timeseries(freq="D")
pdf = ddf2.compute()
pdf.to_parquet(fn, engine="pyarrow")
read_df = dask_cudf.read_parquet(fn)
dd.assert_eq(ddf2, read_df.compute())
@pytest.mark.parametrize("index", [False, None])
@pytest.mark.parametrize("divisions", [False, True])
def test_dask_timeseries_from_dask(tmpdir, index, divisions):
fn = str(tmpdir)
ddf2 = dask.datasets.timeseries(freq="D")
ddf2.to_parquet(fn, engine="pyarrow", write_index=index)
read_df = dask_cudf.read_parquet(
fn, index=index, calculate_divisions=divisions
)
dd.assert_eq(
ddf2, read_df, check_divisions=(divisions and index), check_index=index
)
@pytest.mark.parametrize("index", [False, None])
@pytest.mark.parametrize("divisions", [False, True])
def test_dask_timeseries_from_daskcudf(tmpdir, index, divisions):
fn = str(tmpdir)
ddf2 = dask_cudf.from_cudf(
cudf.datasets.timeseries(freq="D"), npartitions=4
)
ddf2.name = ddf2.name.astype("object")
ddf2.to_parquet(fn, write_index=index)
read_df = dask_cudf.read_parquet(
fn, index=index, calculate_divisions=divisions
)
dd.assert_eq(
ddf2, read_df, check_divisions=(divisions and index), check_index=index
)
@pytest.mark.parametrize("index", [False, True])
def test_empty(tmpdir, index):
fn = str(tmpdir)
dfp = pd.DataFrame({"a": [11.0, 12.0, 12.0], "b": [4, 5, 6]})[:0]
if index:
dfp.set_index("a", inplace=True, drop=True)
ddf2 = dd.from_pandas(dfp, npartitions=2)
ddf2.to_parquet(fn, write_index=index, engine="pyarrow")
read_df = dask_cudf.read_parquet(fn)
dd.assert_eq(ddf2, read_df.compute())
def test_filters(tmpdir):
tmp_path = str(tmpdir)
df = pd.DataFrame({"x": range(10), "y": list("aabbccddee")})
ddf = dd.from_pandas(df, npartitions=5)
assert ddf.npartitions == 5
ddf.to_parquet(tmp_path, engine="pyarrow")
a = dask_cudf.read_parquet(
tmp_path, filters=[("x", ">", 4)], split_row_groups=True
)
assert a.npartitions == 3
assert (a.x > 3).all().compute()
b = dask_cudf.read_parquet(
tmp_path, filters=[("y", "==", "c")], split_row_groups=True
)
assert b.npartitions == 1
b = b.compute().to_pandas()
assert (b.y == "c").all()
c = dask_cudf.read_parquet(
tmp_path,
filters=[("y", "==", "c"), ("x", ">", 6)],
split_row_groups=True,
)
assert c.npartitions <= 1
assert not len(c)
@pytest.mark.parametrize("numeric", [True, False])
@pytest.mark.parametrize("null", [np.nan, None])
def test_isna_filters(tmpdir, null, numeric):
tmp_path = str(tmpdir)
df = pd.DataFrame(
{
"x": range(10),
"y": list("aabbccddee"),
"i": [0] * 4 + [np.nan] * 2 + [0] * 4,
"j": [""] * 4 + [None] * 2 + [""] * 4,
}
)
ddf = dd.from_pandas(df, npartitions=5)
assert ddf.npartitions == 5
ddf.to_parquet(tmp_path, engine="pyarrow")
# Test "is"
col = "i" if numeric else "j"
filters = [(col, "is", null)]
out = dask_cudf.read_parquet(
tmp_path, filters=filters, split_row_groups=True
)
assert len(out) == 2
assert list(out.x.compute().values) == [4, 5]
# Test "is not"
filters = [(col, "is not", null)]
out = dask_cudf.read_parquet(
tmp_path, filters=filters, split_row_groups=True
)
assert len(out) == 8
assert list(out.x.compute().values) == [0, 1, 2, 3, 6, 7, 8, 9]
def test_filters_at_row_group_level(tmpdir):
tmp_path = str(tmpdir)
df = pd.DataFrame({"x": range(10), "y": list("aabbccddee")})
ddf = dd.from_pandas(df, npartitions=5)
assert ddf.npartitions == 5
ddf.to_parquet(tmp_path, engine="pyarrow", row_group_size=10 / 5)
a = dask_cudf.read_parquet(
tmp_path, filters=[("x", "==", 1)], split_row_groups=True
)
assert a.npartitions == 1
assert (a.shape[0] == 1).compute()
ddf.to_parquet(tmp_path, engine="pyarrow", row_group_size=1)
b = dask_cudf.read_parquet(
tmp_path, filters=[("x", "==", 1)], split_row_groups=True
)
assert b.npartitions == 1
assert (b.shape[0] == 1).compute()
@pytest.mark.parametrize("metadata", [True, False])
@pytest.mark.parametrize("daskcudf", [True, False])
@pytest.mark.parametrize(
"parts", [["year", "month", "day"], ["year", "month"], ["year"]]
)
def test_roundtrip_from_dask_partitioned(tmpdir, parts, daskcudf, metadata):
tmpdir = str(tmpdir)
df = pd.DataFrame()
df["year"] = [2018, 2019, 2019, 2019, 2020, 2021]
df["month"] = [1, 2, 3, 3, 3, 2]
df["day"] = [1, 1, 1, 2, 2, 1]
df["data"] = [0, 0, 0, 0, 0, 0]
df.index.name = "index"
if daskcudf:
ddf2 = dask_cudf.from_cudf(cudf.from_pandas(df), npartitions=2)
ddf2.to_parquet(
tmpdir, write_metadata_file=metadata, partition_on=parts
)
else:
ddf2 = dd.from_pandas(df, npartitions=2)
ddf2.to_parquet(
tmpdir,
engine="pyarrow",
write_metadata_file=metadata,
partition_on=parts,
)
df_read = dd.read_parquet(tmpdir, engine="pyarrow")
gdf_read = dask_cudf.read_parquet(tmpdir)
# TODO: Avoid column selection after `CudfEngine`
# can be aligned with dask/dask#6534
columns = list(df_read.columns)
assert set(df_read.columns) == set(gdf_read.columns)
dd.assert_eq(
df_read.compute(scheduler=dask.get)[columns],
gdf_read.compute(scheduler=dask.get)[columns],
)
assert gdf_read.index.name == "index"
# Check that we don't have uuid4 file names
for _, _, files in os.walk(tmpdir):
for fn in files:
if not fn.startswith("_"):
assert "part" in fn
# Check that we can aggregate files by a partition name
df_read = dd.read_parquet(
tmpdir, engine="pyarrow", aggregate_files="year", split_row_groups=2
)
gdf_read = dask_cudf.read_parquet(
tmpdir, aggregate_files="year", split_row_groups=2
)
dd.assert_eq(df_read, gdf_read)
@pytest.mark.parametrize("row_groups", [1, 3, 10, 12])
@pytest.mark.parametrize("index", [False, True])
def test_split_row_groups(tmpdir, row_groups, index):
nparts = 2
df_size = 100
row_group_size = 5
file_row_groups = 10 # Known a priori
npartitions_expected = math.ceil(file_row_groups / row_groups) * 2
df = pd.DataFrame(
{
"a": np.random.choice(["apple", "banana", "carrot"], size=df_size),
"b": np.random.random(size=df_size),
"c": np.random.randint(1, 5, size=df_size),
"index": np.arange(0, df_size),
}
)
if index:
df = df.set_index("index")
ddf1 = dd.from_pandas(df, npartitions=nparts)
ddf1.to_parquet(
str(tmpdir),
engine="pyarrow",
row_group_size=row_group_size,
write_metadata_file=True,
)
ddf2 = dask_cudf.read_parquet(
str(tmpdir),
split_row_groups=row_groups,
)
dd.assert_eq(ddf1, ddf2, check_divisions=False)
assert ddf2.npartitions == npartitions_expected
@need_create_meta
@pytest.mark.parametrize("partition_on", [None, "a"])
def test_create_metadata_file(tmpdir, partition_on):
tmpdir = str(tmpdir)
# Write ddf without a _metadata file
df1 = cudf.DataFrame({"b": range(100), "a": ["A", "B", "C", "D"] * 25})
df1.index.name = "myindex"
ddf1 = dask_cudf.from_cudf(df1, npartitions=10)
ddf1.to_parquet(
tmpdir,
write_metadata_file=False,
partition_on=partition_on,
)
# Add global _metadata file
if partition_on:
fns = glob.glob(os.path.join(tmpdir, partition_on + "=*/*.parquet"))
else:
fns = glob.glob(os.path.join(tmpdir, "*.parquet"))
dask_cudf.io.parquet.create_metadata_file(
fns,
split_every=3, # Force tree reduction
)
# Check that we can now read the ddf
# with the _metadata file present
ddf2 = dask_cudf.read_parquet(
tmpdir,
split_row_groups=False,
index="myindex",
calculate_divisions=True,
)
if partition_on:
ddf1 = df1.sort_values("b")
ddf2 = ddf2.compute().sort_values("b")
ddf2.a = ddf2.a.astype("object")
dd.assert_eq(ddf1, ddf2)
@need_create_meta
def test_create_metadata_file_inconsistent_schema(tmpdir):
# NOTE: This test demonstrates that the CudfEngine
# can be used to generate a global `_metadata` file
# even if there are inconsistent schemas in the dataset.
# Write file 0
df0 = pd.DataFrame({"a": [None] * 10, "b": range(10)})
p0 = os.path.join(tmpdir, "part.0.parquet")
df0.to_parquet(p0, engine="pyarrow")
# Write file 1
b = list(range(10))
b[1] = None
df1 = pd.DataFrame({"a": range(10), "b": b})
p1 = os.path.join(tmpdir, "part.1.parquet")
df1.to_parquet(p1, engine="pyarrow")
# New pyarrow-dataset base can handle an inconsistent
# schema (even without a _metadata file), but computing
# and dtype validation may fail
ddf1 = dask_cudf.read_parquet(str(tmpdir), calculate_divisions=True)
# Add global metadata file.
# Dask-CuDF can do this without requiring schema
# consistency.
dask_cudf.io.parquet.create_metadata_file([p0, p1])
# Check that we can still read the ddf
# with the _metadata file present
ddf2 = dask_cudf.read_parquet(str(tmpdir), calculate_divisions=True)
# Check that the result is the same with and
# without the _metadata file. Note that we must
# call `compute` on `ddf1`, because the dtype of
# the inconsistent column ("a") may be "object"
# before computing, and "int" after
dd.assert_eq(ddf1.compute(), ddf2)
dd.assert_eq(ddf1.compute(), ddf2.compute())
@pytest.mark.parametrize(
"data",
[
["dog", "cat", "fish"],
[[0], [1, 2], [3]],
[None, [1, 2], [3]],
[{"f1": 1}, {"f1": 0, "f2": "dog"}, {"f2": "cat"}],
[None, {"f1": 0, "f2": "dog"}, {"f2": "cat"}],
],
)
def test_cudf_dtypes_from_pandas(tmpdir, data):
# Simple test that we can read in list and struct types
fn = str(tmpdir.join("test.parquet"))
dfp = pd.DataFrame({"data": data})
dfp.to_parquet(fn, engine="pyarrow", index=True)
# Use `split_row_groups=True` to avoid "fast path" where
# schema is not is passed through in older Dask versions
ddf2 = dask_cudf.read_parquet(fn, split_row_groups=True)
dd.assert_eq(cudf.from_pandas(dfp), ddf2)
def test_cudf_list_struct_write(tmpdir):
df = cudf.DataFrame(
{
"a": [1, 2, 3],
"b": [[[1, 2]], [[2, 3]], None],
"c": [[[["a", "z"]]], [[["b", "d", "e"]]], None],
}
)
df["d"] = df.to_struct()
ddf = dask_cudf.from_cudf(df, 3)
temp_file = str(tmpdir.join("list_struct.parquet"))
ddf.to_parquet(temp_file)
new_ddf = dask_cudf.read_parquet(temp_file)
dd.assert_eq(df, new_ddf)
def test_check_file_size(tmpdir):
# Test simple file-size check to help warn users
# of upstream change to `split_row_groups` default
fn = str(tmpdir.join("test.parquet"))
cudf.DataFrame({"a": np.arange(1000)}).to_parquet(fn)
with pytest.warns(match="large parquet file"):
dask_cudf.read_parquet(fn, check_file_size=1).compute()
def test_null_partition(tmpdir):
import pyarrow as pa
from pyarrow.dataset import HivePartitioning
ids = pd.Series([0, 1, None], dtype="Int64")
df = pd.DataFrame({"id": ids, "x": [1, 2, 3]})
ddf = dd.from_pandas(df, npartitions=1).to_backend("cudf")
ddf.to_parquet(str(tmpdir), partition_on="id")
fns = glob.glob(os.path.join(tmpdir, "id" + "=*/*.parquet"))
assert len(fns) == 3
partitioning = HivePartitioning(pa.schema([("id", pa.int64())]))
ddf_read = dask_cudf.read_parquet(
str(tmpdir),
dataset={"partitioning": partitioning},
)
dd.assert_eq(
ddf[["x", "id"]],
ddf_read[["x", "id"]],
check_divisions=False,
)
def test_nullable_schema_mismatch(tmpdir):
# See: https://github.com/rapidsai/cudf/issues/12702
path0 = str(tmpdir.join("test.0.parquet"))
path1 = str(tmpdir.join("test.1.parquet"))
cudf.DataFrame.from_dict({"a": [1, 2, 3]}).to_parquet(path0)
cudf.DataFrame.from_dict({"a": [4, 5, None]}).to_parquet(path1)
with dask.config.set({"dataframe.backend": "cudf"}):
ddf = dd.read_parquet(
[path0, path1], split_row_groups=2, aggregate_files=True
)
expect = pd.read_parquet([path0, path1])
dd.assert_eq(ddf, expect, check_index=False)
def test_parquet_read_filter_and_project(tmpdir):
# Filter on columns that are not included
# in the current column projection
# Write parquet data
path = str(tmpdir.join("test.parquet"))
df = cudf.DataFrame(
{
"a": [1, 2, 3, 4, 5] * 10,
"b": [0, 1, 2, 3, 4] * 10,
"c": range(50),
"d": [6, 7] * 25,
"e": [8, 9] * 25,
}
)
df.to_parquet(path)
# Read back with filter and projection
columns = ["b"]
filters = [[("a", "==", 5), ("c", ">", 20)]]
got = dask_cudf.read_parquet(path, columns=columns, filters=filters)
# Check result
expected = df[(df.a == 5) & (df.c > 20)][columns].reset_index(drop=True)
dd.assert_eq(got, expected)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_csv.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import gzip
import os
import warnings
import numpy as np
import pandas as pd
import pytest
import dask
from dask import dataframe as dd
import cudf
import dask_cudf
@pytest.fixture
def csv_begin_bad_lines(tmp_path):
lines = """x
x
x
A, B, C, D
1, 2, 3, 4
2, 3, 5, 1
4, 5, 2, 5"""
file = tmp_path / "test_read_csv_begin.csv"
with open(file, "w") as fp:
fp.write(lines)
return file
@pytest.fixture
def csv_end_bad_lines(tmp_path):
lines = """A, B, C, D
1, 2, 3, 4
2, 3, 5, 1
4, 5, 2, 5
x
x
x"""
file = tmp_path / "test_read_csv_end.csv"
with open(file, "w") as fp:
fp.write(lines)
return file
def test_csv_roundtrip_backend_dispatch(tmp_path):
# Test ddf.read_csv cudf-backend dispatch
df = cudf.DataFrame({"x": [1, 2, 3, 4], "id": ["a", "b", "c", "d"]})
ddf = dask_cudf.from_cudf(df, npartitions=2)
csv_path = str(tmp_path / "data-*.csv")
ddf.to_csv(csv_path, index=False)
with dask.config.set({"dataframe.backend": "cudf"}):
ddf2 = dd.read_csv(csv_path)
assert isinstance(ddf2, dask_cudf.DataFrame)
dd.assert_eq(ddf, ddf2, check_divisions=False, check_index=False)
def test_csv_roundtrip(tmp_path):
df = cudf.DataFrame({"x": [1, 2, 3, 4], "id": ["a", "b", "c", "d"]})
ddf = dask_cudf.from_cudf(df, npartitions=2)
csv_path = str(tmp_path / "data-*.csv")
ddf.to_csv(csv_path, index=False)
ddf2 = dask_cudf.read_csv(csv_path)
dd.assert_eq(ddf, ddf2, check_divisions=False, check_index=False)
def test_csv_roundtrip_filepath(tmp_path):
df = cudf.DataFrame({"x": [1, 2, 3, 4], "id": ["a", "b", "c", "d"]})
ddf = dask_cudf.from_cudf(df, npartitions=2)
stmp_path = str(tmp_path / "data-*.csv")
ddf.to_csv(f"file://{stmp_path}", index=False)
ddf2 = dask_cudf.read_csv(f"file://{stmp_path}")
dd.assert_eq(ddf, ddf2, check_divisions=False, check_index=False)
def test_read_csv(tmp_path):
df = dask.datasets.timeseries(
dtypes={"x": int, "y": int}, freq="120s"
).reset_index(drop=True)
csv_path = str(tmp_path / "data-*.csv")
df.to_csv(csv_path, index=False)
df2 = dask_cudf.read_csv(csv_path)
dd.assert_eq(df, df2)
# file path test
stmp_path = str(csv_path)
df3 = dask_cudf.read_csv(f"file://{stmp_path}")
dd.assert_eq(df2, df3)
# file list test
list_paths = [
os.path.join(tmp_path, fname) for fname in sorted(os.listdir(tmp_path))
]
df4 = dask_cudf.read_csv(list_paths)
dd.assert_eq(df, df4)
def test_raises_FileNotFoundError():
with pytest.raises(FileNotFoundError):
dask_cudf.read_csv("foo.csv")
def test_read_csv_w_bytes(tmp_path):
df = dask.datasets.timeseries(
dtypes={"x": int, "y": int}, freq="120s"
).reset_index(drop=True)
df = pd.DataFrame(dict(x=np.arange(20), y=np.arange(20)))
df.to_csv(tmp_path / "data-*.csv", index=False)
df2 = dask_cudf.read_csv(tmp_path / "*.csv", blocksize="50 B")
assert df2.npartitions == 3
dd.assert_eq(df2, df, check_index=False)
def test_read_csv_compression(tmp_path):
df = pd.DataFrame(dict(x=np.arange(20), y=np.arange(20)))
df.to_csv(tmp_path / "data.csv.gz", index=False)
with pytest.warns(UserWarning) as w:
df2 = dask_cudf.read_csv(tmp_path / "*.csv.gz", blocksize="50 B")
assert len(w) == 1
msg = str(w[0].message)
assert "gzip" in msg
assert df2.npartitions == 1
dd.assert_eq(df2, df, check_index=False)
with warnings.catch_warnings(record=True) as record:
df2 = dask_cudf.read_csv(tmp_path / "*.csv.gz", blocksize=None)
assert not record
def test_read_csv_compression_file_list(tmp_path):
# Repro from Issue#3412
lines = """col1,col2
0,1
2,3"""
files = [tmp_path / "test1.csv", tmp_path / "test2.csv"]
for fn in files:
with gzip.open(fn, "wb") as fp:
fp.write(lines.encode("utf-8"))
ddf_cpu = dd.read_csv(files, compression="gzip").compute()
ddf_gpu = dask_cudf.read_csv(files, compression="gzip").compute()
dd.assert_eq(ddf_cpu, ddf_gpu)
@pytest.mark.parametrize("size", [0, 3, 20])
@pytest.mark.parametrize("compression", ["gzip", None])
def test_read_csv_blocksize_none(tmp_path, compression, size):
df = pd.DataFrame(dict(x=np.arange(size), y=np.arange(size)))
path = (
tmp_path / "data.csv.gz"
if compression == "gzip"
else tmp_path / "data.csv"
)
# Types need to be specified for empty csv files
if size == 0:
typ = {"x": df.x.dtype, "y": df.y.dtype}
else:
typ = None
df.to_csv(path, index=False, compression=compression)
df2 = dask_cudf.read_csv(path, blocksize=None, dtype=typ)
dd.assert_eq(df, df2)
# Test chunksize deprecation
with pytest.warns(FutureWarning, match="deprecated"):
df3 = dask_cudf.read_csv(path, chunksize=None, dtype=typ)
dd.assert_eq(df, df3)
@pytest.mark.parametrize("dtype", [{"b": str, "c": int}, None])
def test_csv_reader_usecols(tmp_path, dtype):
df = cudf.DataFrame(
{
"a": [1, 2, 3, 4] * 100,
"b": ["a", "b", "c", "d"] * 100,
"c": [10, 11, 12, 13] * 100,
}
)
csv_path = str(tmp_path / "usecols_data.csv")
df.to_csv(csv_path, index=False)
ddf = dask_cudf.from_cudf(df[["b", "c"]], npartitions=5)
ddf2 = dask_cudf.read_csv(csv_path, usecols=["b", "c"], dtype=dtype)
dd.assert_eq(ddf, ddf2, check_divisions=False, check_index=False)
def test_read_csv_skiprows(csv_begin_bad_lines):
# Repro from Issue#13552
ddf_cpu = dd.read_csv(csv_begin_bad_lines, skiprows=3).compute()
ddf_gpu = dask_cudf.read_csv(csv_begin_bad_lines, skiprows=3).compute()
dd.assert_eq(ddf_cpu, ddf_gpu)
def test_read_csv_skiprows_error(csv_begin_bad_lines):
# Repro from Issue#13552
with pytest.raises(ValueError):
dask_cudf.read_csv(
csv_begin_bad_lines, skiprows=3, blocksize="100 MiB"
).compute()
def test_read_csv_skipfooter(csv_end_bad_lines):
# Repro from Issue#13552
ddf_cpu = dd.read_csv(csv_end_bad_lines, skipfooter=3).compute()
ddf_gpu = dask_cudf.read_csv(csv_end_bad_lines, skipfooter=3).compute()
dd.assert_eq(ddf_cpu, ddf_gpu, check_dtype=False)
def test_read_csv_skipfooter_error(csv_end_bad_lines):
with pytest.raises(ValueError):
dask_cudf.read_csv(
csv_end_bad_lines, skipfooter=3, blocksize="100 MiB"
).compute()
def test_read_csv_nrows(csv_end_bad_lines):
ddf_cpu = pd.read_csv(csv_end_bad_lines, nrows=2)
ddf_gpu = dask_cudf.read_csv(csv_end_bad_lines, nrows=2).compute()
dd.assert_eq(ddf_cpu, ddf_gpu)
def test_read_csv_nrows_error(csv_end_bad_lines):
with pytest.raises(ValueError):
dask_cudf.read_csv(
csv_end_bad_lines, nrows=2, blocksize="100 MiB"
).compute()
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_orc.py
|
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
import glob
import os
from datetime import datetime, timezone
import pytest
import dask
from dask import dataframe as dd
import cudf
import dask_cudf
cur_dir = os.path.dirname(__file__)
sample_orc = os.path.join(cur_dir, "data/orc/sample.orc")
def test_read_orc_backend_dispatch():
# Test ddf.read_orc cudf-backend dispatch
df1 = cudf.read_orc(sample_orc)
with dask.config.set({"dataframe.backend": "cudf"}):
df2 = dd.read_orc(sample_orc)
assert isinstance(df2, dask_cudf.DataFrame)
dd.assert_eq(df1, df2, check_index=False)
def test_read_orc_defaults():
df1 = cudf.read_orc(sample_orc)
df2 = dask_cudf.read_orc(sample_orc)
dd.assert_eq(df1, df2, check_index=False)
def test_filepath_read_orc_defaults():
path = "file://%s" % sample_orc
df1 = cudf.read_orc(path)
df2 = dask_cudf.read_orc(path)
dd.assert_eq(df1, df2, check_index=False)
def test_filelist_read_orc_defaults():
path = [sample_orc]
df1 = cudf.read_orc(path[0])
df2 = dask_cudf.read_orc(path)
dd.assert_eq(df1, df2, check_index=False)
@pytest.mark.parametrize("engine", ["cudf", "pyarrow"])
@pytest.mark.parametrize("columns", [["time", "date"], ["time"]])
def test_read_orc_cols(engine, columns):
df1 = cudf.read_orc(sample_orc, engine=engine, columns=columns)
df2 = dask_cudf.read_orc(sample_orc, engine=engine, columns=columns)
dd.assert_eq(df1, df2, check_index=False)
@pytest.mark.parametrize("engine", ["cudf", "pyarrow"])
@pytest.mark.parametrize(
"predicate,expected_len",
[
(None, 70_000),
(
[("date", "==", datetime(1900, 12, 25, tzinfo=timezone.utc))],
15_000,
),
(
[("date", "<=", datetime(1928, 12, 25, tzinfo=timezone.utc))],
30_000,
),
(
[
[("date", ">", datetime(1950, 12, 25, tzinfo=timezone.utc))],
[("date", "<=", datetime(1928, 12, 25, tzinfo=timezone.utc))],
],
55_000,
),
],
)
def test_read_orc_filtered(tmpdir, engine, predicate, expected_len):
df = dask_cudf.read_orc(sample_orc, engine=engine, filters=predicate)
dd.assert_eq(len(df), expected_len)
def test_read_orc_first_file_empty(tmpdir):
# Write a 3-file dataset where the first file is empty
# See: https://github.com/rapidsai/cudf/issues/8011
path = str(tmpdir)
os.makedirs(path, exist_ok=True)
df1 = cudf.DataFrame({"id": [1, 2], "float": [1.0, 2.0]})
df1.iloc[:0].to_orc(os.path.join(path, "data.0"))
df1.iloc[:1].to_orc(os.path.join(path, "data.1"))
df1.iloc[1:].to_orc(os.path.join(path, "data.2"))
# Read back the files with dask_cudf,
# and check the result.
df2 = dask_cudf.read_orc(os.path.join(path, "*"))
dd.assert_eq(df1, df2, check_index=False)
@pytest.mark.parametrize("compute", [True, False])
@pytest.mark.parametrize("compression", [None, "snappy"])
@pytest.mark.parametrize(
"dtypes",
[
{"index": int, "c": int, "a": str},
{"index": int, "c": int, "a": str, "b": float},
{"index": int, "c": str, "a": object},
],
)
def test_to_orc(tmpdir, dtypes, compression, compute):
# Create cudf and dask_cudf dataframes
df = cudf.datasets.randomdata(nrows=10, dtypes=dtypes, seed=1)
df = df.set_index("index").sort_index()
ddf = dask_cudf.from_cudf(df, npartitions=3)
# Write cudf dataframe as single file
# (preserve index by setting to column)
fname = tmpdir.join("test.orc")
df.reset_index().to_orc(fname, compression=compression)
# Write dask_cudf dataframe as multiple files
# (preserve index by `write_index=True`)
to = ddf.to_orc(
str(tmpdir), write_index=True, compression=compression, compute=compute
)
if not compute:
to.compute()
# Read back cudf dataframe
df_read = cudf.read_orc(fname).set_index("index")
# Read back dask_cudf dataframe
paths = glob.glob(str(tmpdir) + "/part.*.orc")
ddf_read = dask_cudf.read_orc(paths).set_index("index")
# Make sure the dask_cudf dataframe matches
# the cudf dataframes (df and df_read)
dd.assert_eq(df, ddf_read)
dd.assert_eq(df_read, ddf_read)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_s3.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import os
import socket
from contextlib import contextmanager
from io import BytesIO
import pandas as pd
import pyarrow.fs as pa_fs
import pytest
import dask_cudf
moto = pytest.importorskip("moto", minversion="3.1.6")
boto3 = pytest.importorskip("boto3")
s3fs = pytest.importorskip("s3fs")
ThreadedMotoServer = pytest.importorskip("moto.server").ThreadedMotoServer
@pytest.fixture(scope="session")
def endpoint_ip():
return "127.0.0.1"
@pytest.fixture(scope="session")
def endpoint_port():
# Return a free port per worker session.
sock = socket.socket()
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
sock.close()
return port
@contextmanager
def ensure_safe_environment_variables():
"""
Get a context manager to safely set environment variables
All changes will be undone on close, hence environment variables set
within this contextmanager will neither persist nor change global state.
"""
saved_environ = dict(os.environ)
try:
yield
finally:
os.environ.clear()
os.environ.update(saved_environ)
@pytest.fixture(scope="session")
def s3_base(endpoint_ip, endpoint_port):
"""
Fixture to set up moto server in separate process
"""
with ensure_safe_environment_variables():
# Fake aws credentials exported to prevent botocore looking for
# system aws credentials, https://github.com/spulec/moto/issues/1793
os.environ["AWS_ACCESS_KEY_ID"] = "foobar_key"
os.environ["AWS_SECRET_ACCESS_KEY"] = "foobar_secret"
os.environ["S3FS_LOGGING_LEVEL"] = "DEBUG"
os.environ["AWS_SECURITY_TOKEN"] = "foobar_security_token"
os.environ["AWS_SESSION_TOKEN"] = "foobar_session_token"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
# Launching moto in server mode, i.e., as a separate process
# with an S3 endpoint on localhost
endpoint_uri = f"http://{endpoint_ip}:{endpoint_port}/"
server = ThreadedMotoServer(ip_address=endpoint_ip, port=endpoint_port)
server.start()
yield endpoint_uri
server.stop()
@pytest.fixture()
def s3so(endpoint_ip, endpoint_port):
"""
Returns s3 storage options to pass to fsspec
"""
endpoint_uri = f"http://{endpoint_ip}:{endpoint_port}/"
return {"client_kwargs": {"endpoint_url": endpoint_uri}}
@contextmanager
def s3_context(s3_base, bucket, files=None):
if files is None:
files = {}
with ensure_safe_environment_variables():
client = boto3.client("s3", endpoint_url=s3_base)
client.create_bucket(Bucket=bucket, ACL="public-read-write")
for f, data in files.items():
client.put_object(Bucket=bucket, Key=f, Body=data)
yield s3fs.S3FileSystem(client_kwargs={"endpoint_url": s3_base})
for f, data in files.items():
try:
client.delete_object(Bucket=bucket, Key=f)
except Exception:
pass
def test_read_csv(s3_base, s3so):
with s3_context(
s3_base=s3_base, bucket="daskcsv", files={"a.csv": b"a,b\n1,2\n3,4\n"}
):
df = dask_cudf.read_csv(
"s3://daskcsv/*.csv", chunksize="50 B", storage_options=s3so
)
assert df.a.sum().compute() == 4
@pytest.mark.parametrize(
"open_file_options",
[
{"precache_options": {"method": None}},
{"precache_options": {"method": "parquet"}},
{"open_file_func": None},
],
)
def test_read_parquet(s3_base, s3so, open_file_options):
pdf = pd.DataFrame({"a": [1, 2, 3, 4], "b": [2.1, 2.2, 2.3, 2.4]})
buffer = BytesIO()
pdf.to_parquet(path=buffer)
buffer.seek(0)
with s3_context(
s3_base=s3_base, bucket="daskparquet", files={"file.parq": buffer}
):
if "open_file_func" in open_file_options:
fs = pa_fs.S3FileSystem(
endpoint_override=s3so["client_kwargs"]["endpoint_url"],
)
open_file_options["open_file_func"] = fs.open_input_file
df = dask_cudf.read_parquet(
"s3://daskparquet/*.parq",
storage_options=s3so,
open_file_options=open_file_options,
)
assert df.a.sum().compute() == 10
assert df.b.sum().compute() == 9
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_text.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
import os
import pytest
import dask.dataframe as dd
import cudf
import dask_cudf
cur_dir = os.path.dirname(__file__)
text_file = os.path.join(cur_dir, "data/text/sample.pgn")
@pytest.mark.parametrize("file", [text_file, [text_file]])
@pytest.mark.parametrize("chunksize", [12, "50 B", None])
def test_read_text(file, chunksize):
df1 = cudf.read_text(text_file, delimiter='"]')
df2 = dask_cudf.read_text(file, chunksize=chunksize, delimiter='"]')
dd.assert_eq(df1, df2, check_index=False)
@pytest.mark.parametrize("offset", [0, 100, 250, 500, 1000])
@pytest.mark.parametrize("size", [64, 128, 256])
def test_read_text_byte_range(offset, size):
df1 = cudf.read_text(text_file, delimiter=".", byte_range=(offset, size))
df2 = dask_cudf.read_text(
text_file, chunksize=None, delimiter=".", byte_range=(offset, size)
)
dd.assert_eq(df1, df2, check_index=False)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/test_json.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import os
import pandas as pd
import pytest
import dask
import dask.dataframe as dd
from dask.utils import tmpfile
import dask_cudf
def test_read_json_backend_dispatch(tmp_path):
# Test ddf.read_json cudf-backend dispatch
df1 = dask.datasets.timeseries(
dtypes={"x": int, "y": int}, freq="120s"
).reset_index(drop=True)
json_path = str(tmp_path / "data-*.json")
df1.to_json(json_path)
with dask.config.set({"dataframe.backend": "cudf"}):
df2 = dd.read_json(json_path)
assert isinstance(df2, dask_cudf.DataFrame)
dd.assert_eq(df1, df2)
def test_read_json(tmp_path):
df1 = dask.datasets.timeseries(
dtypes={"x": int, "y": int}, freq="120s"
).reset_index(drop=True)
json_path = str(tmp_path / "data-*.json")
df1.to_json(json_path)
df2 = dask_cudf.read_json(json_path)
dd.assert_eq(df1, df2)
# file path test
stmp_path = str(tmp_path / "data-*.json")
df3 = dask_cudf.read_json(f"file://{stmp_path}")
dd.assert_eq(df1, df3)
# file list test
list_paths = [
os.path.join(tmp_path, fname) for fname in sorted(os.listdir(tmp_path))
]
df4 = dask_cudf.read_json(list_paths)
dd.assert_eq(df1, df4)
@pytest.mark.filterwarnings("ignore:Using CPU")
@pytest.mark.parametrize("orient", ["split", "index", "columns", "values"])
def test_read_json_basic(orient):
df = pd.DataFrame({"x": ["a", "b", "c", "d"], "y": [1, 2, 3, 4]})
with tmpfile("json") as f:
df.to_json(f, orient=orient, lines=False)
actual = dask_cudf.read_json(f, orient=orient, lines=False)
actual_pd = pd.read_json(f, orient=orient, lines=False)
dd.assert_eq(actual, actual_pd)
@pytest.mark.filterwarnings("ignore:Using CPU")
@pytest.mark.parametrize("lines", [True, False])
def test_read_json_lines(lines):
df = pd.DataFrame({"x": ["a", "b", "c", "d"], "y": [1, 2, 3, 4]})
with tmpfile("json") as f:
df.to_json(f, orient="records", lines=lines)
actual = dask_cudf.read_json(f, orient="records", lines=lines)
actual_pd = pd.read_json(f, orient="records", lines=lines)
dd.assert_eq(actual, actual_pd)
def test_read_json_nested(tmp_path):
# Check that `engine="cudf"` can
# be used to support nested data
df = pd.DataFrame(
{
"a": [{"y": 2}, {"y": 4}, {"y": 6}, {"y": 8}],
"b": [[1, 2, 3], [4, 5], [6], [7]],
"c": [1, 3, 5, 7],
}
)
kwargs = dict(orient="records", lines=True)
with tmp_path / "data.json" as f:
df.to_json(f, **kwargs)
# Ensure engine='cudf' is tested.
actual = dask_cudf.read_json(f, engine="cudf", **kwargs)
actual_pd = pd.read_json(f, **kwargs)
dd.assert_eq(actual, actual_pd)
# Ensure not passing engine='cudf'(default i.e., auto) is tested.
actual = dask_cudf.read_json(f, **kwargs)
dd.assert_eq(actual, actual_pd)
# Ensure not passing kwargs also reads the file.
actual = dask_cudf.read_json(f)
dd.assert_eq(actual, actual_pd)
| 0 |
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/data
|
rapidsai_public_repos/cudf/python/dask_cudf/dask_cudf/io/tests/data/text/sample.pgn
|
[Event "Rated Bullet tournament https://lichess.org/tournament/IaRkDsvp"]
[Site "https://lichess.org/r0cYFhsy"]
[White "GreatGig"]
[Black "hackattack"]
[Result "0-1"]
[UTCDate "2016.04.30"]
[UTCTime "22:00:03"]
[WhiteElo "1777"]
[BlackElo "1809"]
[WhiteRatingDiff "-11"]
[BlackRatingDiff "+11"]
[ECO "B01"]
[Opening "Scandinavian Defense: Mieses-Kotroc Variation"]
[TimeControl "60+0"]
[Termination "Time forfeit"]
1. e4 d5 2. exd5 Qxd5 3. Nc3 Qd8 4. d4 Nf6 5. Nf3 Bg4 6. h3 Bxf3 7. gxf3 c6 8. Bg2 Nbd7 9. Be3 e6 10. Qd2 Nd5 11. Nxd5 cxd5 12. O-O-O Be7 13. c3 Qc7 14. Kb1 O-O-O 15. f4 Kb8 16. Rhg1 Ka8 17. Bh1 g6 18. h4 Bxh4 19. f3 Be7 20. Qc2 Nf6 21. Bg2 Nh5 22. Bh3 Nxf4 23. Bxf4 Qxf4 24. Rdf1 Qd6 25. Rg4 Rdf8 26. Rfg1 f5 27. R4g2 Bf6 28. Rg3 Rfg8 29. Bf1 Rg7 30. Bd3 Rhg8 31. Qh2 Qb8 32. Qg2 Qc8 33. f4 Qc6 34. Qf2 Bh4 35. Rxg6 Bxf2 36. Rxg7 Rxg7 37. Rxg7 a6 38. Rg8+ Ka7 39. Rh8 Qd7 40. Rxh7 Qxh7 0-1
[Event "Rated Bullet tournament https://lichess.org/tournament/IaRkDsvp"]
[Site "https://lichess.org/s7lpBNiu"]
[White "kh447"]
[Black "blueskyminer23"]
[Result "0-1"]
[UTCDate "2016.04.30"]
[UTCTime "22:00:03"]
[WhiteElo "2025"]
[BlackElo "2046"]
[WhiteRatingDiff "-12"]
[BlackRatingDiff "+11"]
[ECO "D94"]
[Opening "Gruenfeld Defense: Three Knights Variation, Paris Variation"]
[TimeControl "60+0"]
[Termination "Time forfeit"]
1. d4 Nf6 2. c4 g6 3. Nc3 d5 4. Nf3 Bg7 5. e3 O-O 6. Bd3 c5 7. cxd5 Nxd5 8. O-O Nc6 9. Qe2 Bg4 10. dxc5 Nxc3 11. bxc3 Bxf3 12. gxf3 Bxc3 13. Rb1 Rb8 14. Kh1 Qd5 15. Rg1 Qxc5 16. Qc2 Qa5 17. Qb3 Bg7 18. Qc2 Ne5 19. Be4 Rfc8 20. Qe2 f5 21. Bc2 Qd5 22. e4 Qc6 23. Bb3+ Kh8 24. Bf4 fxe4 25. fxe4 Nd7 26. Rbc1 Qf6 27. Bxb8 Rxb8 28. Rg3 Ne5 29. Rcg1 h6 30. f4 Qxf4 31. Rxg6 Nxg6 32. Qg2 Qe5 33. Bc2 Kg8 34. Qe2 Rf8 35. Qf3 Rf7 36. Qe2 Rf6 37. Qg2 Re6 0-1
[Event "Rated Bullet tournament https://lichess.org/tournament/IaRkDsvp"]
[Site "https://lichess.org/9CTXrWUB"]
[White "Demis115"]
[Black "churrosagogo"]
[Result "1-0"]
[UTCDate "2016.04.30"]
[UTCTime "22:00:03"]
[WhiteElo "1944"]
[BlackElo "2007"]
[WhiteRatingDiff "+14"]
[BlackRatingDiff "-13"]
[ECO "C28"]
[Opening "Bishop's Opening: Vienna Hybrid"]
[TimeControl "60+0"]
[Termination "Normal"]
1. e4 e5 2. Nc3 Nc6 3. Bc4 Nf6 4. d3 Bc5 5. Be3 O-O 6. Bxc5 d6 7. Be3 a6 8. Nge2 b5 9. Bb3 b4 10. Nd5 Na5 11. Bg5 Nxb3 12. axb3 Be6 13. Bxf6 gxf6 14. Ng3 Bxd5 15. exd5 Qd7 16. O-O Kh8 17. Qe2 f5 18. Qh5 f4 19. Nf5 Rg8 20. Nh6 Rg6 21. Rfe1 Rag8 22. g3 f5 23. Qxf5 Qg7 24. Nf7+ Qxf7 25. Qxf7 fxg3 26. hxg3 R6g7 27. Qf6 1-0
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf_kafka/pyproject.toml
|
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
[build-system]
requires = [
"cython>=3.0.3",
"numpy>=1.21,<1.25",
"pyarrow==14.0.1.*",
"scikit-build>=0.13.1",
"setuptools",
"wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project]
name = "cudf_kafka"
dynamic = ["version"]
description = "cuDF Kafka Datasource"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
{ name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
"cudf==24.2.*",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project.optional-dependencies]
test = [
"pytest",
"pytest-cov",
"pytest-xdist",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project.urls]
Homepage = "https://github.com/rapidsai/cudf"
Documentation = "https://docs.rapids.ai/api/cudf/stable/"
[tool.setuptools]
license-files = ["LICENSE"]
[tool.setuptools.dynamic]
version = {file = "cudf_kafka/VERSION"}
[tool.isort]
line_length = 79
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
combine_as_imports = true
order_by_type = true
known_dask = [
"dask",
"distributed",
"dask_cuda",
"streamz",
]
known_rapids = [
"rmm",
"cudf",
"dask_cudf",
]
known_first_party = [
"cudf_kafka",
]
default_section = "THIRDPARTY"
sections = [
"FUTURE",
"STDLIB",
"THIRDPARTY",
"DASK",
"RAPIDS",
"FIRSTPARTY",
"LOCALFOLDER",
]
skip = [
"thirdparty",
".eggs",
".git",
".hg",
".mypy_cache",
".tox",
".venv",
"_build",
"buck-out",
"build",
"dist",
"__init__.py",
]
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf_kafka/CMakeLists.txt
|
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR)
set(cudf_kafka_version 24.02.00)
include(../../fetch_rapids.cmake)
project(
cudf-kafka-python
VERSION ${cudf_kafka_version}
LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C
# language to be enabled here. The test project that is built in scikit-build to verify
# various linking options for the python library is hardcoded to build with C, so until
# that is fixed we need to keep C.
C CXX
)
find_package(cudf_kafka ${cudf_kafka_version} REQUIRED)
if(NOT cudf_kafka_FOUND)
message(
FATAL_ERROR
"cudf_kafka package not found. cudf_kafka C++ is required to build this Python package."
)
endif()
include(rapids-cython)
rapids_cython_init()
add_subdirectory(cudf_kafka/_lib)
if(DEFINED cython_lib_dir)
rapids_cython_add_rpath_entries(TARGET cudf_kafka PATHS "${cython_lib_dir}")
endif()
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf_kafka/README.md
|
# <div align="left"><img src="img/rapids_logo.png" width="90px"/> cuDF - GPU DataFrames</div>
## 📢 cuDF can now be used as a no-code-change accelerator for pandas! To learn more, see [here](https://rapids.ai/cudf-pandas/)!
cuDF is a GPU DataFrame library for loading, joining, aggregating,
filtering, and otherwise manipulating data. cuDF leverages
[libcudf](https://docs.rapids.ai/api/libcudf/stable/), a
blazing-fast C++/CUDA dataframe library and the [Apache
Arrow](https://arrow.apache.org/) columnar format to provide a
GPU-accelerated pandas API.
You can import `cudf` directly and use it like `pandas`:
```python
import cudf
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = cudf.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
Or, you can use cuDF as a no-code-change accelerator for pandas, using
[`cudf.pandas`](https://docs.rapids.ai/api/cudf/stable/cudf_pandas).
`cudf.pandas` supports 100% of the pandas API, utilizing cuDF for
supported operations and falling back to pandas when needed:
```python
%load_ext cudf.pandas # pandas operations now use the GPU!
import pandas as pd
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = pd.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
## Resources
- [Try cudf.pandas now](https://nvda.ws/rapids-cudf): Explore `cudf.pandas` on a free GPU enabled instance on Google Colab!
- [Install](https://docs.rapids.ai/install): Instructions for installing cuDF and other [RAPIDS](https://rapids.ai) libraries.
- [cudf (Python) documentation](https://docs.rapids.ai/api/cudf/stable/)
- [libcudf (C++/CUDA) documentation](https://docs.rapids.ai/api/libcudf/stable/)
- [RAPIDS Community](https://rapids.ai/learn-more/#get-involved): Get help, contribute, and collaborate.
## Installation
### CUDA/GPU requirements
* CUDA 11.2+
* NVIDIA driver 450.80.02+
* Pascal architecture or better (Compute Capability >=6.0)
### Conda
cuDF can be installed with conda (via [miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cudf=24.02 python=3.10 cuda-version=11.8
```
We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.
Note: cuDF is supported only on Linux, and with Python versions 3.9 and later.
See the [RAPIDS installation guide](https://docs.rapids.ai/install) for more OS and version info.
## Build/Install from Source
See build [instructions](CONTRIBUTING.md#setting-up-your-build-environment).
## Contributing
Please see our [guide for contributing to cuDF](CONTRIBUTING.md).
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf_kafka/setup.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
from setuptools import find_packages
from skbuild import setup
packages = find_packages(include=["cudf_kafka*"])
setup(
packages=packages,
package_data={
key: ["VERSION", "*.pxd", "*.hpp", "*.cuh"] for key in packages
},
zip_safe=False,
)
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf_kafka/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/_version.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib.resources
__version__ = (
importlib.resources.files("cudf_kafka")
.joinpath("VERSION")
.read_text()
.strip()
)
__git_commit__ = ""
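# Illustrative sketch (not part of the original module): the version string
# resolved above is re-exported from the package, so it can be inspected as:
#
#     import cudf_kafka
#
#     print(cudf_kafka.__version__)  # e.g. "24.02.00"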
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/__init__.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from ._version import __git_commit__, __version__
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/VERSION
|
24.02.00
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/_lib/CMakeLists.txt
|
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources kafka.pyx)
set(linked_libraries cudf_kafka::cudf_kafka)
rapids_cython_create_modules(
CXX ASSOCIATED_TARGETS cudf_kafka
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}"
)
# TODO: Finding NumPy currently requires finding Development due to a bug in CMake. This bug was
# fixed in https://gitlab.kitware.com/cmake/cmake/-/merge_requests/7410 and will be available in
# CMake 3.24, so we can remove the Development component once we upgrade to CMake 3.24.
# find_package(Python REQUIRED COMPONENTS Development NumPy)
# Note: The bug noted above prevents us from finding NumPy successfully using FindPython.cmake
# inside the manylinux images used to build wheels because manylinux images do not contain
# libpython.so and therefore Development cannot be found. Until we upgrade to CMake 3.24, we should
# use FindNumpy.cmake instead (provided by scikit-build). When we switch to 3.24 we can try
# switching back, but it may not work if that implicitly still requires Python libraries. In that
# case we'll need to follow up with the CMake team to remove that dependency. The stopgap solution
# is to unpack the static lib tarballs in the wheel building jobs so that there are at least static
# libs to be found, but that should be a last resort since it implies a dependency that isn't really
# necessary. The relevant command is tar -xf /opt/_internal/static-libs-for-embedding-only.tar.xz -C
# /opt/_internal"
find_package(NumPy REQUIRED)
find_package(Python 3.9 REQUIRED COMPONENTS Interpreter)
execute_process(
COMMAND "${Python_EXECUTABLE}" -c "import pyarrow; print(pyarrow.get_include())"
OUTPUT_VARIABLE PYARROW_INCLUDE_DIR
ERROR_VARIABLE PYARROW_ERROR
RESULT_VARIABLE PYARROW_RESULT
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(${PYARROW_RESULT})
message(FATAL_ERROR "Error while trying to obtain pyarrow include directory:\n${PYARROW_ERROR}")
endif()
# TODO: Due to cudf's scalar.pyx needing to cimport pylibcudf's scalar.pyx (because there are parts
# of cudf Cython that need to directly access the c_obj underlying the pylibcudf Scalar) the
# requirement for arrow headers infects all of cudf. That in turn requires including numpy headers.
# These requirements will go away once all scalar-related Cython code is removed from cudf.
foreach(target IN LISTS RAPIDS_CYTHON_CREATED_TARGETS)
target_include_directories(${target} PRIVATE "${NumPy_INCLUDE_DIRS}")
target_include_directories(${target} PRIVATE "${PYARROW_INCLUDE_DIR}")
endforeach()
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/_lib/kafka.pyx
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from libc.stdint cimport int32_t, int64_t
from libcpp cimport bool, nullptr
from libcpp.map cimport map
from libcpp.memory cimport make_unique, unique_ptr
from libcpp.string cimport string
from libcpp.utility cimport move
from cudf._lib.cpp.io.datasource cimport datasource
from cudf_kafka._lib.kafka cimport kafka_consumer
# To avoid including <python.h> in libcudf_kafka
# we introduce this wrapper in Cython
cdef map[string, string] oauth_callback_wrapper(void *ctx):
resp = (<object>(ctx))()
cdef map[string, string] c_resp
c_resp[str.encode("token")] = str.encode(resp["token"])
c_resp[str.encode("token_expiration_in_epoch")] \
= str(resp["token_expiration_in_epoch"]).encode()
return c_resp
cdef class KafkaDatasource(Datasource):
def __cinit__(self,
object kafka_configs,
string topic=b"",
int32_t partition=-1,
int64_t start_offset=0,
int64_t end_offset=0,
int32_t batch_timeout=10000,
string delimiter=b"",):
cdef map[string, string] configs
cdef void* python_callable = nullptr
cdef map[string, string] (*python_callable_wrapper)(void *)
for key in kafka_configs:
if key == 'oauth_cb':
if callable(kafka_configs[key]):
python_callable = <void *>kafka_configs[key]
python_callable_wrapper = &oauth_callback_wrapper
else:
raise TypeError("'oauth_cb' configuration must \
be a Python callable object")
else:
configs[key.encode()] = kafka_configs[key].encode()
if topic != b"" and partition != -1:
self.c_datasource = <unique_ptr[datasource]> \
move(make_unique[kafka_consumer](configs,
python_callable,
python_callable_wrapper,
topic,
partition,
start_offset,
end_offset,
batch_timeout,
delimiter))
else:
self.c_datasource = <unique_ptr[datasource]> \
move(make_unique[kafka_consumer](configs,
python_callable,
python_callable_wrapper))
cdef datasource* get_datasource(self) nogil:
return <datasource *> self.c_datasource.get()
cpdef void commit_offset(self,
string topic,
int32_t partition,
int64_t offset):
(<kafka_consumer *> self.c_datasource.get()).commit_offset(
topic, partition, offset)
cpdef int64_t get_committed_offset(self,
string topic,
int32_t partition):
return (<kafka_consumer *> self.c_datasource.get()). \
get_committed_offset(topic, partition)
cpdef map[string, vector[int32_t]] list_topics(self,
string topic) except *:
return (<kafka_consumer *> self.c_datasource.get()). \
list_topics(topic)
cpdef map[string, int64_t] get_watermark_offset(self, string topic,
int32_t partition,
int32_t timeout,
bool cached):
return (<kafka_consumer *> self.c_datasource.get()). \
get_watermark_offset(topic, partition, timeout, cached)
cpdef void unsubscribe(self):
(<kafka_consumer *> self.c_datasource.get()).unsubscribe()
cpdef void close(self, int32_t timeout):
(<kafka_consumer *> self.c_datasource.get()).close(timeout)
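# Illustrative construction sketch (comments only, not part of the original
# module; broker address, group id, and topic below are hypothetical). A
# KafkaDatasource wraps the libcudf_kafka consumer so it can be handed to
# cudf/custreamz readers as a datasource:
#
#     kafka_configs = {
#         "metadata.broker.list": "localhost:9092",
#         "group.id": "example-consumer-group",
#         "auto.offset.reset": "beginning",
#     }
#     datasource = KafkaDatasource(
#         kafka_configs,
#         topic=b"example-topic",
#         partition=0,
#         start_offset=0,
#         end_offset=100,
#         batch_timeout=10000,
#         delimiter=b"\n",
#     )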
| 0 |
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka
|
rapidsai_public_repos/cudf/python/cudf_kafka/cudf_kafka/_lib/kafka.pxd
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
from libc.stdint cimport int32_t, int64_t
from libcpp cimport bool
from libcpp.map cimport map
from libcpp.memory cimport unique_ptr
from libcpp.string cimport string
from libcpp.vector cimport vector
from cudf._lib.cpp.io.datasource cimport datasource
from cudf._lib.io.datasource cimport Datasource
cdef extern from "cudf_kafka/kafka_callback.hpp" \
namespace "cudf::io::external::kafka" nogil:
ctypedef object (*python_callable_type)()
cdef extern from "cudf_kafka/kafka_consumer.hpp" \
namespace "cudf::io::external::kafka" nogil:
cpdef cppclass kafka_consumer:
kafka_consumer(map[string, string] configs,
python_callable_type python_callable) except +
kafka_consumer(map[string, string] configs,
python_callable_type python_callable,
string topic_name,
int32_t partition,
int64_t start_offset,
int64_t end_offset,
int32_t batch_timeout,
string delimiter) except +
bool assign(vector[string] topics, vector[int32_t] partitions) except +
void commit_offset(string topic,
int32_t partition,
int64_t offset) except +
int64_t get_committed_offset(string topic,
int32_t partition) except +
map[string, vector[int32_t]] list_topics(string topic) except +
map[string, int64_t] get_watermark_offset(string topic,
int32_t partition,
int32_t timeout,
bool cached) except +
void unsubscribe() except +
void close(int32_t timeout) except +
cdef class KafkaDatasource(Datasource):
cdef unique_ptr[datasource] c_datasource
cdef string topic
cdef int32_t partition
cdef int64_t start_offset
cdef int64_t end_offset
cdef int32_t batch_timeout
cdef string delimiter
cdef datasource* get_datasource(self) nogil
cpdef void commit_offset(self,
string topic,
int32_t partition,
int64_t offset)
cpdef int64_t get_committed_offset(self, string topic, int32_t partition)
cpdef map[string, vector[int32_t]] list_topics(self, string tp) except *
cpdef map[string, int64_t] get_watermark_offset(self, string topic,
int32_t partition,
int32_t timeout,
bool cached)
cpdef void unsubscribe(self)
cpdef void close(self, int32_t timeout)
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/pyproject.toml
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
[build-system]
build-backend = "setuptools.build_meta"
requires = [
"cmake>=3.26.4",
"cython>=3.0.3",
"ninja",
"numpy>=1.21,<1.25",
"protoc-wheel",
"pyarrow==14.0.1.*",
"rmm==24.2.*",
"scikit-build>=0.13.1",
"setuptools",
"wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project]
name = "cudf"
dynamic = ["version"]
description = "cuDF - GPU Dataframe"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
{ name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
"cachetools",
"cubinlinker",
"cuda-python>=11.7.1,<12.0a0",
"cupy-cuda11x>=12.0.0",
"fsspec>=0.6.0",
"numba>=0.57,<0.58",
"numpy>=1.21,<1.25",
"nvtx>=0.2.1",
"packaging",
"pandas>=1.3,<1.6.0dev0",
"protobuf>=4.21,<5",
"ptxcompiler",
"pyarrow>=14.0.1,<15.0.0a0",
"rich",
"rmm==24.2.*",
"typing_extensions>=4.0.0",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
"Intended Audience :: Developers",
"Topic :: Database",
"Topic :: Scientific/Engineering",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
]
[project.optional-dependencies]
test = [
"cramjam",
"fastavro>=0.22.9",
"hypothesis",
"mimesis>=4.1.0",
"msgpack",
"pytest",
"pytest-benchmark",
"pytest-cases",
"pytest-cov",
"pytest-xdist",
"python-snappy>=0.6.0",
"scipy",
"tokenizers==0.13.1",
"transformers==4.24.0",
"tzdata",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
pandas_tests = [
"beautifulsoup4",
"blosc",
"boto3",
"botocore>=1.24.21",
"bottleneck",
"brotlipy",
"fastparquet",
"flask",
"fsspec",
"gcsfs",
"html5lib",
"hypothesis",
"ipython",
"jinja2",
"lxml",
"matplotlib",
"moto",
"numba",
"numexpr",
"odfpy",
"openpyxl",
"pandas-gbq",
"psycopg2-binary",
"py",
"pyarrow",
"pymysql",
"pyreadstat",
"pytest-asyncio",
"pytest-reportlog",
"python-snappy",
"pyxlsb",
"s3fs",
"scipy",
"sqlalchemy",
"tables",
"tabulate",
"xarray",
"xlrd",
"xlsxwriter",
"xlwt",
"zstandard",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
cudf_pandas_tests = [
"ipython",
"openpyxl",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project.urls]
Homepage = "https://github.com/rapidsai/cudf"
Documentation = "https://docs.rapids.ai/api/cudf/stable/"
[tool.setuptools]
license-files = ["LICENSE"]
[tool.setuptools.dynamic]
version = {file = "cudf/VERSION"}
[tool.isort]
line_length = 79
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
combine_as_imports = true
order_by_type = true
known_dask = [
"dask",
"distributed",
"dask_cuda",
]
known_rapids = [
"rmm",
]
known_first_party = [
"cudf",
]
default_section = "THIRDPARTY"
sections = [
"FUTURE",
"STDLIB",
"THIRDPARTY",
"DASK",
"RAPIDS",
"FIRSTPARTY",
"LOCALFOLDER",
]
skip = [
"thirdparty",
".eggs",
".git",
".hg",
".mypy_cache",
".tox",
".venv",
"_build",
"buck-out",
"build",
"dist",
"__init__.py",
]
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/CMakeLists.txt
|
# =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR)
set(cudf_version 24.02.00)
include(../../fetch_rapids.cmake)
include(rapids-cuda)
rapids_cuda_init_architectures(cudf-python)
project(
cudf-python
VERSION ${cudf_version}
LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C
# language to be enabled here. The test project that is built in scikit-build to verify
# various linking options for the python library is hardcoded to build with C, so until
# that is fixed we need to keep C.
C CXX CUDA
)
option(FIND_CUDF_CPP "Search for existing CUDF C++ installations before defaulting to local files"
OFF
)
option(CUDF_BUILD_WHEELS "Whether this build is generating a Python wheel." OFF)
option(USE_LIBARROW_FROM_PYARROW "Use the libarrow contained within pyarrow." OFF)
mark_as_advanced(USE_LIBARROW_FROM_PYARROW)
# Always build wheels against the pyarrow libarrow.
if(CUDF_BUILD_WHEELS)
set(USE_LIBARROW_FROM_PYARROW ON)
endif()
# If the user requested it we attempt to find CUDF.
if(FIND_CUDF_CPP)
if(USE_LIBARROW_FROM_PYARROW)
# We need to find arrow before libcudf since libcudf requires it but doesn't bundle it. TODO:
# These options should probably all become optional since in practice they aren't meaningful
# except in the case where we actually compile Arrow.
set(CUDF_USE_ARROW_STATIC OFF)
set(CUDF_ENABLE_ARROW_S3 OFF)
set(CUDF_ENABLE_ARROW_ORC OFF)
set(CUDF_ENABLE_ARROW_PYTHON OFF)
set(CUDF_ENABLE_ARROW_PARQUET OFF)
include(rapids-find)
include(rapids-export)
include(../../cpp/cmake/thirdparty/get_arrow.cmake)
endif()
find_package(cudf ${cudf_version} REQUIRED)
# an installed version of libcudf doesn't provide the dlpack headers so we need to download dlpack
# for the interop.pyx
include(rapids-cpm)
rapids_cpm_init()
include(../../cpp/cmake/thirdparty/get_dlpack.cmake)
else()
set(cudf_FOUND OFF)
endif()
include(rapids-cython)
if(NOT cudf_FOUND)
set(BUILD_TESTS OFF)
set(BUILD_BENCHMARKS OFF)
set(_exclude_from_all "")
if(CUDF_BUILD_WHEELS)
# We don't build C++ tests when building wheels, so we can also omit the test util and shrink
# the wheel by avoiding embedding GTest.
set(CUDF_BUILD_TESTUTIL OFF)
set(CUDF_BUILD_STREAMS_TEST_UTIL OFF)
# Statically link cudart if building wheels
set(CUDA_STATIC_RUNTIME ON)
# Need to set this so all the nvcomp targets are global, not only nvcomp::nvcomp
# https://cmake.org/cmake/help/latest/variable/CMAKE_FIND_PACKAGE_TARGETS_GLOBAL.html#variable:CMAKE_FIND_PACKAGE_TARGETS_GLOBAL
set(CMAKE_FIND_PACKAGE_TARGETS_GLOBAL ON)
# Don't install the cuDF C++ targets into wheels
set(_exclude_from_all EXCLUDE_FROM_ALL)
endif()
add_subdirectory(../../cpp cudf-cpp ${_exclude_from_all})
if(CUDF_BUILD_WHEELS)
include(cmake/Modules/WheelHelpers.cmake)
get_target_property(_nvcomp_link_libs nvcomp::nvcomp INTERFACE_LINK_LIBRARIES)
# Ensure all the shared objects we need at runtime are in the wheel
add_target_libs_to_wheel(LIB_DIR cudf TARGETS arrow_shared nvcomp::nvcomp ${_nvcomp_link_libs})
endif()
# Since there are multiple subpackages of cudf._lib that require access to libcudf, we place the
# library in the cudf directory as a single source of truth and modify the other rpaths
# appropriately.
set(cython_lib_dir cudf)
install(TARGETS cudf DESTINATION ${cython_lib_dir})
endif()
rapids_cython_init()
add_subdirectory(cudf/_lib)
add_subdirectory(udf_cpp)
include(cmake/Modules/ProtobufHelpers.cmake)
codegen_protoc(cudf/utils/metadata/orc_column_statistics.proto)
if(DEFINED cython_lib_dir)
rapids_cython_add_rpath_entries(TARGET cudf PATHS "${cython_lib_dir}")
endif()
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/README.md
|
# <div align="left"><img src="img/rapids_logo.png" width="90px"/> cuDF - GPU DataFrames</div>
## 📢 cuDF can now be used as a no-code-change accelerator for pandas! To learn more, see [here](https://rapids.ai/cudf-pandas/)!
cuDF is a GPU DataFrame library for loading, joining, aggregating,
filtering, and otherwise manipulating data. cuDF leverages
[libcudf](https://docs.rapids.ai/api/libcudf/stable/), a
blazing-fast C++/CUDA dataframe library and the [Apache
Arrow](https://arrow.apache.org/) columnar format to provide a
GPU-accelerated pandas API.
You can import `cudf` directly and use it like `pandas`:
```python
import cudf
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = cudf.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
Or, you can use cuDF as a no-code-change accelerator for pandas, using
[`cudf.pandas`](https://docs.rapids.ai/api/cudf/stable/cudf_pandas).
`cudf.pandas` supports 100% of the pandas API, utilizing cuDF for
supported operations and falling back to pandas when needed:
```python
%load_ext cudf.pandas # pandas operations now use the GPU!
import pandas as pd
import requests
from io import StringIO
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode("utf-8")
tips_df = pd.read_csv(StringIO(content))
tips_df["tip_percentage"] = tips_df["tip"] / tips_df["total_bill"] * 100
# display average tip by dining party size
print(tips_df.groupby("size").tip_percentage.mean())
```
## Resources
- [Try cudf.pandas now](https://nvda.ws/rapids-cudf): Explore `cudf.pandas` on a free GPU enabled instance on Google Colab!
- [Install](https://docs.rapids.ai/install): Instructions for installing cuDF and other [RAPIDS](https://rapids.ai) libraries.
- [cudf (Python) documentation](https://docs.rapids.ai/api/cudf/stable/)
- [libcudf (C++/CUDA) documentation](https://docs.rapids.ai/api/libcudf/stable/)
- [RAPIDS Community](https://rapids.ai/learn-more/#get-involved): Get help, contribute, and collaborate.
## Installation
### CUDA/GPU requirements
* CUDA 11.2+
* NVIDIA driver 450.80.02+
* Pascal architecture or better (Compute Capability >=6.0)
### Conda
cuDF can be installed with conda (via [miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or the full [Anaconda distribution](https://www.anaconda.com/download)) from the `rapidsai` channel:
```bash
conda install -c rapidsai -c conda-forge -c nvidia \
cudf=24.02 python=3.10 cuda-version=11.8
```
We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
of our latest development branch.
Note: cuDF is supported only on Linux, and with Python versions 3.9 and later.
See the [RAPIDS installation guide](https://docs.rapids.ai/install) for more OS and version info.
## Build/Install from Source
See build [instructions](CONTRIBUTING.md#setting-up-your-build-environment).
## Contributing
Please see our [guide for contributing to cuDF](CONTRIBUTING.md).
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/setup.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
from setuptools import find_packages
from skbuild import setup
packages = find_packages(include=["cudf*", "udf_cpp*"])
setup(
packages=packages,
package_data={
key: ["VERSION", "*.pxd", "*.hpp", "*.cuh"] for key in packages
},
zip_safe=False,
)
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/cudf/python
|
rapidsai_public_repos/cudf/python/cudf/.coveragerc
|
# Configuration file for Python coverage tests
[run]
source = cudf
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/options.py
|
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
import os
import textwrap
from collections.abc import Container
from contextlib import ContextDecorator
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional
@dataclass
class Option:
default: Any
value: Any
description: str
validator: Callable
_OPTIONS: Dict[str, Option] = {}
def _env_get_int(name, default):
try:
return int(os.getenv(name, default))
except (ValueError, TypeError):
return default
def _env_get_bool(name, default):
env = os.getenv(name)
if env is None:
return default
as_a_int = _env_get_int(name, None)
env = env.lower().strip()
if env == "true" or env == "on" or as_a_int:
return True
if env == "false" or env == "off" or as_a_int == 0:
return False
return default
def _register_option(
name: str, default_value: Any, description: str, validator: Callable
):
"""Register an option.
Parameters
----------
name : str
The name of the option.
default_value : Any
The default value of the option.
description : str
A text description of the option.
validator : Callable
Called on the option value to check its validity. Should raise an
error if the value is invalid.
Raises
------
BaseException
Raised by validator if the value is invalid.
"""
validator(default_value)
_OPTIONS[name] = Option(
default_value, default_value, description, validator
)
def get_option(name: str) -> Any:
"""Get the value of option.
Parameters
----------
name : str
The name of the option.
Returns
-------
The value of the option.
Raises
------
KeyError
If option ``name`` does not exist.
"""
try:
return _OPTIONS[name].value
except KeyError:
raise KeyError(f'"{name}" does not exist.')
def set_option(name: str, val: Any):
"""Set the value of option.
Parameters
----------
name : str
The name of the option.
val : Any
The value to set.
Raises
------
KeyError
If option ``name`` does not exist.
BaseException
Raised by validator if the value is invalid.
"""
try:
option = _OPTIONS[name]
except KeyError:
raise KeyError(f'"{name}" does not exist.')
option.validator(val)
option.value = val
def _build_option_description(name, opt):
return (
f"{name}:\n"
f"\t{opt.description}\n"
f"\t[Default: {opt.default}] [Current: {opt.value}]"
)
def describe_option(name: Optional[str] = None):
"""Prints the description of an option.
If `name` is unspecified, prints the description of all available options.
Parameters
----------
name : Optional[str]
The name of the option.
"""
names = _OPTIONS.keys() if name is None else [name]
for name in names:
print(_build_option_description(name, _OPTIONS[name]))
def _make_contains_validator(valid_options: Container) -> Callable:
"""Return a validator that checks if a value is in `valid_options`."""
def _validator(val):
if val not in valid_options:
raise ValueError(
f"{val} is not a valid option. "
f"Must be one of {set(valid_options)}."
)
return _validator
def _cow_validator(val):
if get_option("spill") and val:
raise ValueError(
"Copy-on-write is not supported when spilling is enabled. "
"Please set `spill` to `False`"
)
if val not in {False, True}:
raise ValueError(
f"{val} is not a valid option. Must be one of {{False, True}}."
)
def _spill_validator(val):
try:
if get_option("copy_on_write") and val:
raise ValueError(
"Spilling is not supported when copy-on-write is enabled. "
"Please set `copy_on_write` to `False`"
)
except KeyError:
pass
if val not in {False, True}:
raise ValueError(
f"{val} is not a valid option. Must be one of {{False, True}}."
)
def _integer_validator(val):
try:
int(val)
return True
except ValueError:
raise ValueError(
f"{val} is not a valid option. " f"Must be an integer."
)
def _integer_and_none_validator(val):
try:
if val is None or int(val):
return
except ValueError:
raise ValueError(
f"{val} is not a valid option. " f"Must be an integer or None."
)
_register_option(
"default_integer_bitwidth",
None,
textwrap.dedent(
"""
Default bitwidth when the dtype of an integer needs to be
inferred. If set to `None`, the API will align dtype with pandas.
APIs that respect this option include:
\t- cudf object constructors
\t- cudf.read_csv and cudf.read_json when `dtype` is not specified.
\t- APIs that require implicit conversion of cudf.RangeIndex to an
\t integer index.
\tValid values are None, 32 or 64. Default is None.
"""
),
_make_contains_validator([None, 32, 64]),
)
_register_option(
"default_float_bitwidth",
None,
textwrap.dedent(
"""
Default bitwidth when the dtype of a float needs to be
inferred. If set to `None`, the API will align dtype with pandas.
APIs that respect this option include:
\t- cudf object constructors
\t- cudf.read_csv and cudf.read_json when `dtype` is not specified.
\tValid values are None, 32 or 64. Default is None.
"""
),
_make_contains_validator([None, 32, 64]),
)
_register_option(
"spill",
_env_get_bool("CUDF_SPILL", False),
textwrap.dedent(
"""
Enables spilling.
\tValid values are True or False. Default is False.
"""
),
_spill_validator,
)
_register_option(
"copy_on_write",
_env_get_bool("CUDF_COPY_ON_WRITE", False),
textwrap.dedent(
"""
If set to `False`, disables copy-on-write.
If set to `True`, enables copy-on-write.
Read more at: :ref:`copy-on-write-user-doc`
\tValid values are True or False. Default is False.
"""
),
_cow_validator,
)
_register_option(
"spill_on_demand",
_env_get_bool("CUDF_SPILL_ON_DEMAND", True),
textwrap.dedent(
"""
Enables spilling on demand using an RMM out-of-memory error handler.
This has no effect if spilling is disabled, see the "spill" option.
\tValid values are True or False. Default is True.
"""
),
_make_contains_validator([False, True]),
)
_register_option(
"spill_device_limit",
_env_get_int("CUDF_SPILL_DEVICE_LIMIT", None),
textwrap.dedent(
"""
Enforce a device memory limit in bytes.
This has no effect if spilling is disabled, see the "spill" option.
\tValid values are any positive integer or None (disabled).
\tDefault is None.
"""
),
_integer_and_none_validator,
)
_register_option(
"spill_stats",
_env_get_int("CUDF_SPILL_STATS", 0),
textwrap.dedent(
"""
If not 0, enables statistics at the specified level:
0 - disabled (no overhead).
1+ - duration and number of bytes spilled (very low overhead).
2+ - a traceback for each time a spillable buffer is exposed
permanently (potential high overhead).
Valid values are any positive integer.
Default is 0 (disabled).
"""
),
_integer_validator,
)
_register_option(
"mode.pandas_compatible",
False,
textwrap.dedent(
"""
If set to `False`, retains `cudf` specific behavior.
If set to `True`, enables pandas compatibility mode,
which will try to match pandas API behaviors in case of
any inconsistency.
\tValid values are True or False. Default is False.
"""
),
_make_contains_validator([False, True]),
)
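# A minimal usage sketch (added illustration, not part of the original module):
# the options registered above are read and written through the public helpers.
# The exact accepted spellings of the CUDF_SPILL environment variable depend on
# ``_env_get_bool`` and are an assumption here.
#
#   >>> import cudf
#   >>> cudf.get_option("default_integer_bitwidth")   # None unless overridden
#   >>> cudf.set_option("default_integer_bitwidth", 32)
#   >>> cudf.set_option("spill", True)   # or set CUDF_SPILL before importing cudf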
class option_context(ContextDecorator):
"""
Context manager to temporarily set options in the `with` statement context.
You need to invoke as ``option_context(pat, val, [(pat, val), ...])``.
Examples
--------
>>> from cudf import option_context
>>> with option_context('mode.pandas_compatible', True, 'default_float_bitwidth', 32):
... pass
""" # noqa: E501
def __init__(self, *args) -> None:
if len(args) % 2 != 0:
raise ValueError(
"Need to invoke as option_context(pat, val, "
"[(pat, val), ...])."
)
self.ops = tuple(zip(args[::2], args[1::2]))
def __enter__(self) -> None:
self.undo = tuple((pat, get_option(pat)) for pat, _ in self.ops)
for pat, val in self.ops:
set_option(pat, val)
def __exit__(self, *args) -> None:
for pat, val in self.undo:
set_option(pat, val)
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/_typing.py
|
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
import sys
from typing import TYPE_CHECKING, Any, Callable, Dict, Iterable, TypeVar, Union
import numpy as np
from pandas import Period, Timedelta, Timestamp
from pandas.api.extensions import ExtensionDtype
if TYPE_CHECKING:
import cudf
# Backwards compat: mypy >= 0.790 rejects Type[NotImplemented], but
# NotImplementedType is only introduced in 3.10
if sys.version_info >= (3, 10):
from types import NotImplementedType
else:
NotImplementedType = Any
# Many of these are from
# https://github.com/pandas-dev/pandas/blob/master/pandas/_typing.py
Dtype = Union["ExtensionDtype", str, np.dtype]
DtypeObj = Union["ExtensionDtype", np.dtype]
# scalars
DatetimeLikeScalar = TypeVar(
"DatetimeLikeScalar", Period, Timestamp, Timedelta
)
ScalarLike = Any
# columns
ColumnLike = Any
# binary operation
ColumnBinaryOperand = Union["cudf.Scalar", "cudf.core.column.ColumnBase"]
DataFrameOrSeries = Union["cudf.Series", "cudf.DataFrame"]
SeriesOrIndex = Union["cudf.Series", "cudf.core.index.BaseIndex"]
SeriesOrSingleColumnIndex = Union[
"cudf.Series", "cudf.core.index.GenericIndex"
]
# Groupby aggregation
AggType = Union[str, Callable]
MultiColumnAggType = Union[
AggType, Iterable[AggType], Dict[Any, Iterable[AggType]]
]
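# Hedged illustration (added, not part of the original file): the aggregation
# aliases above admit specs such as the following, e.g. when annotating
# groupby APIs.
#
#   single_agg: AggType = "mean"
#   multi_agg: MultiColumnAggType = {"x": ["min", "max"], "y": ["sum"]}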
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/errors.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
class UnsupportedCUDAError(Exception):
pass
class MixedTypeError(TypeError):
pass
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/datasets.py
|
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import cudf
from cudf._lib.transform import bools_to_mask
from cudf.core.column_accessor import ColumnAccessor
__all__ = ["timeseries", "randomdata"]
# TODO:
# change default of name from category to str type when nvstring are merged
def timeseries(
start="2000-01-01",
end="2000-01-31",
freq="1s",
dtypes=None,
nulls_frequency=0,
seed=None,
):
"""Create timeseries dataframe with random data
Parameters
----------
start : datetime (or datetime-like string)
Start of time series
end : datetime (or datetime-like string)
End of time series
dtypes : dict
Mapping of column names to types.
Valid types include {float, int, str, 'category'}.
If none is provided, this defaults to
``{"name": "category", "id": int, "x": float, "y": float}``
freq : string
String like '2s' or '1H' or '12W' for the time series frequency
nulls_frequency : float
Fill the series with the specified proportion of nulls. Default is 0.
seed : int (optional)
Randomstate seed
Examples
--------
>>> import cudf as gd
>>> gdf = gd.datasets.timeseries()
>>> gdf.head() # doctest: +SKIP
timestamp id name x y
2000-01-01 00:00:00 967 Jerry -0.031348 -0.040633
2000-01-01 00:00:01 1066 Michael -0.262136 0.307107
2000-01-01 00:00:02 988 Wendy -0.526331 0.128641
2000-01-01 00:00:03 1016 Yvonne 0.620456 0.767270
2000-01-01 00:00:04 998 Ursula 0.684902 -0.463278
"""
if dtypes is None:
dtypes = {"name": "category", "id": int, "x": float, "y": float}
index = pd.DatetimeIndex(
pd.date_range(start, end, freq=freq, name="timestamp")
)
state = np.random.RandomState(seed)
columns = {k: make[dt](len(index), state) for k, dt in dtypes.items()}
df = pd.DataFrame(columns, index=index, columns=sorted(columns))
if df.index[-1] == end:
df = df.iloc[:-1]
gdf = cudf.from_pandas(df)
for col in gdf:
mask = state.choice(
[True, False],
size=len(index),
p=[1 - nulls_frequency, nulls_frequency],
)
mask_buf = bools_to_mask(cudf.core.column.as_column(mask))
masked_col = gdf[col]._column.set_mask(mask_buf)
gdf[col] = cudf.Series._from_data(
ColumnAccessor({None: masked_col}), index=gdf.index
)
return gdf
def randomdata(nrows=10, dtypes=None, seed=None):
"""Create a dataframe with random data
Parameters
----------
nrows : int
number of rows in the dataframe
dtypes : dict
Mapping of column names to types.
Valid types include {float, int, str, 'category'}
If none is provided, this defaults to
``{"id": int, "x": float, "y": float}``
seed : int (optional)
Randomstate seed
Examples
--------
>>> import cudf as gd
>>> gdf = gd.datasets.randomdata()
>>> gdf.head() # doctest: +SKIP
id x y
0 1014 0.28361267466770146 -0.44274170661264334
1 1026 -0.9937981936047235 -0.09433464773262323
2 1038 -0.1266722796765325 0.20971126368240123
3 1002 0.9280495300010041 0.5137701393017848
4 976 0.9089527839187654 0.9881063385586304
"""
if dtypes is None:
dtypes = {"id": int, "x": float, "y": float}
state = np.random.RandomState(seed)
columns = {k: make[dt](nrows, state) for k, dt in dtypes.items()}
df = pd.DataFrame(columns, columns=sorted(columns))
return cudf.from_pandas(df)
def make_float(n, rstate):
return rstate.rand(n) * 2 - 1
def make_int(n, rstate):
return rstate.poisson(1000, size=n)
names = [
"Alice",
"Bob",
"Charlie",
"Dan",
"Edith",
"Frank",
"George",
"Hannah",
"Ingrid",
"Jerry",
"Kevin",
"Laura",
"Michael",
"Norbert",
"Oliver",
"Patricia",
"Quinn",
"Ray",
"Sarah",
"Tim",
"Ursula",
"Victor",
"Wendy",
"Xavier",
"Yvonne",
"Zelda",
]
def make_string(n, rstate):
return rstate.choice(names, size=n)
def make_categorical(n, rstate):
return pd.Categorical.from_codes(
rstate.randint(0, len(names), size=n), names
)
def make_bool(n, rstate):
return rstate.choice([True, False], size=n)
make = {
float: make_float,
int: make_int,
str: make_string,
object: make_string,
"category": make_categorical,
bool: make_bool,
}
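# Hedged illustration (added, not part of the original module): the ``make``
# mapping above lets a dtype key drive column generation directly, e.g.
#
#   >>> state = np.random.RandomState(0)
#   >>> make[float](5, state) # doctest: +SKIP
#   >>> make["category"](5, state) # doctest: +SKIP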
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/_version.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib.resources
__version__ = (
importlib.resources.files("cudf").joinpath("VERSION").read_text().strip()
)
__git_commit__ = ""
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/__init__.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
# _setup_numba must be called before numba.cuda is imported, because
# it sets the numba config variable responsible for enabling
# Minor Version Compatibility. Setting it after importing numba.cuda has no effect.
from cudf.utils._numba import _setup_numba
from cudf.utils.gpu_utils import validate_setup
_setup_numba()
validate_setup()
import cupy
from numba import config as numba_config, cuda
import rmm
from rmm.allocators.cupy import rmm_cupy_allocator
from rmm.allocators.numba import RMMNumbaManager
from cudf import api, core, datasets, testing
from cudf._version import __git_commit__, __version__
from cudf.api.extensions import (
register_dataframe_accessor,
register_index_accessor,
register_series_accessor,
)
from cudf.api.types import dtype
from cudf.core.algorithms import factorize
from cudf.core.cut import cut
from cudf.core.dataframe import DataFrame, from_dataframe, from_pandas, merge
from cudf.core.dtypes import (
CategoricalDtype,
Decimal32Dtype,
Decimal64Dtype,
Decimal128Dtype,
IntervalDtype,
ListDtype,
StructDtype,
)
from cudf.core.groupby import Grouper
from cudf.core.index import (
BaseIndex,
CategoricalIndex,
DatetimeIndex,
Float32Index,
Float64Index,
GenericIndex,
Index,
Int8Index,
Int16Index,
Int32Index,
Int64Index,
IntervalIndex,
RangeIndex,
StringIndex,
TimedeltaIndex,
UInt8Index,
UInt16Index,
UInt32Index,
UInt64Index,
interval_range,
)
from cudf.core.missing import NA, NaT
from cudf.core.multiindex import MultiIndex
from cudf.core.reshape import (
concat,
crosstab,
get_dummies,
melt,
pivot,
pivot_table,
unstack,
)
from cudf.core.scalar import Scalar
from cudf.core.series import Series, isclose
from cudf.core.tools.datetimes import DateOffset, date_range, to_datetime
from cudf.core.tools.numeric import to_numeric
from cudf.io import (
from_dlpack,
read_avro,
read_csv,
read_feather,
read_hdf,
read_json,
read_orc,
read_parquet,
read_text,
)
from cudf.options import (
describe_option,
get_option,
option_context,
set_option,
)
from cudf.utils.utils import clear_cache
cuda.set_memory_manager(RMMNumbaManager)
cupy.cuda.set_allocator(rmm_cupy_allocator)
rmm.register_reinitialize_hook(clear_cache)
__all__ = [
"BaseIndex",
"CategoricalDtype",
"CategoricalIndex",
"DataFrame",
"DateOffset",
"DatetimeIndex",
"Decimal32Dtype",
"Decimal64Dtype",
"Float32Index",
"Float64Index",
"GenericIndex",
"Grouper",
"Index",
"Int16Index",
"Int32Index",
"Int64Index",
"Int8Index",
"IntervalDtype",
"IntervalIndex",
"ListDtype",
"MultiIndex",
"NA",
"NaT",
"RangeIndex",
"Scalar",
"Series",
"StringIndex",
"StructDtype",
"TimedeltaIndex",
"UInt16Index",
"UInt32Index",
"UInt64Index",
"UInt8Index",
"api",
"concat",
"crosstab",
"cut",
"date_range",
"describe_option",
"factorize",
"from_dataframe",
"from_dlpack",
"from_pandas",
"get_dummies",
"get_option",
"interval_range",
"isclose",
"melt",
"merge",
"pivot",
"pivot_table",
"read_avro",
"read_csv",
"read_feather",
"read_hdf",
"read_json",
"read_orc",
"read_parquet",
"read_text",
"set_option",
"testing",
"to_datetime",
"to_numeric",
"unstack",
]
| 0 |
rapidsai_public_repos/cudf/python/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/VERSION
|
24.02.00
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/_ptxcompiler.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import os
import subprocess
import sys
import warnings
NO_DRIVER = (math.inf, math.inf)
NUMBA_CHECK_VERSION_CMD = """\
from ctypes import c_int, byref
from numba import cuda
dv = c_int(0)
cuda.cudadrv.driver.driver.cuDriverGetVersion(byref(dv))
drv_major = dv.value // 1000
drv_minor = (dv.value - (drv_major * 1000)) // 10
run_major, run_minor = cuda.runtime.get_version()
print(f'{drv_major} {drv_minor} {run_major} {run_minor}')
"""
def check_disabled_in_env():
# We should avoid checking whether the patch is
# needed if the user requested that we don't check
# (e.g. in a non-fork-safe environment)
check = os.getenv("PTXCOMPILER_CHECK_NUMBA_CODEGEN_PATCH_NEEDED")
if check is not None:
try:
check = int(check)
except ValueError:
check = False
else:
check = True
return not check
def get_versions():
cp = subprocess.run(
[sys.executable, "-c", NUMBA_CHECK_VERSION_CMD], capture_output=True
)
if cp.returncode:
msg = (
f"Error getting driver and runtime versions:\n\nstdout:\n\n"
f"{cp.stdout.decode()}\n\nstderr:\n\n{cp.stderr.decode()}\n\n"
"Not patching Numba"
)
warnings.warn(msg, UserWarning)
return NO_DRIVER
versions = [int(s) for s in cp.stdout.strip().split()]
driver_version = tuple(versions[:2])
runtime_version = tuple(versions[2:])
return driver_version, runtime_version
def safe_get_versions():
"""
Return a 2-tuple of deduced driver and runtime versions.
To ensure that this function does not initialize a CUDA context,
calls to the runtime and driver are made in a subprocess.
If PTXCOMPILER_CHECK_NUMBA_CODEGEN_PATCH_NEEDED is set
in the environment, then this subprocess call is not launched.
To specify the driver and runtime versions of the environment
in this case, set PTXCOMPILER_KNOWN_DRIVER_VERSION and
PTXCOMPILER_KNOWN_RUNTIME_VERSION appropriately.
"""
if check_disabled_in_env():
try:
# allow user to specify driver/runtime
# versions manually, if necessary
driver_version = os.environ[
"PTXCOMPILER_KNOWN_DRIVER_VERSION"
].split(".")
runtime_version = os.environ[
"PTXCOMPILER_KNOWN_RUNTIME_VERSION"
].split(".")
driver_version, runtime_version = (
tuple(map(int, driver_version)),
tuple(map(int, runtime_version)),
)
except (KeyError, ValueError):
warnings.warn(
"No way to determine driver and runtime versions for "
"patching, set PTXCOMPILER_KNOWN_DRIVER_VERSION and "
"PTXCOMPILER_KNOWN_RUNTIME_VERSION"
)
return NO_DRIVER
else:
driver_version, runtime_version = get_versions()
return driver_version, runtime_version
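# Hedged usage sketch (added illustration, not part of the original module):
# skipping the subprocess probe by declaring the versions up front. The "11.8"
# values are placeholders for whatever driver/runtime pair is actually present.
#
#   >>> import os
#   >>> os.environ["PTXCOMPILER_CHECK_NUMBA_CODEGEN_PATCH_NEEDED"] = "0"
#   >>> os.environ["PTXCOMPILER_KNOWN_DRIVER_VERSION"] = "11.8"
#   >>> os.environ["PTXCOMPILER_KNOWN_RUNTIME_VERSION"] = "11.8"
#   >>> safe_get_versions()
#   ((11, 8), (11, 8))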
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/ioutils.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import datetime
import os
import urllib
import warnings
from io import BufferedWriter, BytesIO, IOBase, TextIOWrapper
from threading import Thread
import fsspec
import fsspec.implementations.local
import numpy as np
import pandas as pd
from fsspec.core import get_fs_token_paths
from pyarrow import PythonFile as ArrowPythonFile
from pyarrow.lib import NativeFile
from cudf.utils.docutils import docfmt_partial
try:
import fsspec.parquet as fsspec_parquet
except ImportError:
fsspec_parquet = None
_BYTES_PER_THREAD_DEFAULT = 256 * 1024 * 1024
_ROW_GROUP_SIZE_BYTES_DEFAULT = 128 * 1024 * 1024
_docstring_remote_sources = """
- cuDF supports local and remote data stores. See configuration details for
available sources
`here <https://docs.dask.org/en/latest/remote-data-services.html>`__.
"""
_docstring_read_avro = """
Load an Avro dataset into a DataFrame
Parameters
----------
filepath_or_buffer : str, path object, bytes, or file-like object
Either a path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
Python bytes of raw binary data, or any object with a `read()` method
(such as builtin `open()` file handler function or `BytesIO`).
columns : list, default None
If not None, only these columns will be read.
skiprows : int, default None
If not None, the number of rows to skip from the start of the file.
num_rows : int, default None
If not None, the total number of rows to read.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
Returns
-------
DataFrame
Notes
-----
{remote_data_sources}
Examples
--------
>>> import pandavro
>>> import pandas as pd
>>> import cudf
>>> pandas_df = pd.DataFrame()
>>> pandas_df['numbers'] = [10, 20, 30]
>>> pandas_df['text'] = ["hello", "rapids", "ai"]
>>> pandas_df
numbers text
0 10 hello
1 20 rapids
2 30 ai
>>> pandavro.to_avro("data.avro", pandas_df)
>>> cudf.read_avro("data.avro")
numbers text
0 10 hello
1 20 rapids
2 30 ai
""".format(
remote_data_sources=_docstring_remote_sources
)
doc_read_avro = docfmt_partial(docstring=_docstring_read_avro)
_docstring_read_parquet_metadata = """
Read a Parquet file's metadata and schema
Parameters
----------
path : string or path object
Path of file to be read
Returns
-------
Total number of rows
Number of row groups
List of column names
Examples
--------
>>> import cudf
>>> num_rows, num_row_groups, names = cudf.io.read_parquet_metadata(filename)
>>> df = [cudf.read_parquet(filename, row_groups=i) for i in range(num_row_groups)]
>>> df = cudf.concat(df)
>>> df
num1 datetime text
0 123 2018-11-13T12:00:00.000 5451
1 456 2018-11-14T12:35:01.000 5784
2 789 2018-11-15T18:02:59.000 6117
See Also
--------
cudf.read_parquet
"""
doc_read_parquet_metadata = docfmt_partial(
docstring=_docstring_read_parquet_metadata
)
_docstring_read_parquet = """
Load a Parquet dataset into a DataFrame
Parameters
----------
filepath_or_buffer : str, path object, bytes, file-like object, or a list
of such objects.
Contains one or more of the following: either a path to a file (a `str`,
`pathlib.Path`, or `py._path.local.LocalPath`), URL (including http, ftp,
and S3 locations), Python bytes of raw binary data, or any object with a
`read()` method (such as builtin `open()` file handler function or
`BytesIO`).
engine : {{ 'cudf', 'pyarrow' }}, default 'cudf'
Parser engine to use.
columns : list, default None
If not None, only these columns will be read.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
filters : list of tuple, list of lists of tuples, default None
If not None, specifies a filter predicate used to filter out row groups
using statistics stored for each row group as Parquet metadata. Row groups
that do not match the given filter predicate are not read. The filters
will also be applied to the rows of the in-memory DataFrame after IO.
The predicate is expressed in disjunctive normal form (DNF) like
`[[('x', '=', 0), ...], ...]`. DNF allows arbitrary boolean logical
combinations of single column predicates. The innermost tuples each
describe a single column predicate. The list of inner predicates is
interpreted as a conjunction (AND), forming a more selective and
multiple column predicate. Finally, the most outer list combines
these filters as a disjunction (OR). Predicates may also be passed
as a list of tuples. This form is interpreted as a single conjunction.
To express OR in predicates, one must use the (preferred) notation of
list of lists of tuples.
row_groups : int, or list, or a list of lists default None
If not None, specifies, for each input file, which row groups to read.
If reading multiple inputs, a list of lists should be passed, one list
for each input.
categorical_partitions : boolean, default True
Whether directory-partitioned columns should be interpreted as categorical
or raw dtypes.
use_pandas_metadata : boolean, default True
If True and dataset has custom PANDAS schema metadata, ensure that index
columns are also loaded.
use_python_file_object : boolean, default True
If True, Arrow-backed PythonFile objects will be used in place of fsspec
AbstractBufferedFile objects at IO time. Setting this argument to `False`
will require the entire file to be copied to host memory, and is highly
discouraged.
open_file_options : dict, optional
Dictionary of key-value pairs to pass to the function used to open remote
files. By default, this will be `fsspec.parquet.open_parquet_file`. To
deactivate optimized precaching, set the "method" to `None` under the
"precache_options" key. Note that the `open_file_func` key can also be
used to specify a custom file-open function.
bytes_per_thread : int, default None
Determines the number of bytes to be allocated per thread to read the
files in parallel. When there is a file of large size, we get slightly
better throughput by decomposing it and transferring multiple "blocks"
in parallel (using a python thread pool). Default allocation is
{bytes_per_thread} bytes.
This parameter is functional only when `use_python_file_object=False`.
Returns
-------
DataFrame
Notes
-----
{remote_data_sources}
Examples
--------
>>> import cudf
>>> df = cudf.read_parquet(filename)
>>> df
num1 datetime text
0 123 2018-11-13T12:00:00.000 5451
1 456 2018-11-14T12:35:01.000 5784
2 789 2018-11-15T18:02:59.000 6117
See Also
--------
cudf.io.parquet.read_parquet_metadata
cudf.DataFrame.to_parquet
cudf.read_orc
""".format(
remote_data_sources=_docstring_remote_sources,
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT,
)
doc_read_parquet = docfmt_partial(docstring=_docstring_read_parquet)
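# Hedged illustration of the DNF ``filters`` argument documented above (added
# sketch, not part of the original docstring); ``filename`` is a placeholder.
#
#   >>> # keep row groups where (x == 0 AND y > 5) OR (x == 3)
#   >>> filters = [[("x", "=", 0), ("y", ">", 5)], [("x", "=", 3)]]
#   >>> df = cudf.read_parquet(filename, filters=filters) # doctest: +SKIP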
_docstring_to_parquet = """
Write a DataFrame to the parquet format.
Parameters
----------
path : str or list of str
File path or Root Directory path. Will be used as Root Directory path
while writing a partitioned dataset. Use list of str with partition_offsets
to write parts of the dataframe to different files.
compression : {{'snappy', 'ZSTD', None}}, default 'snappy'
Name of the compression to use. Use ``None`` for no compression.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output.
If ``False``, they will not be written to the file.
If ``None``, similar to ``True`` the dataframe's index(es) will
be saved, however, instead of being saved as values any
``RangeIndex`` will be stored as a range in the metadata so it
doesn't require much space and is faster. Other indexes will
be included as columns in the file output.
partition_cols : list, optional, default None
Column names by which to partition the dataset
Columns are partitioned in the order they are given
partition_file_name : str, optional, default None
File name to use for partitioned datasets. Different partitions
will be written to different directories, but all files will
have this name. If nothing is specified, a random uuid4 hex string
will be used for each file.
partition_offsets : list, optional, default None
Offsets to partition the dataframe by. Should be used when path is list
of str. Should be a list of integers of size ``len(path) + 1``
statistics : {{'ROWGROUP', 'PAGE', 'COLUMN', 'NONE'}}, default 'ROWGROUP'
Level at which column statistics should be included in file.
metadata_file_path : str, optional, default None
If specified, this function will return a binary blob containing the footer
metadata of the written parquet file. The returned blob will have the
``chunk.file_path`` field set to the ``metadata_file_path`` for each chunk.
When using with ``partition_offsets``, should be same size as ``len(path)``
int96_timestamps : bool, default False
If ``True``, write timestamps in int96 format. This will convert
timestamps from timestamp[ns], timestamp[ms], timestamp[s], and
timestamp[us] to the int96 format, which is the number of Julian
days and the number of nanoseconds since midnight of 1970-01-01.
If ``False``, timestamps will not be altered.
row_group_size_bytes: integer, default {row_group_size_bytes_val}
Maximum size of each row group of the output.
If None, {row_group_size_bytes_val}
({row_group_size_bytes_val_in_mb} MB) will be used.
row_group_size_rows: integer or None, default None
Maximum number of rows of each row group of the output.
If None, 1000000 will be used.
max_page_size_bytes: integer or None, default None
Maximum uncompressed size of each page of the output.
If None, 524288 (512KB) will be used.
max_page_size_rows: integer or None, default None
Maximum number of rows of each page of the output.
If None, 20000 will be used.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
return_metadata : bool, default False
Return parquet metadata for written data. Returned metadata will
include the file path metadata (relative to `root_path`).
To request metadata binary blob when using with ``partition_cols``, Pass
``return_metadata=True`` instead of specifying ``metadata_file_path``
use_dictionary : bool, default True
When ``False``, prevents the use of dictionary encoding for Parquet page
data. When ``True``, dictionary encoding is preferred when not disabled due
to dictionary size constraints.
header_version : {{'1.0', '2.0'}}, default "1.0"
Controls whether to use version 1.0 or version 2.0 page headers when
encoding. Version 1.0 is more portable, but version 2.0 enables the
use of newer encoding schemes.
force_nullable_schema : bool, default False.
If True, writes all columns as `null` in schema.
If False, columns are written as `null` if they contain null values,
otherwise as `not null`.
**kwargs
Additional parameters will be passed to execution engines other
than ``cudf``.
See Also
--------
cudf.read_parquet
""".format(
row_group_size_bytes_val=_ROW_GROUP_SIZE_BYTES_DEFAULT,
row_group_size_bytes_val_in_mb=_ROW_GROUP_SIZE_BYTES_DEFAULT / 1024 / 1024,
)
doc_to_parquet = docfmt_partial(docstring=_docstring_to_parquet)
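# Hedged usage sketch mirroring the parameters documented above (added
# illustration, not part of the original docstring); "out.parquet" is an
# assumed writable local path.
#
#   >>> import cudf
#   >>> df = cudf.DataFrame({"num": [1, 2, 3], "txt": ["a", "b", "c"]})
#   >>> df.to_parquet("out.parquet", compression="snappy",
#   ... statistics="ROWGROUP") # doctest: +SKIP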
_docstring_merge_parquet_filemetadata = """
Merge multiple parquet metadata blobs
Parameters
----------
metadata_list : list
List of buffers returned by to_parquet
Returns
-------
Combined parquet metadata blob
See Also
--------
cudf.DataFrame.to_parquet
"""
doc_merge_parquet_filemetadata = docfmt_partial(
docstring=_docstring_merge_parquet_filemetadata
)
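# Hedged sketch (added illustration, not part of the original docstring):
# combining the footer blobs returned by ``to_parquet`` when
# ``metadata_file_path`` is given; paths and frames are placeholders.
#
#   >>> md0 = df0.to_parquet("part.0.parquet", metadata_file_path="part.0.parquet") # doctest: +SKIP
#   >>> md1 = df1.to_parquet("part.1.parquet", metadata_file_path="part.1.parquet") # doctest: +SKIP
#   >>> merged = cudf.io.parquet.merge_parquet_filemetadata([md0, md1]) # doctest: +SKIP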
_docstring_read_orc_metadata = """
Read an ORC file's metadata and schema
Parameters
----------
path : string or path object
Path of file to be read
Returns
-------
Total number of rows
Number of stripes
List of column names
Notes
-----
{remote_data_sources}
Examples
--------
>>> import cudf
>>> num_rows, stripes, names = cudf.io.read_orc_metadata(filename)
>>> df = [cudf.read_orc(filename, stripes=[i]) for i in range(stripes)]
>>> df = cudf.concat(df)
>>> df
num1 datetime text
0 123 2018-11-13T12:00:00.000 5451
1 456 2018-11-14T12:35:01.000 5784
2 789 2018-11-15T18:02:59.000 6117
See Also
--------
cudf.read_orc
"""
doc_read_orc_metadata = docfmt_partial(docstring=_docstring_read_orc_metadata)
_docstring_read_orc_statistics = """
Read an ORC file's file-level and stripe-level statistics
Parameters
----------
filepath_or_buffer : str, path object, bytes, or file-like object
Either a path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
Python bytes of raw binary data, or any object with a `read()` method
(such as builtin `open()` file handler function or `BytesIO`).
columns : list, default None
If not None, statistics for only these columns will be read from the file.
Returns
-------
Statistics for each column of given file
Statistics for each column for each stripe of given file
See Also
--------
cudf.read_orc
"""
doc_read_orc_statistics = docfmt_partial(
docstring=_docstring_read_orc_statistics
)
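# Hedged usage sketch (added illustration, not part of the original docstring);
# ``filename`` is a placeholder ORC path, the single-element list form is an
# assumption, and the unpacking order follows the Returns section above.
#
#   >>> file_stats, stripe_stats = cudf.io.orc.read_orc_statistics([filename]) # doctest: +SKIP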
_docstring_read_orc = """
Load an ORC dataset into a DataFrame
Parameters
----------
filepath_or_buffer : str, path object, bytes, or file-like object
Either a path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
Python bytes of raw binary data, or any object with a `read()` method
(such as builtin `open()` file handler function or `BytesIO`).
engine : {{ 'cudf', 'pyarrow' }}, default 'cudf'
Parser engine to use.
columns : list, default None
If not None, only these columns will be read from the file.
filters : list of tuple, list of lists of tuples default None
If not None, specifies a filter predicate used to filter out stripes
using the statistics stored in the ORC file metadata. Stripes
that do not match the given filter predicate are not read. The
predicate is expressed in disjunctive normal form (DNF) like
`[[('x', '=', 0), ...], ...]`. DNF allows arbitrary boolean logical
combinations of single column predicates. The innermost tuples each
describe a single column predicate. The list of inner predicates is
interpreted as a conjunction (AND), forming a more selective and
multiple column predicate. Finally, the outermost list combines
these filters as a disjunction (OR). Predicates may also be passed
as a list of tuples. This form is interpreted as a single conjunction.
To express OR in predicates, one must use the (preferred) notation of
list of lists of tuples.
stripes: list, default None
If not None, only these stripes will be read from the file. Stripes are
concatenated with index ignored.
skiprows : int, default None
If not None, the number of rows to skip from the start of the file.
This parameter is deprecated.
num_rows : int, default None
If not None, the total number of rows to read.
This parameter is deprecated.
use_index : bool, default True
If True, use row index if available for faster seeking.
use_python_file_object : boolean, default True
If True, Arrow-backed PythonFile objects will be used in place of fsspec
AbstractBufferedFile objects at IO time. This option is likely to improve
performance when making small reads from larger ORC files.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
bytes_per_thread : int, default None
Determines the number of bytes to be allocated per thread to read the
files in parallel. When there is a file of large size, we get slightly
better throughput by decomposing it and transferring multiple "blocks"
in parallel (using a python thread pool). Default allocation is
{bytes_per_thread} bytes.
This parameter is functional only when `use_python_file_object=False`.
Returns
-------
DataFrame
Notes
-----
{remote_data_sources}
Examples
--------
>>> import cudf
>>> df = cudf.read_orc(filename)
>>> df
num1 datetime text
0 123 2018-11-13T12:00:00.000 5451
1 456 2018-11-14T12:35:01.000 5784
2 789 2018-11-15T18:02:59.000 6117
See Also
--------
cudf.DataFrame.to_orc
""".format(
remote_data_sources=_docstring_remote_sources,
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT,
)
doc_read_orc = docfmt_partial(docstring=_docstring_read_orc)
_docstring_to_orc = """
Write a DataFrame to the ORC format.
Parameters
----------
fname : str
File path or object where the ORC dataset will be stored.
compression : {{ 'snappy', 'ZSTD', None }}, default 'snappy'
Name of the compression to use. Use None for no compression.
statistics: str {{ "ROWGROUP", "STRIPE", None }}, default "ROWGROUP"
The granularity with which column statistics must
be written to the file.
stripe_size_bytes: integer or None, default None
Maximum size of each stripe of the output.
If None, 67108864 (64MB) will be used.
stripe_size_rows: integer or None, default None
Maximum number of rows of each stripe of the output.
If None, 1000000 will be used.
row_index_stride: integer or None, default None
Row index stride (maximum number of rows in each row group).
If None, 10000 will be used.
cols_as_map_type : list of column names or None, default None
A list of column names which should be written as map type in the ORC file.
Note that this option only affects columns of ListDtype. Names of other
column types will be ignored.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
index : bool, default None
If ``True``, include the dataframe's index(es) in the file output.
If ``False``, they will not be written to the file.
If ``None``, similar to ``True`` the dataframe's index(es) will
be saved, however, instead of being saved as values any
``RangeIndex`` will be stored as a range in the metadata so it
doesn't require much space and is faster. Other indexes will
be included as columns in the file output.
See Also
--------
cudf.read_orc
"""
doc_to_orc = docfmt_partial(docstring=_docstring_to_orc)
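# Hedged usage sketch (added illustration, not part of the original docstring);
# "out.orc" is an assumed writable local path.
#
#   >>> import cudf
#   >>> df = cudf.DataFrame({"num": [1, 2, 3], "txt": ["a", "b", "c"]})
#   >>> df.to_orc("out.orc", compression="snappy", statistics="ROWGROUP") # doctest: +SKIP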
_docstring_read_json = r"""
Load a JSON dataset into a DataFrame
Parameters
----------
path_or_buf : list, str, path object, or file-like object
Either JSON data in a `str`, path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
or any object with a `read()` method (such as builtin `open()` file handler
function or `StringIO`). Multiple inputs may be provided as a list. If a
list is specified each list entry may be of a different input type as long
as each input is of a valid type and all input JSON schema(s) match.
engine : {{ 'auto', 'cudf', 'cudf_legacy', 'pandas' }}, default 'auto'
Parser engine to use. If 'auto' is passed, the engine will be
automatically selected based on the other parameters. See notes below.
orient : string
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
Indication of expected JSON string format.
Compatible JSON strings can be produced by ``to_json()`` with a
corresponding orient value.
The set of possible orients is:
- ``'split'`` : dict like
``{index -> [index], columns -> [columns], data -> [values]}``
- ``'records'`` : list like
``[{column -> value}, ... , {column -> value}]``
- ``'index'`` : dict like ``{index -> {column -> value}}``
- ``'columns'`` : dict like ``{column -> {index -> value}}``
- ``'values'`` : just the values array
The allowed and default values depend on the value
of the `typ` parameter.
* when ``typ == 'series'``,
- allowed orients are ``{'split','records','index'}``
- default is ``'index'``
- The Series index must be unique for orient ``'index'``.
* when ``typ == 'frame'``,
- allowed orients are ``{'split','records','index',
'columns','values', 'table'}``
- default is ``'columns'``
- The DataFrame index must be unique for orients ``'index'`` and
``'columns'``.
- The DataFrame columns must be unique for orients ``'index'``,
``'columns'``, and ``'records'``.
typ : type of object to recover (series or frame), default 'frame'
With cudf engine, only frame output is supported.
dtype : boolean or dict, default None
If True, infer dtypes for all columns; if False, then don't infer dtypes at all;
if a dict, provide a mapping from column names to their respective dtype (any missing
columns will have their dtype inferred). Applies only to the data.
For all ``orient`` values except ``'table'``, default is ``True``.
convert_axes : boolean, default True
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
Try to convert the axes to the proper dtypes.
convert_dates : boolean, default True
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
List of columns to parse for dates; If True, then try
to parse datelike columns default is True; a column label is datelike if
* it ends with ``'_at'``,
* it ends with ``'_time'``,
* it begins with ``'timestamp'``,
* it is ``'modified'``, or
* it is ``'date'``
keep_default_dates : boolean, default True
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
If parsing dates, parse the default datelike columns.
numpy : boolean, default False
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
Direct decoding to numpy arrays. Supports numeric
data only, but non-numeric column and index labels are supported. Note
also that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
Set to enable usage of higher precision (strtod) function when
decoding string to double values (pandas engine only). Default (False)
is to use fast but less precise builtin functionality
date_unit : string, default None
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
The timestamp unit to detect if converting dates.
The default behavior is to try and detect the correct precision, but if
this is not desired then pass one of 's', 'ms', 'us' or 'ns' to force
parsing only seconds, milliseconds, microseconds or nanoseconds.
encoding : str, default is 'utf-8'
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
The encoding to use to decode py3 bytes.
With cudf engine, only utf-8 is supported.
lines : boolean, default False
Read the file as a json object per line.
chunksize : integer, default None
.. admonition:: Not GPU-accelerated
This parameter is only supported with ``engine='pandas'``.
Return JsonReader object for iteration.
See the `line-delimited json docs
<http://pandas.pydata.org/pandas-docs/stable/io.html#io-jsonl>`_
for more information on ``chunksize``.
This can only be passed if `lines=True`.
If this is None, the file will be read into memory all at once.
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then use
gzip, bz2, zip or xz if path_or_buf is a string ending in
'.gz', '.bz2', '.zip', or 'xz', respectively, and no decompression
otherwise. If using 'zip', the ZIP file must contain only one data
file to be read in. Set to None for no decompression.
byte_range : list or tuple, default None
.. admonition:: GPU-accelerated
This parameter is only supported with ``engine='cudf'``.
Byte range within the input file to be read.
The first number is the offset in bytes, the second number is the range
size in bytes. Set the size to zero to read all data after the offset
location. Reads the row that starts before or at the end of the range,
even if it ends after the end of the range.
keep_quotes : bool, default False
.. admonition:: GPU-accelerated feature
This parameter is only supported with ``engine='cudf'``.
If `True`, any string values are read literally (and wrapped in an
additional set of quotes).
If `False`, string values are parsed into Python strings.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
Returns
-------
result : Series or DataFrame, depending on the value of `typ`.
Notes
-----
When `engine='auto'` and `lines=False`, the `pandas` json
reader will be used. To override the selection, please
use `engine='cudf'`.
See Also
--------
cudf.DataFrame.to_json
Examples
--------
>>> import cudf
>>> df = cudf.DataFrame({'a': ["hello", "rapids"], 'b': ["hello", "worlds"]})
>>> df
a b
0 hello hello
1 rapids worlds
>>> json_str = df.to_json(orient='records', lines=True)
>>> json_str
'{"a":"hello","b":"hello"}\n{"a":"rapids","b":"worlds"}\n'
>>> cudf.read_json(json_str, engine="cudf", lines=True)
a b
0 hello hello
1 rapids worlds
To read the strings with additional set of quotes:
>>> cudf.read_json(json_str, engine="cudf", lines=True,
... keep_quotes=True)
a b
0 "hello" "hello"
1 "rapids" "worlds"
Reading a JSON string containing ordered lists and name/value pairs:
>>> json_str = '[{"list": [0,1,2], "struct": {"k":"v1"}}, {"list": [3,4,5], "struct": {"k":"v2"}}]'
>>> cudf.read_json(json_str, engine='cudf')
list struct
0 [0, 1, 2] {'k': 'v1'}
1 [3, 4, 5] {'k': 'v2'}
Reading JSON Lines data containing ordered lists and name/value pairs:
>>> json_str = '{"a": [{"k1": "v1"}]}\n{"a": [{"k1":"v2"}]}'
>>> cudf.read_json(json_str, engine='cudf', lines=True)
a
0 [{'k1': 'v1'}]
1 [{'k1': 'v2'}]
Using the `dtype` argument to specify type casting:
>>> json_str = '{"k1": 1, "k2":[1.5]}'
>>> cudf.read_json(json_str, engine='cudf', lines=True, dtype={'k1':float, 'k2':cudf.ListDtype(int)})
k1 k2
0 1.0 [1]
""" # noqa: E501
doc_read_json = docfmt_partial(docstring=_docstring_read_json)
_docstring_to_json = """
Convert the cuDF object to a JSON string.
Note nulls and NaNs will be converted to null and datetime objects
will be converted to UNIX timestamps.
Parameters
----------
path_or_buf : string or file handle, optional
File path or object. If not specified, the result is returned as a string.
engine : {{ 'auto', 'cudf', 'pandas' }}, default 'auto'
Parser engine to use. If 'auto' is passed, the `pandas` engine
will be selected.
orient : string
Indication of expected JSON string format.
* Series
- default is 'index'
- allowed values are: {'split','records','index','table'}
* DataFrame
- default is 'columns'
- allowed values are:
{'split','records','index','columns','values','table'}
* The format of the JSON string
- 'split' : dict like {'index' -> [index],
'columns' -> [columns], 'data' -> [values]}
- 'records' : list like
[{column -> value}, ... , {column -> value}]
- 'index' : dict like {index -> {column -> value}}
- 'columns' : dict like {column -> {index -> value}}
- 'values' : just the values array
- 'table' : dict like {'schema': {schema}, 'data': {data}}
describing the data, and the data component is
like ``orient='records'``.
date_format : {None, 'epoch', 'iso'}
Type of date conversion. 'epoch' = epoch milliseconds,
'iso' = ISO8601. The default depends on the `orient`. For
``orient='table'``, the default is 'iso'. For all other orients,
the default is 'epoch'.
double_precision : int, default 10
The number of decimal places to use when encoding
floating point values.
force_ascii : bool, default True
Force encoded string to be ASCII.
date_unit : string, default 'ms' (milliseconds)
The time unit to encode to, governs timestamp and ISO8601
precision. One of 's', 'ms', 'us', 'ns' for second, millisecond,
microsecond, and nanosecond respectively.
default_handler : callable, default None
Handler to call if object cannot otherwise be converted to a
suitable format for JSON. Should receive a single argument which is
the object to convert and return a serializable object.
lines : bool, default False
If 'orient' is 'records' write out line delimited json format. Will
throw ValueError if incorrect 'orient' since others are not list
like.
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}
A string representing the compression to use in the output file,
only used when the first argument is a filename. By default, the
compression is inferred from the filename.
index : bool, default True
Whether to include the index values in the JSON string. Not
including the index (``index=False``) is only supported when
orient is 'split' or 'table'.
See Also
--------
cudf.read_json
"""
doc_to_json = docfmt_partial(docstring=_docstring_to_json)
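# Hedged usage sketch (added illustration, not part of the original docstring):
#
#   >>> import cudf
#   >>> df = cudf.DataFrame({"a": [1, 2], "b": ["x", "y"]})
#   >>> df.to_json(orient="records", lines=True) # doctest: +SKIP
#   >>> df.to_json("out.json", orient="records", lines=True) # doctest: +SKIP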
_docstring_read_hdf = """
Read from the store, close it if we opened it.
Retrieve pandas object stored in file, optionally based on where
criteria
Parameters
----------
path_or_buf : string, buffer or path object
Path to the file to open, or an open `HDFStore
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables>`_.
object.
Supports any object implementing the ``__fspath__`` protocol.
This includes :class:`pathlib.Path` and py._path.local.LocalPath
objects.
key : object, optional
The group identifier in the store. Can be omitted if the HDF file
contains a single pandas object.
mode : {'r', 'r+', 'a'}, optional
Mode to use when opening the file. Ignored if path_or_buf is a
`pandas HDFStore
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables>`_.
Default is 'r'.
where : list, optional
A list of Term (or convertible) objects.
start : int, optional
Row number to start selection.
stop : int, optional
Row number to stop selection.
columns : list, optional
A list of columns names to return.
iterator : bool, optional
Return an iterator object.
chunksize : int, optional
Number of rows to include in an iteration when using an iterator.
errors : str, default 'strict'
Specifies how encoding and decoding errors are to be handled.
See the errors argument for :func:`open` for a full list
of options.
**kwargs
Additional keyword arguments passed to HDFStore.
Returns
-------
item : object
The selected object. Return type depends on the object stored.
See Also
--------
cudf.DataFrame.to_hdf : Write a HDF file from a DataFrame.
"""
doc_read_hdf = docfmt_partial(docstring=_docstring_read_hdf)
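# Hedged usage sketch (added illustration, not part of the original docstring);
# assumes "data.h5" was written beforehand, e.g. by ``DataFrame.to_hdf``.
#
#   >>> df = cudf.read_hdf("data.h5", key="df") # doctest: +SKIP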
_docstring_to_hdf = """
Write the contained data to an HDF5 file using HDFStore.
Hierarchical Data Format (HDF) is self-describing, allowing an
application to interpret the structure and contents of a file with
no outside information. One HDF file can hold a mix of related objects
which can be accessed as a group or as individual objects.
In order to add another DataFrame or Series to an existing HDF file
please use append mode and a different key.
For more information see the `user guide
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables>`_.
Parameters
----------
path_or_buf : str or pandas.HDFStore
File path or HDFStore object.
key : str
Identifier for the group in the store.
mode : {'a', 'w', 'r+'}, default 'a'
Mode to open file:
- 'w': write, a new file is created (an existing file with the same name
would be deleted).
- 'a': append, an existing file is opened for reading and writing, and if
the file does not exist it is created.
- 'r+': similar to 'a', but the file must already exist.
format : {'fixed', 'table'}, default 'fixed'
Possible values:
- 'fixed': Fixed format. Fast writing/reading. Not-appendable,
nor searchable.
- 'table': Table format. Write as a PyTables Table structure
which may perform worse but allow more flexible operations
like searching / selecting subsets of the data.
append : bool, default False
For Table formats, append the input data to the existing.
data_columns : list of columns or True, optional
List of columns to create as indexed data columns for on-disk
queries, or True to use all columns. By default only the axes
of the object are indexed. See `Query via Data Columns
<https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-hdf5-query-data-columns>`_.
Applicable only to format='table'.
complevel : {0-9}, optional
Specifies a compression level for data.
A value of 0 disables compression.
complib : {'zlib', 'lzo', 'bzip2', 'blosc'}, default 'zlib'
Specifies the compression library to be used.
As of v0.20.2 these additional compressors for Blosc are supported
(default if no compressor specified: 'blosc:blosclz'):
{'blosc:blosclz', 'blosc:lz4', 'blosc:lz4hc', 'blosc:snappy',
'blosc:zlib', 'blosc:zstd'}.
Specifying a compression library which is not available issues
a ValueError.
fletcher32 : bool, default False
If applying compression use the fletcher32 checksum.
dropna : bool, default False
If true, ALL nan rows will not be written to store.
errors : str, default 'strict'
Specifies how encoding and decoding errors are to be handled.
See the errors argument for :func:`open` for a full list
of options.
See Also
--------
cudf.read_hdf : Read from HDF file.
cudf.DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
cudf.DataFrame.to_feather : Write out feather-format for DataFrames.
"""
doc_to_hdf = docfmt_partial(docstring=_docstring_to_hdf)
_docstring_read_feather = """
Load a feather object from the file path, returning a DataFrame.
Parameters
----------
path : string
File path
columns : list, default=None
If not None, only these columns will be read from the file.
Returns
-------
DataFrame
Examples
--------
>>> import cudf
>>> df = cudf.read_feather(filename)
>>> df
num1 datetime text
0 123 2018-11-13T12:00:00.000 5451
1 456 2018-11-14T12:35:01.000 5784
2 789 2018-11-15T18:02:59.000 6117
See Also
--------
cudf.DataFrame.to_feather
"""
doc_read_feather = docfmt_partial(docstring=_docstring_read_feather)
_docstring_to_feather = """
Write a DataFrame to the feather format.
Parameters
----------
path : str
File path
See Also
--------
cudf.read_feather
"""
doc_to_feather = docfmt_partial(docstring=_docstring_to_feather)
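# Hedged usage sketch (added illustration, not part of the original docstring);
# "out.feather" is an assumed writable local path.
#
#   >>> df.to_feather("out.feather") # doctest: +SKIP
#   >>> cudf.read_feather("out.feather") # doctest: +SKIP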
_docstring_to_dlpack = """
Converts a cuDF object into a DLPack tensor.
DLPack is an open-source memory tensor structure:
`dmlc/dlpack <https://github.com/dmlc/dlpack>`_.
This function takes a cuDF object and converts it to a PyCapsule object
which contains a pointer to a DLPack tensor. This function deep copies the
data into the DLPack tensor from the cuDF object.
Parameters
----------
cudf_obj : DataFrame, Series, Index, or Column
Returns
-------
pycapsule_obj : PyCapsule
Output DLPack tensor pointer which is encapsulated in a PyCapsule
object.
"""
doc_to_dlpack = docfmt_partial(docstring=_docstring_to_dlpack)
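# Hedged usage sketch (added illustration, not part of the original docstring):
# round-tripping a Series through a DLPack capsule via the module-level helpers.
#
#   >>> capsule = cudf.io.dlpack.to_dlpack(cudf.Series([1, 2, 3])) # doctest: +SKIP
#   >>> cudf.from_dlpack(capsule) # doctest: +SKIP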
_docstring_read_csv = """
Load a comma-separated-values (CSV) dataset into a DataFrame
Parameters
----------
filepath_or_buffer : str, path object, or file-like object
Either a path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
or any object with a `read()` method (such as builtin `open()` file handler
function or `StringIO`).
sep : char, default ','
Delimiter to be used.
delimiter : char, default None
Alternative argument name for sep.
header : int, default 'infer'
Row number to use as the column names. Default behavior is to infer
the column names: if no names are passed, header=0;
if column names are passed explicitly, header=None.
names : list of str, default None
List of column names to be used. Needs to include names of all columns in
the file, or names of all columns selected using `usecols` (only when
`usecols` holds integer indices). When `usecols` is not used to select
column indices, `names` can contain more names than there are columns in
the file. In this case the extra columns will only contain null rows.
index_col : int, string or False, default None
Column to use as the row labels of the DataFrame. Passing `index_col=False`
explicitly disables index column inference and discards the last column.
usecols : list of int or str, default None
Returns subset of the columns given in the list. All elements must be
either integer indices (column number) or strings that correspond to
column names. When an integer index is passed for each name in the `names`
parameter, the names are interpreted as names in the output table, not as
names in the input file.
prefix : str, default None
Prefix to add to column numbers when parsing without a header row.
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as 'X','X.1',...'X.N'.
dtype : type, str, list of types, or dict of column -> type, default None
Data type(s) for data or columns. If `dtype` is a type/str, all columns
are mapped to the particular type passed. If list, types are applied in
the same order as the column names. If dict, types are mapped to the
column names.
E.g. {{'a': np.float64, 'b': int32, 'c': 'float'}}
If `None`, dtypes are inferred from the dataset. Use `str` to preserve data
and not infer or interpret to dtype.
true_values : list, default None
Values to consider as boolean True
false_values : list, default None
Values to consider as boolean False
skipinitialspace : bool, default False
Skip spaces after delimiter.
skiprows : int, default 0
Number of rows to be skipped from the start of file.
skipfooter : int, default 0
Number of rows to be skipped at the bottom of file.
nrows : int, default None
If specified, maximum number of rows to read
na_values : scalar, str, or list-like, optional
Additional strings to recognize as nulls.
By default the following values are interpreted as
nulls: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND',
'-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN',
'<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan',
'null'.
keep_default_na : bool, default True
Whether or not to include the default NA values when parsing the data.
na_filter : bool, default True
Detect missing values (empty strings and the values in na_values).
Passing False can improve performance.
skip_blank_lines : bool, default True
If True, discard and do not parse empty lines
If False, interpret empty lines as NaN values
parse_dates : list of int or names, default None
If list of columns, then attempt to parse each entry as a date.
Columns may not always be recognized as dates, for instance due to
unusual or non-standard formats. To guarantee a date and increase parsing
speed, explicitly specify `dtype='date'` for the desired columns.
dayfirst : bool, default False
DD/MM format dates, international and European format.
compression : {{'infer', 'gzip', 'zip', None}}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then detect
compression from the following extensions: '.gz','.zip' (otherwise no
decompression). If using 'zip', the ZIP file must contain only one
data file to be read in, otherwise the first non-zero-sized file will
be used. Set to None for no decompression.
thousands : char, default None
Character used as a thousands delimiter.
decimal : char, default '.'
Character used as a decimal point.
lineterminator : char, default '\\n'
Character to indicate end of line.
quotechar : char, default '"'
Character to indicate start and end of quote item.
quoting : str or int, default 0
Controls quoting behavior. Set to one of
0 (csv.QUOTE_MINIMAL), 1 (csv.QUOTE_ALL),
2 (csv.QUOTE_NONNUMERIC) or 3 (csv.QUOTE_NONE).
Quoting is enabled with all values except 3.
doublequote : bool, default True
When quoting is enabled, indicates whether to interpret two
consecutive quotechar inside fields as single quotechar
comment : char, default None
Character used as a comments indicator. If found at the beginning of a
line, the line will be ignored altogether.
delim_whitespace : bool, default False
Determines whether to use whitespace as delimiter.
byte_range : list or tuple, default None
Byte range within the input file to be read. The first number is the
offset in bytes, the second number is the range size in bytes. Set the
size to zero to read all data after the offset location. Reads the row
that starts before or at the end of the range, even if it ends after
the end of the range.
use_python_file_object : boolean, default True
If True, Arrow-backed PythonFile objects will be used in place of fsspec
AbstractBufferedFile objects at IO time. This option is likely to improve
performance when making small reads from larger CSV files.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
bytes_per_thread : int, default None
Determines the number of bytes to be allocated per thread to read the
files in parallel. When there is a file of large size, we get slightly
better throughput by decomposing it and transferring multiple "blocks"
in parallel (using a python thread pool). Default allocation is
{bytes_per_thread} bytes.
This parameter is functional only when `use_python_file_object=False`.
Returns
-------
GPU ``DataFrame`` object.
Notes
-----
{remote_data_sources}
Examples
--------
Create a test csv file
>>> import cudf
>>> filename = 'foo.csv'
>>> lines = [
... "num1,datetime,text",
... "123,2018-11-13T12:00:00,abc",
... "456,2018-11-14T12:35:01,def",
... "789,2018-11-15T18:02:59,ghi"
... ]
>>> with open(filename, 'w') as fp:
... fp.write('\\n'.join(lines)+'\\n')
Read the file with ``cudf.read_csv``
>>> cudf.read_csv(filename)
num1 datetime text
0 123 2018-11-13T12:00:00.000 abc
1 456 2018-11-14T12:35:01.000 def
2 789 2018-11-15T18:02:59.000 ghi
See Also
--------
cudf.DataFrame.to_csv
""".format(
remote_data_sources=_docstring_remote_sources,
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT,
)
doc_read_csv = docfmt_partial(docstring=_docstring_read_csv)
_to_csv_example = """
Write a dataframe to csv.
>>> import cudf
>>> filename = 'foo.csv'
>>> df = cudf.DataFrame({'x': [0, 1, 2, 3],
... 'y': [1.0, 3.3, 2.2, 4.4],
... 'z': ['a', 'b', 'c', 'd']})
>>> df = df.set_index(cudf.Series([3, 2, 1, 0]))
>>> df.to_csv(filename)
"""
_docstring_to_csv = """
Write a dataframe to csv file format.
Parameters
----------
{df_param}
path_or_buf : str or file handle, default None
File path or object, if None is provided
the result is returned as a string.
sep : char, default ','
Delimiter to be used.
na_rep : str, default ''
String to use for null entries
columns : list of str, optional
Columns to write
header : bool, default True
Write out the column names
index : bool, default True
Write out the index as a column
encoding : str, default 'utf-8'
A string representing the encoding to use in the output file
Only 'utf-8' is currently supported
compression : str, None
A string representing the compression scheme to use in the output file
Compression while writing csv is not supported currently
lineterminator : str, optional
The newline character or character sequence to use in the output file.
Defaults to :data:`os.linesep`.
chunksize : int or None, default None
Rows to write at a time
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
Returns
-------
None or str
If `path_or_buf` is None, returns the resulting csv format as a string.
Otherwise returns None.
Notes
-----
- Follows the standard of Pandas csv.QUOTE_NONNUMERIC for all output.
- The default behaviour is to write all rows of the dataframe at once.
This can lead to memory or overflow errors for large tables. If this
happens, consider setting the ``chunksize`` argument to some
reasonable fraction of the total rows in the dataframe.
Examples
--------
{example}
See Also
--------
cudf.read_csv
"""
doc_to_csv = docfmt_partial(
docstring=_docstring_to_csv.format(
df_param="""
df : DataFrame
DataFrame object to be written to csv
""",
example=_to_csv_example,
)
)
doc_dataframe_to_csv = docfmt_partial(
docstring=_docstring_to_csv.format(df_param="", example=_to_csv_example)
)
_docstring_kafka_datasource = """
Configuration object for a Kafka Datasource
Parameters
----------
kafka_configs : dict, key/value pairs of librdkafka configuration values.
The complete list of valid configurations can be found at
https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
topic : string, case sensitive name of the Kafka topic that contains the
source data.
partition : int,
Zero-based identifier of the Kafka partition that the underlying consumer
should consume messages from. Valid values are 0 - (N-1)
start_offset : int, Kafka Topic/Partition offset that consumption
should begin at. Inclusive.
end_offset : int, Kafka Topic/Partition offset that consumption
should end at. Inclusive.
batch_timeout : int, default 10000
Maximum number of milliseconds that will be spent trying to
consume messages between the specified 'start_offset' and 'end_offset'.
delimiter : string, default None, optional delimiter to insert into the
output between kafka messages, Ex: "\n"
"""
doc_kafka_datasource = docfmt_partial(docstring=_docstring_kafka_datasource)
_docstring_text_datasource = """
Configuration object for a text Datasource
Parameters
----------
filepath_or_buffer : str, path object, or file-like object
Either a path to a file (a `str`, `pathlib.Path`, or
`py._path.local.LocalPath`), URL (including http, ftp, and S3 locations),
or any object with a `read()` method (such as builtin `open()` file handler
function or `StringIO`).
delimiter : string, default None
The delimiter that should be used for splitting text chunks into
separate cudf column rows. The delimiter may be one or more characters.
byte_range : list or tuple, default None
Byte range within the input file to be read. The first number is the
offset in bytes, the second number is the range size in bytes.
The output contains all rows that start inside the byte range
(i.e. at or after the offset, and before the end at `offset + size`),
which may include rows that continue past the end.
strip_delimiters : boolean, default False
Unlike the `str.split()` function, `read_text` preserves the delimiter
at the end of a field in output by default, meaning `a;b;c` will turn into
`['a;','b;','c']` when using `;` as a delimiter.
Setting this option to `True` will strip these trailing delimiters,
leaving only the contents between delimiters in the resulting column:
`['a','b','c']`
compression : string, default None
Which compression type is the input compressed with.
Currently supports only `bgzip`, and requires the path to a file as input.
compression_offsets: list or tuple, default None
The virtual begin and end offset associated with the provided compression.
For `bgzip`, they are composed of a local uncompressed offset inside a
BGZIP block (lower 16 bits) and the start offset of this BGZIP block in the
compressed file (upper 48 bits).
The start offset points to the first byte to be read, the end offset points
one past the last byte to be read.
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details.
Returns
-------
result : Series
"""
doc_read_text = docfmt_partial(docstring=_docstring_text_datasource)
_docstring_get_reader_filepath_or_buffer = """
Return either a filepath string to data, or a memory buffer of data.
If filepath, then the source filepath is expanded to user's environment.
If buffer, then data is returned in-memory as bytes or a ByteIO object.
Parameters
----------
path_or_data : str, file-like object, bytes, ByteIO
Path to data or the data itself.
compression : str
Type of compression algorithm for the content
mode : str
Mode in which file is opened
iotypes : (), default (BytesIO)
Object type to exclude from file-like check
use_python_file_object : boolean, default False
If True, Arrow-backed PythonFile objects will be used in place
of fsspec AbstractBufferedFile objects.
open_file_options : dict, optional
Optional dictionary of keyword arguments to pass to
`_open_remote_files` (used for remote storage only).
allow_raw_text_input : boolean, default False
If True, this indicates the input `path_or_data` could be a raw text
input and will not check for its existence in the filesystem. If False,
the input must be a path and an error will be raised if it does not
exist.
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value
pairs are forwarded to ``urllib.request.Request`` as header options.
For other URLs (e.g. starting with "s3://", and "gcs://") the key-value
pairs are forwarded to ``fsspec.open``. Please see ``fsspec`` and
``urllib`` for more details, and for more examples on storage options
refer `here <https://pandas.pydata.org/docs/user_guide/io.html?
highlight=storage_options#reading-writing-remote-files>`__.
bytes_per_thread : int, default None
Determines the number of bytes to be allocated per thread to read the
files in parallel. When there is a file of large size, we get slightly
better throughput by decomposing it and transferring multiple "blocks"
in parallel (using a Python thread pool). Default allocation is
{bytes_per_thread} bytes.
This parameter is functional only when `use_python_file_object=False`.
Returns
-------
filepath_or_buffer : str, bytes, BytesIO, list
Filepath string or in-memory buffer of data or a
list of Filepath strings or in-memory buffers of data.
compression : str
Type of compression algorithm for the content
""".format(
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT
)
doc_get_reader_filepath_or_buffer = docfmt_partial(
docstring=_docstring_get_reader_filepath_or_buffer
)
def is_url(url):
"""Check if a string is a valid URL to a network location.
Parameters
----------
url : str
String containing a possible URL
Returns
-------
bool : bool
If `url` has a valid protocol return True otherwise False.
"""
# Do not include the empty ('') scheme in the check
schemes = urllib.parse.uses_netloc[1:]
try:
return urllib.parse.urlparse(url).scheme in schemes
except Exception:
return False
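# Example (illustrative sketch, assuming the standard urllib scheme registry):
# >>> is_url("https://example.com/data.csv")
# True
# >>> is_url("/home/user/data.csv")
# False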
def is_file_like(obj):
"""Check if the object is a file-like object, per PANDAS' definition.
An object is considered file-like if it has an iterator AND has either or
both `read()` / `write()` methods as attributes.
Parameters
----------
obj : object
Object to check for file-like properties
Returns
-------
is_file_like : bool
If `obj` is file-like returns True otherwise False
"""
if not (hasattr(obj, "read") or hasattr(obj, "write")):
return False
elif not hasattr(obj, "__iter__"):
return False
else:
return True
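# Example (illustrative sketch): ``io.StringIO`` exposes ``read``/``write`` and
# is iterable, so it qualifies; a plain string does not.
# >>> from io import StringIO
# >>> is_file_like(StringIO("abc"))
# True
# >>> is_file_like("abc")
# False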
def _is_local_filesystem(fs):
return isinstance(fs, fsspec.implementations.local.LocalFileSystem)
def ensure_single_filepath_or_buffer(path_or_data, storage_options=None):
"""Return False if `path_or_data` resolves to multiple filepaths or
buffers.
"""
path_or_data = stringify_pathlike(path_or_data)
if isinstance(path_or_data, str):
path_or_data = os.path.expanduser(path_or_data)
try:
fs, _, paths = get_fs_token_paths(
path_or_data, mode="rb", storage_options=storage_options
)
except ValueError as e:
if str(e).startswith("Protocol not known"):
return True
else:
raise e
if len(paths) > 1:
return False
elif isinstance(path_or_data, (list, tuple)) and len(path_or_data) > 1:
return False
return True
def is_directory(path_or_data, storage_options=None):
"""Returns True if the provided filepath is a directory"""
path_or_data = stringify_pathlike(path_or_data)
if isinstance(path_or_data, str):
path_or_data = os.path.expanduser(path_or_data)
try:
fs = get_fs_token_paths(
path_or_data, mode="rb", storage_options=storage_options
)[0]
except ValueError as e:
if str(e).startswith("Protocol not known"):
return False
else:
raise e
return fs.isdir(path_or_data)
return False
def _get_filesystem_and_paths(path_or_data, storage_options):
# Returns a filesystem object and the filesystem-normalized
# paths. If `path_or_data` does not correspond to a path or
# list of paths (or if the protocol is not supported), the
# return will be `None` for the fs and `[]` for the paths.
fs = None
return_paths = path_or_data
if isinstance(path_or_data, str) or (
isinstance(path_or_data, list)
and isinstance(stringify_pathlike(path_or_data[0]), str)
):
# Ensure we are always working with a list
if isinstance(path_or_data, list):
path_or_data = [
os.path.expanduser(stringify_pathlike(source))
for source in path_or_data
]
else:
path_or_data = [path_or_data]
try:
fs, _, fs_paths = get_fs_token_paths(
path_or_data, mode="rb", storage_options=storage_options
)
return_paths = fs_paths
except ValueError as e:
if str(e).startswith("Protocol not known"):
return None, []
else:
raise e
return fs, return_paths
def _set_context(obj, stack):
# Helper function to place open file on context stack
if stack is None:
return obj
return stack.enter_context(obj)
def _open_remote_files(
paths,
fs,
context_stack=None,
open_file_func=None,
precache_options=None,
**kwargs,
):
"""Return a list of open file-like objects given
a list of remote file paths.
Parameters
----------
paths : list(str)
List of file-path strings.
fs : fsspec.AbstractFileSystem
Fsspec file-system object.
context_stack : contextlib.ExitStack, Optional
Context manager to use for open files.
open_file_func : Callable, Optional
Call-back function to use for opening. If this argument
is specified, all other arguments will be ignored.
precache_options : dict, optional
Dictionary of keyword arguments to use for
precaching. Unless the input contains ``{"method": None}``,
``fsspec.parquet.open_parquet_file`` will be used for remote
storage.
**kwargs :
Key-word arguments to be passed to format-specific
open functions.
"""
# Just use call-back function if one was specified
if open_file_func is not None:
return [
_set_context(open_file_func(path, **kwargs), context_stack)
for path in paths
]
# Check if the "precache" option is supported.
# In the future, fsspec should do this check for us
precache_options = (precache_options or {}).copy()
precache = precache_options.pop("method", None)
if precache not in ("parquet", None):
raise ValueError(f"{precache} not a supported `precache` option.")
# Check that "parts" caching (used for all format-aware file handling)
# is supported by the installed fsspec/s3fs version
if precache == "parquet" and not fsspec_parquet:
warnings.warn(
f"This version of fsspec ({fsspec.__version__}) does "
f"not support parquet-optimized precaching. Please upgrade "
f"to the latest fsspec version for better performance."
)
precache = None
if precache == "parquet":
# Use fsspec.parquet module.
# TODO: Use `cat_ranges` to collect "known"
# parts for all files at once.
row_groups = precache_options.pop("row_groups", None) or (
[None] * len(paths)
)
return [
ArrowPythonFile(
_set_context(
fsspec_parquet.open_parquet_file(
path,
fs=fs,
row_groups=rgs,
**precache_options,
**kwargs,
),
context_stack,
)
)
for path, rgs in zip(paths, row_groups)
]
# Avoid top-level pyarrow.fs import.
# Importing pyarrow.fs initializes a S3 SDK with a finalizer
# that runs atexit. In some circumstances it appears this
# runs a call into a logging system that is already shutdown.
# To avoid this, we only import this subsystem if it is
# really needed.
# See https://github.com/aws/aws-sdk-cpp/issues/2681
from pyarrow.fs import FSSpecHandler, PyFileSystem
# Default open - Use pyarrow filesystem API
pa_fs = PyFileSystem(FSSpecHandler(fs))
return [
_set_context(pa_fs.open_input_file(fpath), context_stack)
for fpath in paths
]
@doc_get_reader_filepath_or_buffer()
def get_reader_filepath_or_buffer(
path_or_data,
compression,
mode="rb",
fs=None,
iotypes=(BytesIO, NativeFile),
use_python_file_object=False,
open_file_options=None,
allow_raw_text_input=False,
storage_options=None,
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT,
):
"""{docstring}"""
path_or_data = stringify_pathlike(path_or_data)
if isinstance(path_or_data, str):
# Get a filesystem object if one isn't already available
paths = [path_or_data]
if fs is None:
fs, paths = _get_filesystem_and_paths(
path_or_data, storage_options
)
if fs is None:
return path_or_data, compression
if _is_local_filesystem(fs):
# `read_json` accepts a raw JSON string, so `path_or_data`
# need not be a filepath-like string
if len(paths):
if fs.exists(paths[0]):
path_or_data = paths if len(paths) > 1 else paths[0]
elif not allow_raw_text_input:
raise FileNotFoundError(
f"{path_or_data} could not be resolved to any files"
)
else:
if len(paths) == 0:
raise FileNotFoundError(
f"{path_or_data} could not be resolved to any files"
)
if use_python_file_object:
path_or_data = _open_remote_files(
paths,
fs,
**(open_file_options or {}),
)
else:
path_or_data = [
BytesIO(
_fsspec_data_transfer(
fpath,
fs=fs,
mode=mode,
bytes_per_thread=bytes_per_thread,
)
)
for fpath in paths
]
if len(path_or_data) == 1:
path_or_data = path_or_data[0]
elif not isinstance(path_or_data, iotypes) and is_file_like(path_or_data):
if isinstance(path_or_data, TextIOWrapper):
path_or_data = path_or_data.buffer
if use_python_file_object:
path_or_data = ArrowPythonFile(path_or_data)
else:
path_or_data = BytesIO(
_fsspec_data_transfer(
path_or_data, mode=mode, bytes_per_thread=bytes_per_thread
)
)
return path_or_data, compression
def get_writer_filepath_or_buffer(path_or_data, mode, storage_options=None):
"""
Return either a filepath string to data,
or an open file object to the output filesystem
Parameters
----------
path_or_data : str, file-like object, bytes, ByteIO
Path to data or the data itself.
mode : str
Mode in which file is opened
storage_options : dict, optional, default None
Extra options that make sense for a particular storage connection,
e.g. host, port, username, password, etc. For HTTP(S) URLs the
key-value pairs are forwarded to ``urllib.request.Request`` as
header options. For other URLs (e.g. starting with "s3://", and
"gcs://") the key-value pairs are forwarded to ``fsspec.open``.
Please see ``fsspec`` and ``urllib`` for more details.
Returns
-------
filepath_or_buffer : str or fsspec.core.OpenFile
Filepath string for local storage, or an open file object for remote storage
"""
if storage_options is None:
storage_options = {}
if isinstance(path_or_data, str):
path_or_data = os.path.expanduser(path_or_data)
fs = get_fs_token_paths(
path_or_data, mode=mode or "w", storage_options=storage_options
)[0]
if not _is_local_filesystem(fs):
filepath_or_buffer = fsspec.open(
path_or_data, mode=mode or "w", **(storage_options)
)
return filepath_or_buffer
return path_or_data
def get_IOBase_writer(file_obj):
"""
Parameters
----------
file_obj : file-like object
Open file object for writing to any filesystem
Returns
-------
iobase_file_obj : file-like object
Open file object inheriting from io.IOBase
"""
if not isinstance(file_obj, IOBase):
if "b" in file_obj.mode:
iobase_file_obj = BufferedWriter(file_obj)
else:
iobase_file_obj = TextIOWrapper(file_obj)
return iobase_file_obj
return file_obj
def is_fsspec_open_file(file_obj):
if isinstance(file_obj, fsspec.core.OpenFile):
return True
return False
def stringify_pathlike(pathlike):
"""
Convert any object that implements the fspath protocol
to a string. Leaves other objects unchanged
Parameters
----------
pathlike
Pathlike object that implements the fspath protocol
Returns
-------
maybe_pathlike_str
String version of the object if possible
"""
maybe_pathlike_str = (
pathlike.__fspath__() if hasattr(pathlike, "__fspath__") else pathlike
)
return maybe_pathlike_str
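# Example (illustrative sketch): a ``pathlib.Path`` implements ``__fspath__``
# and is converted; a plain string is returned unchanged.
# >>> import pathlib
# >>> stringify_pathlike(pathlib.Path("/tmp/data.csv"))
# '/tmp/data.csv'
# >>> stringify_pathlike("already-a-string")
# 'already-a-string'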
def buffer_write_lines(buf, lines):
"""
Appends lines to a buffer.
Parameters
----------
buf
The buffer to write to
lines
The lines to append.
"""
if any(isinstance(x, str) for x in lines):
lines = [str(x) for x in lines]
buf.write("\n".join(lines))
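# Example (illustrative sketch): items are joined with newlines; non-string
# items are coerced when at least one string is present.
# >>> from io import StringIO
# >>> buf = StringIO()
# >>> buffer_write_lines(buf, ["a", 1, "c"])
# >>> buf.getvalue()
# 'a\n1\nc'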
def _apply_filter_bool_eq(val, col_stats):
if "true_count" in col_stats and "false_count" in col_stats:
if val is True:
if (col_stats["true_count"] == 0) or (
col_stats["false_count"] == col_stats["number_of_values"]
):
return False
elif val is False:
if (col_stats["false_count"] == 0) or (
col_stats["true_count"] == col_stats["number_of_values"]
):
return False
return True
def _apply_filter_not_eq(val, col_stats):
return ("minimum" in col_stats and val < col_stats["minimum"]) or (
"maximum" in col_stats and val > col_stats["maximum"]
)
def _apply_predicate(op, val, col_stats):
# Sanitize operator
if op not in {"=", "==", "!=", "<", "<=", ">", ">=", "in", "not in"}:
raise ValueError(f"'{op}' is not a valid operator in predicates.")
col_min = col_stats.get("minimum", None)
col_max = col_stats.get("maximum", None)
col_sum = col_stats.get("sum", None)
# Apply operator
if op == "=" or op == "==":
if _apply_filter_not_eq(val, col_stats):
return False
# TODO: Replace pd.isnull with
# cudf.isnull once it is implemented
if pd.isnull(val) and not col_stats["has_null"]:
return False
if not _apply_filter_bool_eq(val, col_stats):
return False
elif op == "!=":
if (
col_min is not None
and col_max is not None
and val == col_min
and val == col_max
):
return False
if _apply_filter_bool_eq(val, col_stats):
return False
elif col_min is not None and (
(op == "<" and val <= col_min) or (op == "<=" and val < col_min)
):
return False
elif col_max is not None and (
(op == ">" and val >= col_max) or (op == ">=" and val > col_max)
):
return False
elif (
col_sum is not None
and op == ">"
and (
(col_min is not None and col_min >= 0 and col_sum <= val)
or (col_max is not None and col_max <= 0 and col_sum >= val)
)
):
return False
elif (
col_sum is not None
and op == ">="
and (
(col_min is not None and col_min >= 0 and col_sum < val)
or (col_max is not None and col_max <= 0 and col_sum > val)
)
):
return False
elif op == "in":
if (col_max is not None and col_max < min(val)) or (
col_min is not None and col_min > max(val)
):
return False
if all(_apply_filter_not_eq(elem, col_stats) for elem in val):
return False
elif op == "not in" and col_min is not None and col_max is not None:
if any(elem == col_min == col_max for elem in val):
return False
col_range = None
if isinstance(col_min, int):
col_range = range(col_min, col_max)
elif isinstance(col_min, datetime.datetime):
col_range = pd.date_range(col_min, col_max)
if col_range and all(elem in val for elem in col_range):
return False
return True
def _apply_filters(filters, stats):
for conjunction in filters:
if all(
_apply_predicate(op, val, stats[col])
for col, op, val in conjunction
):
return True
return False
def _prepare_filters(filters):
# Coerce filters into list of lists of tuples
if isinstance(filters[0][0], str):
filters = [filters]
return filters
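# Example (illustrative sketch): DNF-style filters evaluated against per-column
# statistics, as used when pruning ORC stripes or Parquet row groups. Column
# ``x`` spans [0, 5], so ``x > 10`` can never match and the chunk is skipped.
# >>> filters = _prepare_filters([("x", ">", 10)])
# >>> _apply_filters(filters, {"x": {"minimum": 0, "maximum": 5}})
# False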
def _ensure_filesystem(passed_filesystem, path, storage_options):
if passed_filesystem is None:
return get_fs_token_paths(
path[0] if isinstance(path, list) else path,
storage_options={} if storage_options is None else storage_options,
)[0]
return passed_filesystem
#
# Fsspec Data-transfer Optimization Code
#
def _fsspec_data_transfer(
path_or_fob,
fs=None,
file_size=None,
bytes_per_thread=_BYTES_PER_THREAD_DEFAULT,
max_gap=64_000,
mode="rb",
):
if bytes_per_thread is None:
bytes_per_thread = _BYTES_PER_THREAD_DEFAULT
# Require `fs` if `path_or_fob` is not file-like
file_like = is_file_like(path_or_fob)
if fs is None and not file_like:
raise ValueError(
"fs must be defined if `path_or_fob` is not file-like"
)
# Calculate total file size
if file_like:
try:
file_size = path_or_fob.size
except AttributeError:
# If we cannot find the size of path_or_fob
# just read it.
return path_or_fob.read()
file_size = file_size or fs.size(path_or_fob)
# Check if a direct read makes the most sense
if bytes_per_thread >= file_size:
if file_like:
return path_or_fob.read()
else:
return fs.open(path_or_fob, mode=mode, cache_type="all").read()
# Threaded read into "local" buffer
buf = np.zeros(file_size, dtype="b")
byte_ranges = [
(b, min(bytes_per_thread, file_size - b))
for b in range(0, file_size, bytes_per_thread)
]
_read_byte_ranges(
path_or_fob,
byte_ranges,
buf,
fs=fs,
)
return buf.tobytes()
def _merge_ranges(byte_ranges, max_block=256_000_000, max_gap=64_000):
# Simple utility to merge small/adjacent byte ranges
new_ranges = []
if not byte_ranges:
# Early return
return new_ranges
offset, size = byte_ranges[0]
for (new_offset, new_size) in byte_ranges[1:]:
gap = new_offset - (offset + size)
if gap > max_gap or (size + new_size + gap) > max_block:
# Gap is too large or total read is too large
new_ranges.append((offset, size))
offset = new_offset
size = new_size
continue
size += new_size + gap
new_ranges.append((offset, size))
return new_ranges
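# Example (illustrative sketch): two reads separated by a 50-byte gap (below
# ``max_gap``) are coalesced into one request; a 199,900-byte gap is not.
# >>> _merge_ranges([(0, 100), (150, 100)])
# [(0, 250)]
# >>> _merge_ranges([(0, 100), (200_000, 100)])
# [(0, 100), (200000, 100)]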
def _assign_block(fs, path_or_fob, local_buffer, offset, nbytes):
if fs is None:
# We have an open fsspec file object
path_or_fob.seek(offset)
local_buffer[offset : offset + nbytes] = np.frombuffer(
path_or_fob.read(nbytes),
dtype="b",
)
else:
# We have an fsspec filesystem and a path
with fs.open(path_or_fob, mode="rb", cache_type="none") as fob:
fob.seek(offset)
local_buffer[offset : offset + nbytes] = np.frombuffer(
fob.read(nbytes),
dtype="b",
)
def _read_byte_ranges(
path_or_fob,
ranges,
local_buffer,
fs=None,
):
# Simple utility to copy remote byte ranges
# into a local buffer for IO in libcudf
workers = []
for (offset, nbytes) in ranges:
if len(ranges) > 1:
workers.append(
Thread(
target=_assign_block,
args=(fs, path_or_fob, local_buffer, offset, nbytes),
)
)
workers[-1].start()
else:
_assign_block(fs, path_or_fob, local_buffer, offset, nbytes)
for worker in workers:
worker.join()
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/string.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
def format_bytes(nbytes: int) -> str:
"""Format `nbytes` to a human readable string"""
n = float(nbytes)
for unit in ["B", "KiB", "MiB", "GiB", "TiB"]:
if abs(n) < 1024:
if n.is_integer():
return f"{int(n)}{unit}"
return f"{n:.2f}{unit}"
n /= 1024
return f"{n:.2f} PiB"
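# Example (illustrative sketch):
# >>> format_bytes(500)
# '500B'
# >>> format_bytes(1536)
# '1.50KiB'
# >>> format_bytes(5 * 1024**3)
# '5GiB'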
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/applyutils.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
import functools
from typing import Any, Dict
import cupy as cp
from numba import cuda
from numba.core.utils import pysignature
import cudf
from cudf import _lib as libcudf
from cudf.core.buffer import acquire_spill_lock
from cudf.core.column import column
from cudf.utils import utils
from cudf.utils._numba import _CUDFNumbaConfig
from cudf.utils.docutils import docfmt_partial
_doc_applyparams = """
df : DataFrame
The source dataframe.
func : function
The transformation function that will be executed on the CUDA GPU.
incols: list or dict
A list of names of input columns that match the function arguments.
Or, a dictionary mapping input column names to their corresponding
function arguments such as {'col1': 'arg1'}.
outcols: dict
A dictionary of output column names and their dtype.
kwargs: dict
name-value of extra arguments. These values are passed
directly into the function.
pessimistic_nulls : bool
Whether or not apply_rows output should be null when any corresponding
input is null. If False, all outputs will be non-null, but will be the
result of applying func against the underlying column data, which
may be garbage.
"""
_doc_applychunkparams = """
chunks : int or Series-like
If it is an ``int``, it is the chunksize.
If it is an array, it contains integer offset for the start of each chunk.
The span of the i-th chunk is ``data[chunks[i] : chunks[i + 1]]``
when ``i + 1 < chunks.size``, or ``data[chunks[i]:]`` when
``i == len(chunks) - 1``.
tpb : int; optional
The threads-per-block for the underlying kernel.
If not specified (Default), uses Numba ``.forall(...)`` built-in to query
the CUDA Driver API to determine optimal kernel launch configuration.
Specify 1 to emulate serial execution for each chunk. It is a good
starting point but inefficient.
Its maximum possible value is limited by the available CUDA GPU resources.
blkct : int; optional
The number of blocks for the underlying kernel.
If not specified (Default) and ``tpb`` is not specified (Default), uses
Numba ``.forall(...)`` built-in to query the CUDA Driver API to determine
optimal kernel launch configuration.
If not specified (Default) and ``tpb`` is specified, uses ``chunks`` as the
number of blocks.
"""
doc_apply = docfmt_partial(params=_doc_applyparams)
doc_applychunks = docfmt_partial(
params=_doc_applyparams, params_chunks=_doc_applychunkparams
)
@doc_apply()
def apply_rows(
df, func, incols, outcols, kwargs, pessimistic_nulls, cache_key
):
"""Row-wise transformation
Parameters
----------
{params}
"""
applyrows = ApplyRowsCompiler(
func, incols, outcols, kwargs, pessimistic_nulls, cache_key=cache_key
)
return applyrows.run(df)
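# Example (illustrative sketch): ``apply_rows`` is normally reached through
# ``cudf.DataFrame.apply_rows``; the kernel and column names below are
# illustrative only, and running it requires a CUDA GPU.
# >>> import numpy as np
# >>> def double_x(x, out):
# ...     for i, v in enumerate(x):
# ...         out[i] = 2 * v
# >>> df = cudf.DataFrame({"x": [1, 2, 3]})
# >>> df.apply_rows(double_x, incols=["x"],
# ...               outcols={"out": np.float64}, kwargs={})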
@doc_applychunks()
def apply_chunks(
df,
func,
incols,
outcols,
kwargs,
pessimistic_nulls,
chunks,
blkct=None,
tpb=None,
):
"""Chunk-wise transformation
Parameters
----------
{params}
{params_chunks}
"""
applychunks = ApplyChunksCompiler(
func, incols, outcols, kwargs, pessimistic_nulls, cache_key=None
)
return applychunks.run(df, chunks=chunks, tpb=tpb)
@acquire_spill_lock()
def make_aggregate_nullmask(df, columns=None, op="__and__"):
out_mask = None
for k in columns or df._data:
col = cudf.core.dataframe.extract_col(df, k)
if not col.nullable:
continue
nullmask = column.as_column(df[k]._column.nullmask)
if out_mask is None:
out_mask = column.as_column(
nullmask.copy(), dtype=utils.mask_dtype
)
else:
out_mask = libcudf.binaryop.binaryop(
nullmask, out_mask, op, out_mask.dtype
)
return out_mask
class ApplyKernelCompilerBase:
def __init__(
self, func, incols, outcols, kwargs, pessimistic_nulls, cache_key
):
# Get signature of user function
sig = pysignature(func)
self.sig = sig
self.incols = incols
self.outcols = outcols
self.kwargs = kwargs
self.pessimistic_nulls = pessimistic_nulls
self.cache_key = cache_key
self.kernel = self.compile(func, sig.parameters.keys(), kwargs.keys())
@acquire_spill_lock()
def run(self, df, **launch_params):
# Get input columns
if isinstance(self.incols, dict):
inputs = {
v: df[k]._column.data_array_view(mode="read")
for (k, v) in self.incols.items()
}
else:
inputs = {
k: df[k]._column.data_array_view(mode="read")
for k in self.incols
}
# Allocate output columns
outputs = {}
for k, dt in self.outcols.items():
outputs[k] = column.column_empty(
len(df), dt, False
).data_array_view(mode="write")
# Bind argument
args = {}
for dct in [inputs, outputs, self.kwargs]:
args.update(dct)
bound = self.sig.bind(**args)
# Launch kernel
self.launch_kernel(df, bound.args, **launch_params)
# Prepare pessimistic nullmask
if self.pessimistic_nulls:
out_mask = make_aggregate_nullmask(df, columns=self.incols)
else:
out_mask = None
# Prepare output frame
outdf = df.copy()
for k in sorted(self.outcols):
outdf[k] = cudf.Series(
outputs[k], index=outdf.index, nan_as_null=False
)
if out_mask is not None:
outdf._data[k] = outdf[k]._column.set_mask(
out_mask.data_array_view(mode="write")
)
return outdf
class ApplyRowsCompiler(ApplyKernelCompilerBase):
def compile(self, func, argnames, extra_argnames):
# Compile kernel
kernel = _load_cache_or_make_row_wise_kernel(
self.cache_key, func, argnames, extra_argnames
)
return kernel
def launch_kernel(self, df, args):
with _CUDFNumbaConfig():
self.kernel.forall(len(df))(*args)
class ApplyChunksCompiler(ApplyKernelCompilerBase):
def compile(self, func, argnames, extra_argnames):
# Compile kernel
kernel = _load_cache_or_make_chunk_wise_kernel(
func, argnames, extra_argnames
)
return kernel
def launch_kernel(self, df, args, chunks, blkct=None, tpb=None):
chunks = self.normalize_chunks(len(df), chunks)
if blkct is None and tpb is None:
with _CUDFNumbaConfig():
self.kernel.forall(len(df))(len(df), chunks, *args)
else:
assert tpb is not None
if blkct is None:
blkct = chunks.size
with _CUDFNumbaConfig():
self.kernel[blkct, tpb](len(df), chunks, *args)
def normalize_chunks(self, size, chunks):
if isinstance(chunks, int):
# *chunks* is the chunksize
return cuda.as_cuda_array(
cp.arange(start=0, stop=size, step=chunks)
).view("int64")
else:
# *chunks* is an array of chunk leading offsets
return cuda.as_cuda_array(cp.asarray(chunks)).view("int64")
def _make_row_wise_kernel(func, argnames, extras):
"""
Make a kernel that does a stride loop over the input rows.
Each thread is responsible for a row in each iteration.
Several iterations may be needed to handle a large number of rows.
The resulting kernel can be used with any 1D grid size and 1D block size.
"""
# Build kernel source
argnames = list(map(_mangle_user, argnames))
extras = list(map(_mangle_user, extras))
source = """
def row_wise_kernel({args}):
{body}
"""
args = ", ".join(argnames)
body = []
body.append("tid = cuda.grid(1)")
body.append("ntid = cuda.gridsize(1)")
for a in argnames:
if a not in extras:
start = "tid"
stop = ""
stride = "ntid"
srcidx = "{a} = {a}[{start}:{stop}:{stride}]"
body.append(
srcidx.format(a=a, start=start, stop=stop, stride=stride)
)
body.append(f"inner({args})")
indented = ["{}{}".format(" " * 4, ln) for ln in body]
# Finalize source
concrete = source.format(args=args, body="\n".join(indented))
# Get bytecode
glbs = {"inner": cuda.jit(device=True)(func), "cuda": cuda}
exec(concrete, glbs)
# Compile as CUDA kernel
kernel = cuda.jit(glbs["row_wise_kernel"])
return kernel
def _make_chunk_wise_kernel(func, argnames, extras):
"""
Make a kernel that does a stride loop over the input chunks.
Each block is responsible for a chunk in each iteration.
Several iterations may be needed to handle a large number of chunks.
The user function *func* will have all threads in the block for its
computation.
The resulting kernel can be used with any 1D grid size and 1D block size.
"""
# Build kernel source
argnames = list(map(_mangle_user, argnames))
extras = list(map(_mangle_user, extras))
source = """
def chunk_wise_kernel(nrows, chunks, {args}):
{body}
"""
args = ", ".join(argnames)
body = []
body.append("blkid = cuda.blockIdx.x")
body.append("nblkid = cuda.gridDim.x")
body.append("tid = cuda.threadIdx.x")
body.append("ntid = cuda.blockDim.x")
# Stride loop over the block
body.append("for curblk in range(blkid, chunks.size, nblkid):")
indent = " " * 4
body.append(indent + "start = chunks[curblk]")
body.append(
indent
+ "stop = chunks[curblk + 1]"
+ " if curblk + 1 < chunks.size else nrows"
)
slicedargs = {}
for a in argnames:
if a not in extras:
slicedargs[a] = f"{a}[start:stop]"
else:
slicedargs[a] = str(a)
body.append(
"{}inner({})".format(
indent, ", ".join(slicedargs[k] for k in argnames)
)
)
indented = ["{}{}".format(" " * 4, ln) for ln in body]
# Finalize source
concrete = source.format(args=args, body="\n".join(indented))
# Get bytecode
glbs = {"inner": cuda.jit(device=True)(func), "cuda": cuda}
exec(concrete, glbs)
# Compile as CUDA kernel
kernel = cuda.jit(glbs["chunk_wise_kernel"])
return kernel
_cache: Dict[Any, Any] = dict()
@functools.wraps(_make_row_wise_kernel)
def _load_cache_or_make_row_wise_kernel(cache_key, func, *args, **kwargs):
"""Caching version of ``_make_row_wise_kernel``."""
if cache_key is None:
cache_key = func
try:
out = _cache[cache_key]
# print("apply cache loaded", cache_key)
return out
except KeyError:
# print("apply cache NOT loaded", cache_key)
kernel = _make_row_wise_kernel(func, *args, **kwargs)
_cache[cache_key] = kernel
return kernel
@functools.wraps(_make_chunk_wise_kernel)
def _load_cache_or_make_chunk_wise_kernel(func, *args, **kwargs):
"""Caching version of ``_make_row_wise_kernel``."""
try:
return _cache[func]
except KeyError:
kernel = _make_chunk_wise_kernel(func, *args, **kwargs)
_cache[func] = kernel
return kernel
def _mangle_user(name):
"""Mangle user variable name"""
return f"__user_{name}"
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/docutils.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
"""
Helper functions for parameterized docstring
"""
import functools
import re
import string
_regex_whitespaces = re.compile(r"^\s+$")
def _only_spaces(s):
return bool(_regex_whitespaces.match(s))
_wrapopts = {"width": 78, "replace_whitespace": False}
def docfmt(**kwargs):
"""Format docstring.
Similar to saving the result of ``__doc__.format(**kwargs)`` as the
function's docstring.
"""
kwargs = {k: v.lstrip() for k, v in kwargs.items()}
def outer(fn):
buf = []
if fn.__doc__ is None:
return fn
formatsiter = string.Formatter().parse(fn.__doc__)
for literal, field, fmtspec, conv in formatsiter:
assert conv is None
assert not fmtspec
buf.append(literal)
if field is not None:
# get indentation
lines = literal.rsplit("\n", 1)
if _only_spaces(lines[-1]):
indent = " " * len(lines[-1])
valuelines = kwargs[field].splitlines(True)
# first line
buf.append(valuelines[0])
# subsequent lines are indented
buf.extend([indent + ln for ln in valuelines[1:]])
else:
buf.append(kwargs[field])
fn.__doc__ = "".join(buf)
return fn
return outer
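# Example (illustrative sketch): the ``{params}`` placeholder in the decorated
# function's docstring is replaced by the keyword value, re-indented to match
# the surrounding lines (the parameter text here is made up for illustration).
# >>> @docfmt(params="x : int\n    The input value.")
# ... def f(x):
# ...     '''Do something.
# ...     Parameters
# ...     ----------
# ...     {params}
# ...     '''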
def docfmt_partial(**kwargs):
return functools.partial(docfmt, **kwargs)
def copy_docstring(other):
"""
Decorator that sets ``__doc__`` to ``other.__doc___``.
"""
def wrapper(func):
func.__doc__ = other.__doc__
return func
return wrapper
def doc_apply(doc):
"""Set `__doc__` attribute of `func` to `doc`."""
def wrapper(func):
func.__doc__ = doc
return func
return wrapper
doc_describe = docfmt_partial(
docstring="""
Generate descriptive statistics.
Descriptive statistics include those that summarize the
central tendency, dispersion and shape of a dataset's
distribution, excluding ``NaN`` values.
Analyzes both numeric and object series, as well as
``DataFrame`` column sets of mixed data types. The
output will vary depending on what is provided.
Refer to the notes below for more detail.
Parameters
----------
percentiles : list-like of numbers, optional
The percentiles to include in the output.
All should fall between 0 and 1. The default is
``[.25, .5, .75]``, which returns the 25th, 50th,
and 75th percentiles.
include : 'all', list-like of dtypes or None(default), optional
A list of data types to include in the result.
Ignored for ``Series``. Here are the options:
- 'all' : All columns of the input will be included in the output.
- A list-like of dtypes : Limits the results to the
provided data types.
To limit the result to numeric types submit
``numpy.number``. To limit it instead to object columns submit
the ``numpy.object`` data type. Strings
can also be used in the style of
``select_dtypes`` (e.g. ``df.describe(include=['O'])``). To
select pandas categorical columns, use ``'category'``
- None (default) : The result will include all numeric columns.
exclude : list-like of dtypes or None (default), optional,
A list of data types to omit from the result. Ignored
for ``Series``. Here are the options:
- A list-like of dtypes : Excludes the provided data types
from the result. To exclude numeric types submit
``numpy.number``. To exclude object columns submit the data
type ``numpy.object``. Strings can also be used in the style of
``select_dtypes`` (e.g. ``df.describe(include=['O'])``). To
exclude pandas categorical columns, use ``'category'``
- None (default) : The result will exclude nothing.
datetime_is_numeric : bool, default False
For DataFrame input, this also controls whether datetime columns
are included by default.
.. deprecated:: 23.04
`datetime_is_numeric` is deprecated and will be removed in
a future version of cudf.
Returns
-------
output_frame : Series or DataFrame
Summary statistics of the Series or Dataframe provided.
Notes
-----
For numeric data, the result's index will include ``count``,
``mean``, ``std``, ``min``, ``max`` as well as lower, ``50`` and
upper percentiles. By default the lower percentile is ``25`` and the
upper percentile is ``75``. The ``50`` percentile is the
same as the median.
For strings dtype or datetime dtype, the result's index
will include ``count``, ``unique``, ``top``, and ``freq``. The ``top``
is the most common value. The ``freq`` is the most common value's
frequency. Timestamps also include the ``first`` and ``last`` items.
If multiple object values have the highest count, then the
``count`` and ``top`` results will be arbitrarily chosen from
among those with the highest count.
For mixed data types provided via a ``DataFrame``, the default is to
return only an analysis of numeric columns. If the dataframe consists
only of object and categorical data without any numeric columns, the
default is to return an analysis of both the object and categorical
columns. If ``include='all'`` is provided as an option, the result
will include a union of attributes of each type.
The ``include`` and ``exclude`` parameters can be used to limit
which columns in a ``DataFrame`` are analyzed for the output.
The parameters are ignored when analyzing a ``Series``.
Examples
--------
Describing a ``Series`` containing numeric values.
>>> import cudf
>>> s = cudf.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> s
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
dtype: int64
>>> s.describe()
count 10.00000
mean 5.50000
std 3.02765
min 1.00000
25% 3.25000
50% 5.50000
75% 7.75000
max 10.00000
dtype: float64
Describing a categorical ``Series``.
>>> s = cudf.Series(['a', 'b', 'a', 'b', 'c', 'a'], dtype='category')
>>> s
0 a
1 b
2 a
3 b
4 c
5 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
>>> s.describe()
count 6
unique 3
top a
freq 3
dtype: object
Describing a timestamp ``Series``.
>>> import numpy as np
>>> s = cudf.Series([
... np.datetime64("2000-01-01"),
... np.datetime64("2010-01-01"),
... np.datetime64("2010-01-01")
... ])
>>> s
0 2000-01-01
1 2010-01-01
2 2010-01-01
dtype: datetime64[s]
>>> s.describe()
count 3
mean 2006-09-01 08:00:00
min 2000-01-01 00:00:00
25% 2004-12-31 12:00:00
50% 2010-01-01 00:00:00
75% 2010-01-01 00:00:00
max 2010-01-01 00:00:00
dtype: object
Describing a ``DataFrame``. By default only numeric fields are
returned.
>>> df = cudf.DataFrame({"categorical": cudf.Series(['d', 'e', 'f'],
... dtype='category'),
... "numeric": [1, 2, 3],
... "object": ['a', 'b', 'c']
... })
>>> df
categorical numeric object
0 d 1 a
1 e 2 b
2 f 3 c
>>> df.describe()
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Describing all columns of a ``DataFrame`` regardless of data type.
>>> df.describe(include='all')
categorical numeric object
count 3 3.0 3
unique 3 <NA> 3
top d <NA> a
freq 1 <NA> 1
mean <NA> 2.0 <NA>
std <NA> 1.0 <NA>
min <NA> 1.0 <NA>
25% <NA> 1.5 <NA>
50% <NA> 2.0 <NA>
75% <NA> 2.5 <NA>
max <NA> 3.0 <NA>
Describing a column from a ``DataFrame`` by accessing it as an
attribute.
>>> df.numeric.describe()
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Name: numeric, dtype: float64
Including only numeric columns in a ``DataFrame`` description.
>>> df.describe(include=[np.number])
numeric
count 3.0
mean 2.0
std 1.0
min 1.0
25% 1.5
50% 2.0
75% 2.5
max 3.0
Including only string columns in a ``DataFrame`` description.
>>> df.describe(include=[object])
object
count 3
unique 3
top a
freq 1
Including only categorical columns from a ``DataFrame`` description.
>>> df.describe(include=['category'])
categorical
count 3
unique 3
top d
freq 1
Excluding numeric columns from a ``DataFrame`` description.
>>> df.describe(exclude=[np.number])
categorical object
count 3 3
unique 3 3
top d a
freq 1 1
Excluding object columns from a ``DataFrame`` description.
>>> df.describe(exclude=[object])
categorical numeric
count 3 3.0
unique 3 <NA>
top d <NA>
freq 1 <NA>
mean <NA> 2.0
std <NA> 1.0
min <NA> 1.0
25% <NA> 1.5
50% <NA> 2.0
75% <NA> 2.5
max <NA> 3.0
"""
)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/gpu_utils.py
|
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
def validate_setup():
import os
# TODO: Remove the following check once we arrive at a solution for #4827
# This is a temporary workaround to unblock internal testing
# related issue: https://github.com/rapidsai/cudf/issues/4827
if (
"RAPIDS_NO_INITIALIZE" in os.environ
or "CUDF_NO_INITIALIZE" in os.environ
):
return
import warnings
from cuda.cudart import cudaDeviceAttr, cudaError_t
from rmm._cuda.gpu import (
CUDARuntimeError,
deviceGetName,
driverGetVersion,
getDeviceAttribute,
getDeviceCount,
runtimeGetVersion,
)
from cudf.errors import UnsupportedCUDAError
notify_caller_errors = {
cudaError_t.cudaErrorInitializationError,
cudaError_t.cudaErrorInsufficientDriver,
cudaError_t.cudaErrorInvalidDeviceFunction,
cudaError_t.cudaErrorInvalidDevice,
cudaError_t.cudaErrorStartupFailure,
cudaError_t.cudaErrorInvalidKernelImage,
cudaError_t.cudaErrorAlreadyAcquired,
cudaError_t.cudaErrorOperatingSystem,
cudaError_t.cudaErrorNotPermitted,
cudaError_t.cudaErrorNotSupported,
cudaError_t.cudaErrorSystemNotReady,
cudaError_t.cudaErrorSystemDriverMismatch,
cudaError_t.cudaErrorCompatNotSupportedOnDevice,
cudaError_t.cudaErrorDeviceUninitialized,
cudaError_t.cudaErrorTimeout,
cudaError_t.cudaErrorUnknown,
cudaError_t.cudaErrorApiFailureBase,
}
try:
gpus_count = getDeviceCount()
except CUDARuntimeError as e:
if e.status in notify_caller_errors:
raise e
# If there is no GPU detected, set `gpus_count` to -1
gpus_count = -1
except RuntimeError as e:
# getDeviceCount() can raise a RuntimeError
# when ``libcuda.so`` is missing.
# We don't want this to propagate up to the user.
warnings.warn(str(e))
return
if gpus_count > 0:
# CuPy raises a runtime exception when querying the GPU count,
# hence the GPU count is obtained via the in-house API above
major_version = getDeviceAttribute(
cudaDeviceAttr.cudaDevAttrComputeCapabilityMajor, 0
)
if major_version < 6:
# A GPU with NVIDIA Pascalβ’ architecture or newer is required.
# Reference: https://developer.nvidia.com/cuda-gpus
# Hardware Generation Compute Capability
# Ampere 8.x
# Turing 7.5
# Volta 7.0, 7.2
# Pascal 6.x
# Maxwell 5.x
# Kepler 3.x
# Fermi 2.x
device_name = deviceGetName(0)
minor_version = getDeviceAttribute(
cudaDeviceAttr.cudaDevAttrComputeCapabilityMinor, 0
)
warnings.warn(
"A GPU with NVIDIA Pascalβ’ (Compute Capability 6.0) "
"or newer architecture is required.\n"
f"Detected GPU 0: {device_name}\n"
f"Detected Compute Capability: {major_version}.{minor_version}"
)
cuda_runtime_version = runtimeGetVersion()
if cuda_runtime_version < 11000:
# Require CUDA Runtime version 11.0 or greater.
major_version = cuda_runtime_version // 1000
minor_version = (cuda_runtime_version % 1000) // 10
raise UnsupportedCUDAError(
"Detected CUDA Runtime version is "
f"{major_version}.{minor_version}. "
"Please update your CUDA Runtime to 11.0 or above."
)
cuda_driver_supported_rt_version = driverGetVersion()
# Externally the driver version is reported like `418.39` and the CUDA
# runtime version like `10.1`, but at the CUDA API level both follow a
# uniform integer convention (1000 * major + 10 * minor). Note that the
# driver version API does not report the driver version itself; it
# reports only the latest CUDA version supported by the driver.
# For reference :
# https://docs.nvidia.com/deploy/cuda-compatibility/index.html
if cuda_driver_supported_rt_version == 0:
raise UnsupportedCUDAError(
"We couldn't detect the GPU driver properly. Please follow "
"the installation guide to ensure your driver is properly "
"installed: "
"https://docs.nvidia.com/cuda/cuda-installation-guide-linux/"
)
elif cuda_driver_supported_rt_version >= cuda_runtime_version:
# CUDA Driver Version Check:
# Driver Runtime version is >= Runtime version
pass
elif (
cuda_driver_supported_rt_version >= 11000
and cuda_runtime_version >= 11000
):
# With cuda enhanced compatibility any code compiled
# with 11.x version of cuda can now run on any
# driver >= 450.80.02. 11000 is the minimum cuda
# version 450.80.02 supports.
pass
else:
raise UnsupportedCUDAError(
"Please update your NVIDIA GPU Driver to support CUDA "
"Runtime.\n"
f"Detected CUDA Runtime version : {cuda_runtime_version}\n"
"Latest version of CUDA supported by current "
f"NVIDIA GPU Driver : {cuda_driver_supported_rt_version}"
)
else:
warnings.warn("No NVIDIA GPU detected")
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/_numba.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
import glob
import os
import sys
import warnings
from numba import config as numba_config
try:
from pynvjitlink.patch import (
patch_numba_linker as patch_numba_linker_pynvjitlink,
)
except ImportError:
def patch_numba_linker_pynvjitlink():
warnings.warn(
"CUDA Toolkit is newer than CUDA driver. "
"Numba features will not work in this configuration. "
)
CC_60_PTX_FILE = os.path.join(
os.path.dirname(__file__), "../core/udf/shim_60.ptx"
)
def _get_best_ptx_file(archs, max_compute_capability):
"""
Determine which of the available PTX files is the most recent,
up to and including the device compute capability.
"""
filtered_archs = [x for x in archs if x[0] <= max_compute_capability]
if filtered_archs:
return max(filtered_archs, key=lambda x: x[0])
else:
return None
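# Example (illustrative sketch): with PTX built for compute capabilities 60,
# 70 and 80, a device of compute capability 75 selects the cc-70 file.
# >>> _get_best_ptx_file(
# ...     [(60, "shim_60.ptx"), (70, "shim_70.ptx"), (80, "shim_80.ptx")], 75
# ... )
# (70, 'shim_70.ptx')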
def _get_ptx_file(path, prefix):
if "RAPIDS_NO_INITIALIZE" in os.environ:
# cc=60 ptx is always built
cc = int(os.environ.get("STRINGS_UDF_CC", "60"))
else:
from numba import cuda
dev = cuda.get_current_device()
# Load the highest compute capability file available that is less than
# the current device's.
cc = int("".join(str(x) for x in dev.compute_capability))
files = glob.glob(os.path.join(path, f"{prefix}*.ptx"))
if len(files) == 0:
raise RuntimeError(f"Missing PTX files for cc={cc}")
regular_sms = []
for f in files:
file_name = os.path.basename(f)
sm_number = file_name.rstrip(".ptx").lstrip(prefix)
if sm_number.endswith("a"):
processed_sm_number = int(sm_number.rstrip("a"))
if processed_sm_number == cc:
return f
else:
regular_sms.append((int(sm_number), f))
regular_result = None
if regular_sms:
regular_result = _get_best_ptx_file(regular_sms, cc)
if regular_result is None:
raise RuntimeError(
"This cuDF installation is missing the necessary PTX "
f"files that are <={cc}."
)
else:
return regular_result[1]
def patch_numba_linker_cuda_11():
# Enable the config option for minor version compatibility
numba_config.CUDA_ENABLE_MINOR_VERSION_COMPATIBILITY = 1
if "numba.cuda" in sys.modules:
# Patch numba for version 0.57.0 MVC support, which must know the
# config value at import time. We cannot guarantee the order of imports
# between cudf and numba.cuda so we patch numba to ensure it has these
# names available.
# See https://github.com/numba/numba/issues/8977 for details.
import numba.cuda
from cubinlinker import CubinLinker, CubinLinkerError
from ptxcompiler import compile_ptx
numba.cuda.cudadrv.driver.compile_ptx = compile_ptx
numba.cuda.cudadrv.driver.CubinLinker = CubinLinker
numba.cuda.cudadrv.driver.CubinLinkerError = CubinLinkerError
def _setup_numba():
"""
Configure the numba linker for use with cuDF. This consists of
potentially putting numba into enhanced compatibility mode
based on the user driver and runtime versions as well as the
version of the CUDA Toolkit used to build the PTX files shipped
with the user cuDF package.
"""
# ptxcompiler is a requirement for cuda 11.x packages but not
# cuda 12.x packages. However its version checking machinery
# is still necessary. If a user happens to have ptxcompiler
in a cuda 12 environment, its use for the purpose of
# checking the driver and runtime versions is harmless
try:
from ptxcompiler.patch import NO_DRIVER, safe_get_versions
except ModuleNotFoundError:
# use vendored version
from cudf.utils._ptxcompiler import NO_DRIVER, safe_get_versions
versions = safe_get_versions()
if versions != NO_DRIVER:
driver_version, runtime_version = versions
ptx_toolkit_version = _get_cuda_version_from_ptx_file(CC_60_PTX_FILE)
# MVC is required whenever any PTX is newer than the driver
# This could be the shipped PTX file or the PTX emitted by
# the version of NVVM on the user system, the latter aligning
# with the runtime version
if (driver_version < ptx_toolkit_version) or (
driver_version < runtime_version
):
if driver_version < (12, 0):
patch_numba_linker_cuda_11()
else:
patch_numba_linker_pynvjitlink()
def _get_cuda_version_from_ptx_file(path):
"""
https://docs.nvidia.com/cuda/parallel-thread-execution/
Each PTX module must begin with a .version
directive specifying the PTX language version
example header:
//
// Generated by NVIDIA NVVM Compiler
//
// Compiler Build ID: CL-31057947
// Cuda compilation tools, release 11.6, V11.6.124
// Based on NVVM 7.0.1
//
.version 7.6
.target sm_52
.address_size 64
"""
with open(path) as ptx_file:
for line in ptx_file:
if line.startswith(".version"):
ver_line = line
break
else:
raise ValueError("Could not read CUDA version from ptx file.")
version = ver_line.strip("\n").split(" ")[1]
# This dictionary maps from supported versions of NVVM to the
# PTX version it produces. The lowest value should be the minimum
# CUDA version required to compile the library. Currently CUDA 11.5
# or higher is required to build cudf. New CUDA versions should
# be added to this dictionary when officially supported.
ver_map = {
"7.5": (11, 5),
"7.6": (11, 6),
"7.7": (11, 7),
"7.8": (11, 8),
"8.0": (12, 0),
"8.1": (12, 1),
"8.2": (12, 2),
"8.3": (12, 3),
}
cuda_ver = ver_map.get(version)
if cuda_ver is None:
raise ValueError(
f"Could not map PTX version {version} to a CUDA version"
)
return cuda_ver
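# Example (illustrative sketch): a PTX header containing ``.version 7.8``
# maps to CUDA 11.8 via ``ver_map`` above.
# >>> import tempfile
# >>> with tempfile.NamedTemporaryFile("w", suffix=".ptx", delete=False) as f:
# ...     _ = f.write(".version 7.8\n.target sm_52\n")
# >>> _get_cuda_version_from_ptx_file(f.name)
# (11, 8)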
class _CUDFNumbaConfig:
def __enter__(self):
self.enter_val = numba_config.CUDA_LOW_OCCUPANCY_WARNINGS
numba_config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
def __exit__(self, exc_type, exc_value, traceback):
numba_config.CUDA_LOW_OCCUPANCY_WARNINGS = self.enter_val
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/__init__.py
|
# Copyright (c) 2020, NVIDIA CORPORATION.
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/queryutils.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
import ast
import datetime
from typing import Any, Dict
import numpy as np
from numba import cuda
import cudf
from cudf.core.buffer import acquire_spill_lock
from cudf.core.column import column_empty
from cudf.utils import applyutils
from cudf.utils._numba import _CUDFNumbaConfig
from cudf.utils.dtypes import (
BOOL_TYPES,
DATETIME_TYPES,
NUMERIC_TYPES,
TIMEDELTA_TYPES,
)
ENVREF_PREFIX = "__CUDF_ENVREF__"
SUPPORTED_QUERY_TYPES = {
np.dtype(dt)
for dt in NUMERIC_TYPES | DATETIME_TYPES | TIMEDELTA_TYPES | BOOL_TYPES
}
class QuerySyntaxError(ValueError):
pass
class _NameExtractor(ast.NodeVisitor):
def __init__(self):
self.colnames = set()
self.refnames = set()
def visit_Name(self, node):
if not isinstance(node.ctx, ast.Load):
raise QuerySyntaxError("assignment is not allowed")
name = node.id
chosen = (
self.refnames if name.startswith(ENVREF_PREFIX) else self.colnames
)
chosen.add(name)
def query_parser(text):
"""The query expression parser.
See https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html
* names with '@' prefix are global reference.
* other names must be column names of the dataframe.
Parameters
----------
text: str
The query string
Returns
-------
info: a `dict` of the parsed info
""" # noqa
# convert any '@' prefix to the internal environment-reference prefix
text = text.replace("@", ENVREF_PREFIX)
tree = ast.parse(text)
_check_error(tree)
[expr] = tree.body
extractor = _NameExtractor()
extractor.visit(expr)
colnames = sorted(extractor.colnames)
refnames = sorted(extractor.refnames)
info = {
"source": text,
"args": colnames + refnames,
"colnames": colnames,
"refnames": refnames,
}
return info
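# Example (illustrative sketch): names prefixed with ``@`` become environment
# references; all other names are treated as column names.
# >>> info = query_parser("a > @cutoff")
# >>> info["colnames"]
# ['a']
# >>> info["refnames"]
# ['__CUDF_ENVREF__cutoff']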
def query_builder(info, funcid):
"""Function builder for the query expression
Parameters
----------
info: dict
From the `query_parser()`
funcid: str
The name for the function being generated
Returns
-------
func: a python function of the query
"""
args = info["args"]
def_line = "def {funcid}({args}):".format(
funcid=funcid, args=", ".join(args)
)
lines = [def_line, " return {}".format(info["source"])]
source = "\n".join(lines)
glbs = {}
exec(source, glbs)
return glbs[funcid]
def _check_error(tree):
if not isinstance(tree, ast.Module):
raise QuerySyntaxError("top level should be of ast.Module")
if len(tree.body) != 1:
raise QuerySyntaxError("too many expressions")
_cache: Dict[Any, Any] = {}
def query_compile(expr):
"""Compile the query expression.
This generates a CUDA Kernel for the query expression. The kernel is
cached for reuse. All variable names, including both references to
columns and references to variables in the calling environment, in the
expression are passed as arguments to the kernel. Thus, the kernel is
reusable on any dataframe and in any environment.
Parameters
----------
expr : str
The boolean expression
Returns
-------
compiled: dict
key "kernel" is the cuda kernel for the query.
key "args" is a sequence of name of the arguments.
"""
# hash returns in the semi-open interval [-2**63, 2**63)
funcid = f"queryexpr_{(hash(expr) + 2**63):x}"
# Load cache
compiled = _cache.get(funcid)
# Cache not found
if compiled is None:
info = query_parser(expr)
fn = query_builder(info, funcid)
args = info["args"]
# compile
devicefn = cuda.jit(device=True)(fn)
kernelid = f"kernel_{funcid}"
kernel = _wrap_query_expr(kernelid, devicefn, args)
compiled = info.copy()
compiled["kernel"] = kernel
# Store cache
_cache[funcid] = compiled
return compiled
_kernel_source = """
@cuda.jit
def {kernelname}(out, {args}):
idx = cuda.grid(1)
if idx < out.size:
out[idx] = queryfn({indiced_args})
"""
def _wrap_query_expr(name, fn, args):
"""Wrap the query expression in a cuda kernel."""
def _add_idx(arg):
if arg.startswith(ENVREF_PREFIX):
return arg
else:
return f"{arg}[idx]"
def _add_prefix(arg):
return f"_args_{arg}"
glbls = {"queryfn": fn, "cuda": cuda}
kernargs = map(_add_prefix, args)
indiced_args = map(_add_prefix, map(_add_idx, args))
src = _kernel_source.format(
kernelname=name,
args=", ".join(kernargs),
indiced_args=", ".join(indiced_args),
)
exec(src, glbls)
kernel = glbls[name]
return kernel
@acquire_spill_lock()
def query_execute(df, expr, callenv):
"""Compile & execute the query expression
Note: the expression is compiled and cached for future reuse.
Parameters
----------
df : DataFrame
expr : str
boolean expression
callenv : dict
Contains keys 'local_dict', 'locals' and 'globals' which are all dict.
They represent the arg, local and global dictionaries of the caller.
"""
# compile
compiled = query_compile(expr)
columns = compiled["colnames"]
# prepare col args
colarrays = [cudf.core.dataframe.extract_col(df, col) for col in columns]
# wait to check the types until we know which cols are used
if any(col.dtype not in SUPPORTED_QUERY_TYPES for col in colarrays):
raise TypeError(
"query only supports numeric, datetime, timedelta, "
"or bool dtypes."
)
colarrays = [col.data_array_view(mode="read") for col in colarrays]
kernel = compiled["kernel"]
# process env args
envargs = []
envdict = callenv["globals"].copy()
envdict.update(callenv["locals"])
envdict.update(callenv["local_dict"])
for name in compiled["refnames"]:
name = name[len(ENVREF_PREFIX) :]
try:
val = envdict[name]
if isinstance(val, datetime.datetime):
val = np.datetime64(val)
except KeyError:
msg = "{!r} not defined in the calling environment"
raise NameError(msg.format(name))
else:
envargs.append(val)
# allocate output buffer
nrows = len(df)
out = column_empty(nrows, dtype=np.bool_)
# run kernel
args = [out] + colarrays + envargs
with _CUDFNumbaConfig():
kernel.forall(nrows)(*args)
out_mask = applyutils.make_aggregate_nullmask(df, columns=columns)
return out.set_mask(out_mask).fillna(False)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/utils.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import decimal
import functools
import os
import traceback
import warnings
from typing import FrozenSet, Set, Union
import numpy as np
import pandas as pd
import rmm
import cudf
import cudf.api.types
from cudf.core import column
from cudf.core.buffer import as_buffer
# Dtype of each mask word and the number of bits it holds
mask_dtype = cudf.api.types.dtype(np.int32)
mask_bitsize = mask_dtype.itemsize * 8
# Mapping from ufuncs to the corresponding binary operators.
_ufunc_binary_operations = {
# Arithmetic binary operations.
"add": "add",
"subtract": "sub",
"multiply": "mul",
"matmul": "matmul",
"divide": "truediv",
"true_divide": "truediv",
"floor_divide": "floordiv",
"power": "pow",
"float_power": "pow",
"remainder": "mod",
"mod": "mod",
"fmod": "mod",
# Bitwise binary operations.
"bitwise_and": "and",
"bitwise_or": "or",
"bitwise_xor": "xor",
# Comparison binary operators
"greater": "gt",
"greater_equal": "ge",
"less": "lt",
"less_equal": "le",
"not_equal": "ne",
"equal": "eq",
}
# These operators need to be mapped to their inverses when performing a
# reflected ufunc operation because no reflected version of the operators
# themselves exist. When these operators are invoked directly (not via
# __array_ufunc__) Python takes care of calling the inverse operation.
_ops_without_reflection = {
"gt": "lt",
"ge": "le",
"lt": "gt",
"le": "ge",
# ne and eq are symmetric, so they are their own inverse op
"ne": "ne",
"eq": "eq",
}
# This is the implementation of __array_ufunc__ used for Frame and Column.
# For more detail on this function and how it should work, see
# https://numpy.org/doc/stable/reference/ufuncs.html
def _array_ufunc(obj, ufunc, method, inputs, kwargs):
# We don't currently support reduction, accumulation, etc. We also
# don't support any special kwargs or higher arity ufuncs than binary.
if method != "__call__" or kwargs or ufunc.nin > 2:
return NotImplemented
fname = ufunc.__name__
if fname in _ufunc_binary_operations:
reflect = obj is not inputs[0]
other = inputs[0] if reflect else inputs[1]
op = _ufunc_binary_operations[fname]
if reflect and op in _ops_without_reflection:
op = _ops_without_reflection[op]
reflect = False
op = f"__{'r' if reflect else ''}{op}__"
# float_power returns float irrespective of the input type.
# TODO: Do not get the attribute directly, get from the operator module
# so that we can still exploit reflection.
if fname == "float_power":
return getattr(obj, op)(other).astype(float)
return getattr(obj, op)(other)
# Special handling for various unary operations.
if fname == "negative":
return obj * -1
if fname == "positive":
return obj.copy(deep=True)
if fname == "invert":
return ~obj
if fname == "absolute":
# TODO: Make sure all obj (mainly Column) implement abs.
return abs(obj)
if fname == "fabs":
return abs(obj).astype(np.float64)
# None is a sentinel used by subclasses to trigger cupy dispatch.
return None
_EQUALITY_OPS = {
"__eq__",
"__ne__",
"__lt__",
"__gt__",
"__le__",
"__ge__",
}
# The test root is set by pytest to support situations where tests are run from
# a source tree on a built version of cudf.
NO_EXTERNAL_ONLY_APIS = os.getenv("NO_EXTERNAL_ONLY_APIS")
_cudf_root = os.path.dirname(cudf.__file__)
# If the environment variable for the test root is not set, we default to
# using the path relative to the cudf root directory.
_tests_root = os.getenv("_CUDF_TEST_ROOT") or os.path.join(_cudf_root, "tests")
def _external_only_api(func, alternative=""):
"""Decorator to indicate that a function should not be used internally.
cudf contains many APIs that exist for pandas compatibility but are
intrinsically inefficient. For some of these cudf has internal
equivalents that are much faster. Usage of the slow public APIs inside
our implementation can lead to unnecessary performance bottlenecks.
Applying this decorator to such functions and setting the environment
variable NO_EXTERNAL_ONLY_APIS will cause such functions to raise
exceptions if they are called from anywhere inside cudf, making it easy
to identify and excise such usage.
The `alternative` should be a complete phrase or sentence since it will
be used verbatim in error messages.
"""
# If the first arg is a string then an alternative function to use in
# place of this API was provided, so we pass that to a subsequent call.
# It would be cleaner to implement this pattern by using a class
# decorator with a factory method, but there is no way to generically
# wrap docstrings on a class (we would need the docstring to be on the
# class itself, not instances, because that's what `help` looks at) and
# there is also no way to make mypy happy with that approach.
if isinstance(func, str):
return lambda actual_func: _external_only_api(actual_func, func)
if not NO_EXTERNAL_ONLY_APIS:
return func
@functools.wraps(func)
def wrapper(*args, **kwargs):
# Check the immediately preceding frame to see if it's in cudf.
frame, lineno = next(traceback.walk_stack(None))
fn = frame.f_code.co_filename
if _cudf_root in fn and _tests_root not in fn:
raise RuntimeError(
f"External-only API called in {fn} at line {lineno}. "
f"{alternative}"
)
return func(*args, **kwargs)
return wrapper
def initfunc(f):
"""
Decorator for initialization functions that should
be run exactly once.
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
if wrapper.initialized:
return
wrapper.initialized = True
return f(*args, **kwargs)
wrapper.initialized = False
return wrapper
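# Illustrative usage sketch (hypothetical helper, not used elsewhere in this
# module): a function decorated with initfunc runs its body only once.
def _example_initfunc():
    calls = []

    @initfunc
    def setup():
        calls.append("ran")

    setup()
    setup()  # second call is a no-op
    assert calls == ["ran"]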
def clear_cache():
"""Clear all internal caches"""
cudf.Scalar._clear_instance_cache()
class GetAttrGetItemMixin:
"""This mixin changes `__getattr__` to attempt a `__getitem__` call.
Classes that include this mixin gain enhanced functionality for the
behavior of attribute access like `obj.foo`: if `foo` is not an attribute
of `obj`, obj['foo'] will be attempted, and the result returned. To make
this behavior safe, classes that include this mixin must define a class
attribute `_PROTECTED_KEYS` that defines the attributes that are accessed
within `__getitem__`. For example, if `__getitem__` is defined as
`return self._data[key]`, we must define `_PROTECTED_KEYS={'_data'}`.
"""
# Tracking of protected keys by each subclass is necessary to make the
# `__getattr__`->`__getitem__` call safe. See
# https://nedbatchelder.com/blog/201010/surprising_getattr_recursion.html # noqa: E501
# for an explanation. In brief, defining the `_PROTECTED_KEYS` allows this
# class to avoid calling `__getitem__` inside `__getattr__` when
# `__getitem__` will internally again call `__getattr__`, resulting in an
# infinite recursion.
# This problem only arises when the copy protocol is invoked (e.g. by
# `copy.copy` or `pickle.dumps`), and could also be avoided by redefining
# methods involved with the copy protocol such as `__reduce__` or
# `__setstate__`, but this class may be used in complex multiple
# inheritance hierarchies that might also override serialization. The
# solution here is a minimally invasive change that avoids such conflicts.
_PROTECTED_KEYS: Union[FrozenSet[str], Set[str]] = frozenset()
def __getattr__(self, key):
if key in self._PROTECTED_KEYS:
raise AttributeError
try:
return self[key]
except KeyError:
raise AttributeError(
f"{type(self).__name__} object has no attribute {key}"
)
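# Illustrative usage sketch (hypothetical class, not used elsewhere in this
# module): attribute access falls back to item access, with "_data" protected
# to avoid the recursion described above.
def _example_getattr_getitem_mixin():
    class Bag(GetAttrGetItemMixin):
        _PROTECTED_KEYS = frozenset({"_data"})

        def __init__(self, **kwargs):
            self._data = kwargs

        def __getitem__(self, key):
            return self._data[key]

    bag = Bag(x=1)
    assert bag.x == 1  # resolved via the __getitem__ fallback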
class NotIterable:
def __iter__(self):
"""
Iteration is unsupported.
See :ref:`iteration <pandas-comparison/iteration>` for more
information.
"""
raise TypeError(
f"{self.__class__.__name__} object is not iterable. "
f"Consider using `.to_arrow()`, `.to_pandas()` or `.values_host` "
f"if you wish to iterate over the values."
)
def pa_mask_buffer_to_mask(mask_buf, size):
"""
Convert PyArrow mask buffer to cuDF mask buffer
"""
mask_size = cudf._lib.null_mask.bitmask_allocation_size_bytes(size)
if mask_buf.size < mask_size:
dbuf = rmm.DeviceBuffer(size=mask_size)
dbuf.copy_from_host(np.asarray(mask_buf).view("u1"))
return as_buffer(dbuf)
return as_buffer(mask_buf)
def _isnat(val):
"""Wraps np.isnat to return False instead of error on invalid inputs."""
if val is pd.NaT:
return True
elif not isinstance(val, (np.datetime64, np.timedelta64, str)):
return False
else:
try:
return val in {"NaT", "NAT"} or np.isnat(val)
except TypeError:
return False
def search_range(x: int, ri: range, *, side: str) -> int:
"""
Find insertion point in a range to maintain sorted order
Parameters
----------
x
Integer to insert
ri
Range to insert into
side
Tie-breaking decision for the case that `x` is a member of the
range. If `"left"` then the insertion point is before the
entry, otherwise it is after.
Returns
-------
int
The insertion point
See Also
--------
numpy.searchsorted
Notes
-----
Let ``p`` be the return value, then if ``side="left"`` the
following invariants are maintained::
all(x < n for n in ri[:p])
all(x >= n for n in ri[p:])
Conversely, if ``side="right"`` then we have::
all(x <= n for n in ri[:p])
all(x > n for n in ri[p:])
Examples
--------
    For the series 1, 4, 7 (i.e. ``range(1, 10, 3)``):
>>> search_range(4, range(1, 10, 3), side="left")
1
>>> search_range(4, range(1, 10, 3), side="right")
2
"""
assert side in {"left", "right"}
if flip := (ri.step < 0):
ri = ri[::-1]
shift = int(side == "right")
else:
shift = int(side == "left")
offset = (x - ri.start - shift) // ri.step + 1
if flip:
offset = len(ri) - offset
return max(min(len(ri), offset), 0)
def is_na_like(obj):
"""
Check if `obj` is a cudf NA value,
i.e., None, cudf.NA or cudf.NaT
"""
return obj is None or obj is cudf.NA or obj is cudf.NaT
def _warn_no_dask_cudf(fn):
@functools.wraps(fn)
def wrapper(self):
# try import
try:
# Import dask_cudf (if available) in case
# this is being called within Dask Dataframe
import dask_cudf # noqa: F401
except ImportError:
warnings.warn(
f"Using dask to tokenize a {type(self)} object, "
"but `dask_cudf` is not installed. Please install "
"`dask_cudf` for proper dispatching."
)
return fn(self)
return wrapper
def _is_same_name(left_name, right_name):
# Internal utility to compare if two names are same.
with warnings.catch_warnings():
# numpy throws warnings while comparing
# NaT values with non-NaT values.
warnings.simplefilter("ignore")
try:
same = (left_name is right_name) or (left_name == right_name)
if not same:
if isinstance(left_name, decimal.Decimal) and isinstance(
right_name, decimal.Decimal
):
return left_name.is_nan() and right_name.is_nan()
if isinstance(left_name, float) and isinstance(
right_name, float
):
return np.isnan(left_name) and np.isnan(right_name)
if isinstance(left_name, np.datetime64) and isinstance(
right_name, np.datetime64
):
return np.isnan(left_name) and np.isnan(right_name)
return same
except TypeError:
return False
def _all_bools_with_nulls(lhs, rhs, bool_fill_value):
# Internal utility to construct a boolean column
# by combining nulls from `lhs` & `rhs`.
if lhs.has_nulls() and rhs.has_nulls():
result_mask = lhs._get_mask_as_column() & rhs._get_mask_as_column()
elif lhs.has_nulls():
result_mask = lhs._get_mask_as_column()
elif rhs.has_nulls():
result_mask = rhs._get_mask_as_column()
else:
result_mask = None
result_col = column.full(
size=len(lhs), fill_value=bool_fill_value, dtype=cudf.dtype(np.bool_)
)
if result_mask is not None:
result_col = result_col.set_mask(result_mask.as_mask())
return result_col
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/hash_vocab_utils.py
|
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
# This function is from the rapidsai/clx repo at below link
# https://github.com/rapidsai/clx/blob/267c6d30805c9dcbf80840f222bf31c5c4b7068a/python/clx/analytics/_perfect_hash.py
import numpy as np
PRIME = np.uint64(281474976710677)
# Coefficient ranges for the inner hash - these must be large so that we
# get randomness in the bottom bits when modding
A_SECOND_LEVEL_POW = np.uint8(48)
B_SECOND_LEVEL_POW = np.uint8(7)
A_LBOUND_SECOND_LEVEL_HASH = 2**16
A_HBOUND_SECOND_LEVEL_HASH = 2**A_SECOND_LEVEL_POW
B_LBOUND_SECOND_LEVEL_HASH = 0
B_HBOUND_SECOND_LEVEL_HASH = 2**B_SECOND_LEVEL_POW
# Extremely generous and should never be hit. This limit is imposed to
# ensure we can bit-pack all the information needed for the bin hash
# functions - a, b and the table size
MAX_SIZE_FOR_INITIAL_BIN = 2**8 - 1
# Shifts for bit packing
A_SECOND_LEVEL_SHIFT_AMT = np.uint8(64 - A_SECOND_LEVEL_POW)
B_SECOND_LEVEL_SHIFT_AMT = np.uint8(
64 - A_SECOND_LEVEL_POW - B_SECOND_LEVEL_POW
)
BITS_FOR_INNER_TABLE_SIZE = np.uint8(8)
NOT_FOUND = -1
def _sdbm_hash(string):
hv = 0
mask = (1 << 48) - 1
for c in string:
hv = ord(c) + (hv << 6) + (hv << 16) - hv
hv &= mask
return hv
def _hash_func(k, a, b, size):
k = np.uint64(k)
a = np.uint64(a)
b = np.uint64(b)
size = np.uint64(size)
return ((a * k + b) % PRIME) % size
def _longest_bin_length(bins):
return len(max(bins, key=len))
def _make_bins(data, num_bins, a, b):
bins = [[] for i in range(num_bins)]
for item in data:
bins[_hash_func(item, a, b, num_bins)].append(item)
return bins
def _new_bin_length(orig_length):
return int(orig_length)
def _get_space_util(bins, init_bins):
return sum(_new_bin_length(len(b)) for b in bins) + 2 * init_bins
def _pick_initial_a_b(data, max_constant, init_bins):
while True:
a = np.random.randint(2**12, 2**15)
b = np.random.randint(2**12, 2**15)
bins = _make_bins(data, init_bins, a, b)
score = _get_space_util(bins, init_bins) / len(data)
longest = _new_bin_length(_longest_bin_length(bins))
if score <= max_constant and longest <= MAX_SIZE_FOR_INITIAL_BIN:
print(f"Attempting to build table using {score:.6f}n space")
print(f"Longest bin was {longest}")
break
return bins, a, b
def _find_hash_for_internal(hash_bin):
if not hash_bin:
return [[], 0, 0]
new_length = _new_bin_length(len(hash_bin))
while True:
a = np.random.randint(
A_LBOUND_SECOND_LEVEL_HASH, A_HBOUND_SECOND_LEVEL_HASH
)
b = np.random.randint(
B_LBOUND_SECOND_LEVEL_HASH, B_HBOUND_SECOND_LEVEL_HASH
)
bins = _make_bins(hash_bin, new_length, a, b)
max_length = len(max(bins, key=len))
if max_length == 1:
bins = [b[0] if b else 0 for b in bins]
return bins, a, b
def _perfect_hash(integers, max_constant):
num_top_level_bins = len(integers) // 4
init_bins, init_a, init_b = _pick_initial_a_b(
integers, max_constant, num_top_level_bins
)
flattened_bins = []
internal_table_coeffs = np.zeros(
shape=[num_top_level_bins], dtype=np.uint64
)
offset_into_flattened_table = np.zeros(
shape=[num_top_level_bins + 1], dtype=np.uint64
)
max_bin_length = 0
for i, b in enumerate(init_bins):
if i % 500 == 0:
print(f"Processing bin {i} / {len(init_bins)} of size = {len(b)}")
internal_table, coeff_a, coeff_b = _find_hash_for_internal(b)
bin_length = len(internal_table)
max_bin_length = max(bin_length, max_bin_length)
internal_table_coeffs[i] = (
coeff_a << A_SECOND_LEVEL_SHIFT_AMT
| coeff_b << B_SECOND_LEVEL_SHIFT_AMT
| bin_length
)
offset_into_flattened_table[i + 1] = (
offset_into_flattened_table[i] + bin_length
)
flattened_bins.extend(internal_table)
print(
"Final table size {} elements compared to {} for original".format(
len(flattened_bins), len(integers)
)
)
print("Max bin length was", max_bin_length)
return (
init_a,
init_b,
num_top_level_bins,
flattened_bins,
internal_table_coeffs,
offset_into_flattened_table,
)
def _pack_keys_and_values(flattened_hash_table, original_dict):
for i in range(len(flattened_hash_table)):
if flattened_hash_table[i] in original_dict:
value = original_dict[flattened_hash_table[i]]
flattened_hash_table[i] <<= 16
flattened_hash_table[i] |= value
def _load_vocab_dict(path):
vocab = {}
with open(path, encoding="utf-8") as f:
counter = 0
for line in f:
vocab[line.strip()] = counter
counter += 1
return vocab
def _store_func(
out_name,
outer_a,
outer_b,
num_outer_bins,
hash_table,
inner_table_coeffs,
offsets_into_ht,
unk_tok_id,
first_token_id,
sep_token_id,
):
with open(out_name, mode="w+") as f:
f.write(f"{outer_a}\n")
f.write(f"{outer_b}\n")
f.write(f"{num_outer_bins}\n")
f.writelines(
f"{coeff} {offset}\n"
for coeff, offset in zip(inner_table_coeffs, offsets_into_ht)
)
f.write(f"{len(hash_table)}\n")
f.writelines(f"{kv}\n" for kv in hash_table)
f.writelines(
f"{tok_id}\n"
for tok_id in [unk_tok_id, first_token_id, sep_token_id]
)
def _retrieve(
k,
outer_a,
outer_b,
num_outer_bins,
hash_table,
inner_table_coeffs,
offsets_into_ht,
):
bin_hash = _hash_func(k, outer_a, outer_b, num_outer_bins)
start_offset_in_ht = offsets_into_ht[bin_hash]
inner_table_values = inner_table_coeffs[bin_hash]
one = np.uint64(1)
inner_a = inner_table_values >> A_SECOND_LEVEL_SHIFT_AMT
inner_b = (inner_table_values >> B_SECOND_LEVEL_SHIFT_AMT) & (
(one << B_SECOND_LEVEL_POW) - one
)
size = inner_table_values & ((one << BITS_FOR_INNER_TABLE_SIZE) - one)
inner_offset = _hash_func(k, inner_a, inner_b, size)
kv = hash_table[start_offset_in_ht + inner_offset]
key, value = kv >> 16, kv & ((1 << 16) - 1)
indicator = key == k
return indicator * value + (not indicator) * NOT_FOUND
def hash_vocab(
vocab_path,
output_path,
unk_tok="[UNK]",
first_token="[CLS]",
sep_token="[SEP]",
):
"""
    Write the perfect-hash table for the given vocabulary to ``output_path``.
"""
np.random.seed(1243342)
vocab = _load_vocab_dict(vocab_path)
keys = list(map(_sdbm_hash, vocab.keys()))
hashed_vocab = {_sdbm_hash(key): value for key, value in vocab.items()}
error_message = (
"A collision occurred and only sdbm token hash is currently "
"supported. This can be extended to use random hashes if needed."
)
assert len(hashed_vocab) == len(vocab), error_message
(
outer_a,
outer_b,
num_outer_bins,
hash_table,
inner_table_coeffs,
offsets_into_ht,
) = _perfect_hash(keys, 10)
_pack_keys_and_values(hash_table, hashed_vocab)
_store_func(
output_path,
outer_a,
outer_b,
num_outer_bins,
hash_table,
inner_table_coeffs,
offsets_into_ht,
vocab[unk_tok],
vocab[first_token],
vocab[sep_token],
)
for key, value in hashed_vocab.items():
val = _retrieve(
key,
outer_a,
outer_b,
num_outer_bins,
hash_table,
inner_table_coeffs,
offsets_into_ht,
)
assert (
val == value
), f"Incorrect value found. Got {val} expected {value}"
print("All present tokens return correct value.")
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/cudautils.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
from pickle import dumps
import cachetools
from numba import cuda
from numba.np import numpy_support
from cudf.utils._numba import _CUDFNumbaConfig
#
# Misc kernels
#
@cuda.jit
def gpu_window_sizes_from_offset(arr, window_sizes, offset):
i = cuda.grid(1)
j = i
if i < arr.size:
while j > -1:
if (arr[i] - arr[j]) >= offset:
break
j -= 1
window_sizes[i] = i - j
def window_sizes_from_offset(arr, offset):
window_sizes = cuda.device_array(shape=(arr.shape), dtype="int32")
if arr.size > 0:
with _CUDFNumbaConfig():
gpu_window_sizes_from_offset.forall(arr.size)(
arr, window_sizes, offset
)
return window_sizes
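# Illustrative usage sketch (hypothetical helper, requires a CUDA device): for
# a sorted column [1, 2, 4, 7] and offset=3, each window covers the trailing
# values whose distance from the current value is strictly less than the
# offset, giving window sizes [1, 2, 2, 1].
def _example_window_sizes_from_offset():
    import numpy as np

    arr = cuda.to_device(np.array([1, 2, 4, 7], dtype="int64"))
    sizes = window_sizes_from_offset(arr, 3)
    assert sizes.copy_to_host().tolist() == [1, 2, 2, 1]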
@cuda.jit
def gpu_grouped_window_sizes_from_offset(
arr, window_sizes, group_starts, offset
):
i = cuda.grid(1)
j = i
if i < arr.size:
while j > (group_starts[i] - 1):
if (arr[i] - arr[j]) >= offset:
break
j -= 1
window_sizes[i] = i - j
def grouped_window_sizes_from_offset(arr, group_starts, offset):
window_sizes = cuda.device_array(shape=(arr.shape), dtype="int32")
if arr.size > 0:
with _CUDFNumbaConfig():
gpu_grouped_window_sizes_from_offset.forall(arr.size)(
arr, window_sizes, group_starts, offset
)
return window_sizes
# This cache is keyed on the (signature, code, closure variables) of UDFs, so
# it can hit for distinct functions that are similar. The lru_cache wrapping
# compile_udf misses for these similar functions, but doesn't need to serialize
# closure variables to check for a hit.
_udf_code_cache: cachetools.LRUCache = cachetools.LRUCache(maxsize=32)
def make_cache_key(udf, sig):
"""
Build a cache key for a user defined function. Used to avoid
recompiling the same function for the same set of types
"""
codebytes = udf.__code__.co_code
constants = udf.__code__.co_consts
names = udf.__code__.co_names
if udf.__closure__ is not None:
cvars = tuple(x.cell_contents for x in udf.__closure__)
cvarbytes = dumps(cvars)
else:
cvarbytes = b""
return names, constants, codebytes, cvarbytes, sig
def compile_udf(udf, type_signature):
"""Compile ``udf`` with `numba`
Compile a python callable function ``udf`` with
`numba.cuda.compile_ptx_for_current_device(device=True)` using
``type_signature`` into CUDA PTX together with the generated output type.
The output is expected to be passed to the PTX parser in `libcudf`
to generate a CUDA device function to be inlined into CUDA kernels,
compiled at runtime and launched.
Parameters
----------
udf:
a python callable function
type_signature:
a tuple that specifies types of each of the input parameters of ``udf``.
        The types should be ones from `numba.types` and can be converted
        from numpy types with `numba.numpy_support.from_dtype(...)`.
Returns
-------
ptx_code:
The compiled CUDA PTX
output_type:
        A numpy type
"""
import cudf.core.udf
key = make_cache_key(udf, type_signature)
res = _udf_code_cache.get(key)
if res:
return res
# We haven't compiled a function like this before, so need to fall back to
# compilation with Numba
ptx_code, return_type = cuda.compile_ptx_for_current_device(
udf, type_signature, device=True
)
if not isinstance(return_type, cudf.core.udf.masked_typing.MaskedType):
output_type = numpy_support.as_dtype(return_type).type
else:
output_type = return_type
# Populate the cache for this function
res = (ptx_code, output_type)
_udf_code_cache[key] = res
return res
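# Illustrative usage sketch (hypothetical helper, requires a CUDA device): a
# plain numeric UDF compiles to a PTX string plus a numpy output type.
def _example_compile_udf():
    import numpy as np
    from numba import types

    ptx, out_type = compile_udf(
        lambda x, y: x + y, (types.float64, types.float64)
    )
    assert isinstance(ptx, str)
    assert out_type is np.float64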
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/nvtx_annotation.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
import hashlib
from functools import partial
from nvtx import annotate
_NVTX_COLORS = ["green", "blue", "purple", "rapids"]
def _get_color_for_nvtx(name):
m = hashlib.sha256()
m.update(name.encode())
hash_value = int(m.hexdigest(), 16)
idx = hash_value % len(_NVTX_COLORS)
return _NVTX_COLORS[idx]
def _cudf_nvtx_annotate(func, domain="cudf_python"):
"""Decorator for applying nvtx annotations to methods in cudf."""
return annotate(
message=func.__qualname__,
color=_get_color_for_nvtx(func.__qualname__),
domain=domain,
)(func)
_dask_cudf_nvtx_annotate = partial(
_cudf_nvtx_annotate, domain="dask_cudf_python"
)
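# Illustrative usage sketch (hypothetical function, not part of the original
# module): the decorated function is wrapped in an NVTX range named after its
# qualified name, in the "cudf_python" domain, with a deterministic color.
@_cudf_nvtx_annotate
def _example_annotated_step():
    return 42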
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/dtypes.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import datetime
from collections import namedtuple
from decimal import Decimal
import cupy as cp
import numpy as np
import pandas as pd
import pyarrow as pa
from pandas.core.dtypes.common import infer_dtype_from_object
import cudf
from cudf._typing import DtypeObj
from cudf.api.types import is_bool, is_float, is_integer
"""Map numpy dtype to pyarrow types.
Note that np.bool_ bitwidth (8) is different from pa.bool_ (1). Special
handling is required when converting a Boolean column into arrow.
"""
_np_pa_dtypes = {
np.float64: pa.float64(),
np.float32: pa.float32(),
np.int64: pa.int64(),
np.longlong: pa.int64(),
np.int32: pa.int32(),
np.int16: pa.int16(),
np.int8: pa.int8(),
np.bool_: pa.bool_(),
np.uint64: pa.uint64(),
np.uint32: pa.uint32(),
np.uint16: pa.uint16(),
np.uint8: pa.uint8(),
np.datetime64: pa.date64(),
np.object_: pa.string(),
np.str_: pa.string(),
}
np_dtypes_to_pandas_dtypes = {
np.dtype("uint8"): pd.UInt8Dtype(),
np.dtype("uint16"): pd.UInt16Dtype(),
np.dtype("uint32"): pd.UInt32Dtype(),
np.dtype("uint64"): pd.UInt64Dtype(),
np.dtype("int8"): pd.Int8Dtype(),
np.dtype("int16"): pd.Int16Dtype(),
np.dtype("int32"): pd.Int32Dtype(),
np.dtype("int64"): pd.Int64Dtype(),
np.dtype("bool_"): pd.BooleanDtype(),
np.dtype("object"): pd.StringDtype(),
}
pyarrow_dtypes_to_pandas_dtypes = {
pa.uint8(): pd.UInt8Dtype(),
pa.uint16(): pd.UInt16Dtype(),
pa.uint32(): pd.UInt32Dtype(),
pa.uint64(): pd.UInt64Dtype(),
pa.int8(): pd.Int8Dtype(),
pa.int16(): pd.Int16Dtype(),
pa.int32(): pd.Int32Dtype(),
pa.int64(): pd.Int64Dtype(),
pa.bool_(): pd.BooleanDtype(),
pa.string(): pd.StringDtype(),
}
pandas_dtypes_to_np_dtypes = {
pd.UInt8Dtype(): np.dtype("uint8"),
pd.UInt16Dtype(): np.dtype("uint16"),
pd.UInt32Dtype(): np.dtype("uint32"),
pd.UInt64Dtype(): np.dtype("uint64"),
pd.Int8Dtype(): np.dtype("int8"),
pd.Int16Dtype(): np.dtype("int16"),
pd.Int32Dtype(): np.dtype("int32"),
pd.Int64Dtype(): np.dtype("int64"),
pd.BooleanDtype(): np.dtype("bool_"),
pd.StringDtype(): np.dtype("object"),
}
pandas_dtypes_alias_to_cudf_alias = {
"UInt8": "uint8",
"UInt16": "uint16",
"UInt32": "uint32",
"UInt64": "uint64",
"Int8": "int8",
"Int16": "int16",
"Int32": "int32",
"Int64": "int64",
"boolean": "bool",
}
np_dtypes_to_pandas_dtypes[np.dtype("float32")] = pd.Float32Dtype()
np_dtypes_to_pandas_dtypes[np.dtype("float64")] = pd.Float64Dtype()
pandas_dtypes_to_np_dtypes[pd.Float32Dtype()] = np.dtype("float32")
pandas_dtypes_to_np_dtypes[pd.Float64Dtype()] = np.dtype("float64")
pandas_dtypes_alias_to_cudf_alias["Float32"] = "float32"
pandas_dtypes_alias_to_cudf_alias["Float64"] = "float64"
SIGNED_INTEGER_TYPES = {"int8", "int16", "int32", "int64"}
UNSIGNED_TYPES = {"uint8", "uint16", "uint32", "uint64"}
INTEGER_TYPES = SIGNED_INTEGER_TYPES | UNSIGNED_TYPES
FLOAT_TYPES = {"float32", "float64"}
SIGNED_TYPES = SIGNED_INTEGER_TYPES | FLOAT_TYPES
NUMERIC_TYPES = SIGNED_TYPES | UNSIGNED_TYPES
DATETIME_TYPES = {
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
}
TIMEDELTA_TYPES = {
"timedelta64[s]",
"timedelta64[ms]",
"timedelta64[us]",
"timedelta64[ns]",
}
OTHER_TYPES = {"bool", "category", "str"}
STRING_TYPES = {"object"}
BOOL_TYPES = {"bool"}
ALL_TYPES = NUMERIC_TYPES | DATETIME_TYPES | TIMEDELTA_TYPES | OTHER_TYPES
def np_to_pa_dtype(dtype):
"""Util to convert numpy dtype to PyArrow dtype."""
# special case when dtype is np.datetime64
if dtype.kind == "M":
time_unit, _ = np.datetime_data(dtype)
if time_unit in ("s", "ms", "us", "ns"):
# return a pa.Timestamp of the appropriate unit
return pa.timestamp(time_unit)
# default is int64_t UNIX ms
return pa.date64()
elif dtype.kind == "m":
time_unit, _ = np.datetime_data(dtype)
if time_unit in ("s", "ms", "us", "ns"):
# return a pa.Duration of the appropriate unit
return pa.duration(time_unit)
# default fallback unit is ns
return pa.duration("ns")
return _np_pa_dtypes[cudf.dtype(dtype).type]
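# Illustrative usage sketch (hypothetical helper, not used elsewhere in this
# module): datetimes map to timestamps of the same unit, timedeltas map to
# durations, and plain numeric dtypes go through the lookup table above.
def _example_np_to_pa_dtype():
    assert np_to_pa_dtype(np.dtype("datetime64[ms]")) == pa.timestamp("ms")
    assert np_to_pa_dtype(np.dtype("timedelta64[us]")) == pa.duration("us")
    assert np_to_pa_dtype(np.dtype("int32")) == pa.int32()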
def get_numeric_type_info(dtype):
_TypeMinMax = namedtuple("_TypeMinMax", "min,max")
if dtype.kind in {"i", "u"}:
info = np.iinfo(dtype)
return _TypeMinMax(info.min, info.max)
elif dtype.kind == "f":
return _TypeMinMax(dtype.type("-inf"), dtype.type("+inf"))
else:
raise TypeError(dtype)
def numeric_normalize_types(*args):
"""Cast all args to a common type using numpy promotion logic"""
dtype = np.result_type(*[a.dtype for a in args])
return [a.astype(dtype) for a in args]
def _find_common_type_decimal(dtypes):
# Find the largest scale and the largest difference between
# precision and scale of the columns to be concatenated
s = max(dtype.scale for dtype in dtypes)
lhs = max(dtype.precision - dtype.scale for dtype in dtypes)
# Combine to get the necessary precision and clip at the maximum
# precision
p = s + lhs
if p > cudf.Decimal64Dtype.MAX_PRECISION:
return cudf.Decimal128Dtype(
min(cudf.Decimal128Dtype.MAX_PRECISION, p), s
)
elif p > cudf.Decimal32Dtype.MAX_PRECISION:
return cudf.Decimal64Dtype(
min(cudf.Decimal64Dtype.MAX_PRECISION, p), s
)
else:
return cudf.Decimal32Dtype(
min(cudf.Decimal32Dtype.MAX_PRECISION, p), s
)
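# Illustrative worked example (hypothetical helper, not used elsewhere in this
# module): for Decimal64Dtype(10, 2) and Decimal64Dtype(12, 5), the largest
# scale is 5 and the largest integral part is 8 digits, so the common type
# needs precision 13 and scale 5, which still fits in a 64-bit decimal.
def _example_find_common_type_decimal():
    common = _find_common_type_decimal(
        [cudf.Decimal64Dtype(10, 2), cudf.Decimal64Dtype(12, 5)]
    )
    assert isinstance(common, cudf.Decimal64Dtype)
    assert (common.precision, common.scale) == (13, 5)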
def cudf_dtype_from_pydata_dtype(dtype):
"""Given a numpy or pandas dtype, converts it into the equivalent cuDF
Python dtype.
"""
if cudf.api.types.is_categorical_dtype(dtype):
return cudf.core.dtypes.CategoricalDtype
elif cudf.api.types.is_decimal32_dtype(dtype):
return cudf.core.dtypes.Decimal32Dtype
elif cudf.api.types.is_decimal64_dtype(dtype):
return cudf.core.dtypes.Decimal64Dtype
elif cudf.api.types.is_decimal128_dtype(dtype):
return cudf.core.dtypes.Decimal128Dtype
elif dtype in cudf._lib.types.SUPPORTED_NUMPY_TO_LIBCUDF_TYPES:
return dtype.type
return infer_dtype_from_object(dtype)
def cudf_dtype_to_pa_type(dtype):
"""Given a cudf pandas dtype, converts it into the equivalent cuDF
Python dtype.
"""
if cudf.api.types.is_categorical_dtype(dtype):
raise NotImplementedError()
elif (
cudf.api.types.is_list_dtype(dtype)
or cudf.api.types.is_struct_dtype(dtype)
or cudf.api.types.is_decimal_dtype(dtype)
):
return dtype.to_arrow()
else:
return np_to_pa_dtype(cudf.dtype(dtype))
def cudf_dtype_from_pa_type(typ):
"""Given a cuDF pyarrow dtype, converts it into the equivalent
cudf pandas dtype.
"""
if pa.types.is_list(typ):
return cudf.core.dtypes.ListDtype.from_arrow(typ)
elif pa.types.is_struct(typ):
return cudf.core.dtypes.StructDtype.from_arrow(typ)
elif pa.types.is_decimal(typ):
return cudf.core.dtypes.Decimal128Dtype.from_arrow(typ)
else:
return cudf.api.types.pandas_dtype(typ.to_pandas_dtype())
def to_cudf_compatible_scalar(val, dtype=None):
"""
Converts the value `val` to a numpy/Pandas scalar,
optionally casting to `dtype`.
If `val` is None, returns None.
"""
if cudf._lib.scalar._is_null_host_scalar(val) or isinstance(
val, cudf.Scalar
):
return val
if not cudf.api.types._is_scalar_or_zero_d_array(val):
raise ValueError(
f"Cannot convert value of type {type(val).__name__} "
"to cudf scalar"
)
if isinstance(val, Decimal):
return val
if isinstance(val, (np.ndarray, cp.ndarray)) and val.ndim == 0:
val = val.item()
if (
(dtype is None) and isinstance(val, str)
) or cudf.api.types.is_string_dtype(dtype):
dtype = "str"
if isinstance(val, str) and val.endswith("\x00"):
# Numpy string dtypes are fixed width and use NULL to
# indicate the end of the string, so they cannot
# distinguish between "abc\x00" and "abc".
# https://github.com/numpy/numpy/issues/20118
# In this case, don't try going through numpy and just use
# the string value directly (cudf.DeviceScalar will DTRT)
return val
tz_error_msg = (
"Cannot covert a timezone-aware timestamp to timezone-naive scalar."
)
if isinstance(val, pd.Timestamp):
if val.tz is not None:
raise NotImplementedError(tz_error_msg)
val = val.to_datetime64()
elif isinstance(val, pd.Timedelta):
val = val.to_timedelta64()
elif isinstance(val, datetime.datetime):
if val.tzinfo is not None:
raise NotImplementedError(tz_error_msg)
val = np.datetime64(val)
elif isinstance(val, datetime.timedelta):
val = np.timedelta64(val)
val = _maybe_convert_to_default_type(
cudf.api.types.pandas_dtype(type(val))
).type(val)
if dtype is not None:
if isinstance(val, str) and np.dtype(dtype).kind == "M":
# pd.Timestamp can handle str, but not np.str_
val = pd.Timestamp(str(val)).to_datetime64().astype(dtype)
else:
val = val.astype(dtype)
if val.dtype.type is np.datetime64:
time_unit, _ = np.datetime_data(val.dtype)
if time_unit in ("D", "W", "M", "Y"):
val = val.astype("datetime64[s]")
elif val.dtype.type is np.timedelta64:
time_unit, _ = np.datetime_data(val.dtype)
if time_unit in ("D", "W", "M", "Y"):
val = val.astype("timedelta64[ns]")
return val
def is_column_like(obj):
"""
This function checks if the given `obj`
is a column-like (Series, Index...)
type or not.
Parameters
----------
obj : object of any type which needs to be validated.
Returns
-------
Boolean: True or False depending on whether the
input `obj` is column-like or not.
"""
return (
isinstance(
obj,
(
cudf.core.column.ColumnBase,
cudf.Series,
cudf.Index,
pd.Series,
pd.Index,
),
)
or (
hasattr(obj, "__cuda_array_interface__")
and len(obj.__cuda_array_interface__["shape"]) == 1
)
or (
hasattr(obj, "__array_interface__")
and len(obj.__array_interface__["shape"]) == 1
)
)
def can_convert_to_column(obj):
"""
This function checks if the given `obj`
can be used to create a column or not.
Parameters
----------
obj : object of any type which needs to be validated.
Returns
-------
Boolean: True or False depending on whether the
input `obj` is column-compatible or not.
"""
return is_column_like(obj) or cudf.api.types.is_list_like(obj)
def min_scalar_type(a, min_size=8):
return min_signed_type(a, min_size=min_size)
def min_signed_type(x, min_size=8):
"""
Return the smallest *signed* integer dtype
that can represent the integer ``x``
"""
for int_dtype in np.sctypes["int"]:
if (cudf.dtype(int_dtype).itemsize * 8) >= min_size:
if np.iinfo(int_dtype).min <= x <= np.iinfo(int_dtype).max:
return int_dtype
# resort to using `int64` and let numpy raise appropriate exception:
return np.int64(x).dtype
def min_unsigned_type(x, min_size=8):
"""
Return the smallest *unsigned* integer dtype
that can represent the integer ``x``
"""
for int_dtype in np.sctypes["uint"]:
if (cudf.dtype(int_dtype).itemsize * 8) >= min_size:
if 0 <= x <= np.iinfo(int_dtype).max:
return int_dtype
# resort to using `uint64` and let numpy raise appropriate exception:
return np.uint64(x).dtype
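# Illustrative usage sketch (hypothetical helper, not used elsewhere in this
# module): 300 does not fit in 8 bits, so the smallest signed and unsigned
# integer types that can hold it are 16 bits wide.
def _example_min_integer_types():
    assert np.dtype(min_signed_type(300)) == np.dtype("int16")
    assert np.dtype(min_unsigned_type(300)) == np.dtype("uint16")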
def min_column_type(x, expected_type):
"""
Return the smallest dtype which can represent all
elements of the `NumericalColumn` `x`
If the column is not a subtype of `np.signedinteger` or `np.floating`
returns the same dtype as the dtype of `x` without modification
"""
if not isinstance(x, cudf.core.column.NumericalColumn):
raise TypeError("Argument x must be of type column.NumericalColumn")
if x.valid_count == 0:
return x.dtype
if np.issubdtype(x.dtype, np.floating):
return get_min_float_dtype(x)
elif np.issubdtype(expected_type, np.integer):
max_bound_dtype = np.min_scalar_type(x.max())
min_bound_dtype = np.min_scalar_type(x.min())
result_type = np.promote_types(max_bound_dtype, min_bound_dtype)
else:
result_type = x.dtype
return cudf.dtype(result_type)
def get_min_float_dtype(col):
max_bound_dtype = np.min_scalar_type(float(col.max()))
min_bound_dtype = np.min_scalar_type(float(col.min()))
result_type = np.promote_types(
"float32", np.promote_types(max_bound_dtype, min_bound_dtype)
)
return cudf.dtype(result_type)
def is_mixed_with_object_dtype(lhs, rhs):
if cudf.api.types.is_categorical_dtype(lhs.dtype):
return is_mixed_with_object_dtype(lhs.dtype.categories, rhs)
elif cudf.api.types.is_categorical_dtype(rhs.dtype):
return is_mixed_with_object_dtype(lhs, rhs.dtype.categories)
return (lhs.dtype == "object" and rhs.dtype != "object") or (
rhs.dtype == "object" and lhs.dtype != "object"
)
def get_time_unit(obj):
if isinstance(
obj,
(
cudf.core.column.datetime.DatetimeColumn,
cudf.core.column.timedelta.TimeDeltaColumn,
),
):
return obj.time_unit
time_unit, _ = np.datetime_data(obj.dtype)
return time_unit
def _get_nan_for_dtype(dtype):
dtype = cudf.dtype(dtype)
if pd.api.types.is_datetime64_dtype(
dtype
) or pd.api.types.is_timedelta64_dtype(dtype):
time_unit, _ = np.datetime_data(dtype)
return dtype.type("nat", time_unit)
elif dtype.kind == "f":
return dtype.type("nan")
else:
return np.float64("nan")
def get_allowed_combinations_for_operator(dtype_l, dtype_r, op):
error = TypeError(
f"{op} not supported between {dtype_l} and {dtype_r} scalars"
)
to_numpy_ops = {
"__add__": _ADD_TYPES,
"__radd__": _ADD_TYPES,
"__sub__": _SUB_TYPES,
"__rsub__": _SUB_TYPES,
"__mul__": _MUL_TYPES,
"__rmul__": _MUL_TYPES,
"__floordiv__": _FLOORDIV_TYPES,
"__rfloordiv__": _FLOORDIV_TYPES,
"__truediv__": _TRUEDIV_TYPES,
"__rtruediv__": _TRUEDIV_TYPES,
"__mod__": _MOD_TYPES,
"__rmod__": _MOD_TYPES,
"__pow__": _POW_TYPES,
"__rpow__": _POW_TYPES,
}
allowed = to_numpy_ops.get(op, op)
# special rules for string
if dtype_l == "object" or dtype_r == "object":
if (dtype_l == dtype_r == "object") and op == "__add__":
return "str"
else:
raise error
# Check if we can directly operate
for valid_combo in allowed:
ltype, rtype, outtype = valid_combo
if np.can_cast(dtype_l.char, ltype) and np.can_cast(
dtype_r.char, rtype
):
return outtype
raise error
def find_common_type(dtypes):
"""
Wrapper over np.find_common_type to handle special cases
Corner cases:
1. "M8", "M8" -> "M8" | "m8", "m8" -> "m8"
Parameters
----------
dtypes : iterable, sequence of dtypes to find common types
Returns
-------
dtype : np.dtype optional, the result from np.find_common_type,
None if input is empty
"""
if len(dtypes) == 0:
return None
# Early exit for categoricals since they're not hashable and therefore
# can't be put in a set.
if any(cudf.api.types.is_categorical_dtype(dtype) for dtype in dtypes):
if all(
(
cudf.api.types.is_categorical_dtype(dtype)
and (not dtype.ordered if hasattr(dtype, "ordered") else True)
)
for dtype in dtypes
):
if len({dtype._categories.dtype for dtype in dtypes}) == 1:
return cudf.CategoricalDtype(
cudf.core.column.concat_columns(
[dtype._categories for dtype in dtypes]
).unique()
)
else:
raise ValueError(
"Only unordered categories of the same underlying type "
"may be coerced to a common type."
)
else:
# TODO: Should this be an error case (mixing categorical with other
# dtypes) or should this return object? Unclear if we have enough
# information to decide right now, may have to come back to this as
# usage of find_common_type increases.
return cudf.dtype("O")
# Aggregate same types
dtypes = set(dtypes)
if any(cudf.api.types.is_decimal_dtype(dtype) for dtype in dtypes):
if all(
cudf.api.types.is_decimal_dtype(dtype)
or cudf.api.types.is_numeric_dtype(dtype)
for dtype in dtypes
):
return _find_common_type_decimal(
[
dtype
for dtype in dtypes
if cudf.api.types.is_decimal_dtype(dtype)
]
)
else:
return cudf.dtype("O")
if any(cudf.api.types.is_list_dtype(dtype) for dtype in dtypes):
if len(dtypes) == 1:
            return next(iter(dtypes))
else:
# TODO: As list dtypes allow casting
# to identical types, improve this logic of returning a
# common dtype, for example:
# ListDtype(int64) & ListDtype(int32) common
# dtype could be ListDtype(int64).
raise NotImplementedError(
"Finding a common type for `ListDtype` is currently "
"not supported"
)
if any(cudf.api.types.is_struct_dtype(dtype) for dtype in dtypes):
if len(dtypes) == 1:
            return next(iter(dtypes))
else:
raise NotImplementedError(
"Finding a common type for `StructDtype` is currently "
"not supported"
)
# Corner case 1:
# Resort to np.result_type to handle "M" and "m" types separately
dt_dtypes = set(
filter(lambda t: cudf.api.types.is_datetime_dtype(t), dtypes)
)
if len(dt_dtypes) > 0:
dtypes = dtypes - dt_dtypes
dtypes.add(np.result_type(*dt_dtypes))
td_dtypes = set(
filter(lambda t: pd.api.types.is_timedelta64_dtype(t), dtypes)
)
if len(td_dtypes) > 0:
dtypes = dtypes - td_dtypes
dtypes.add(np.result_type(*td_dtypes))
common_dtype = np.find_common_type(list(dtypes), [])
if common_dtype == np.dtype("float16"):
return cudf.dtype("float32")
return cudf.dtype(common_dtype)
def _dtype_pandas_compatible(dtype):
"""
    A utility function that returns `str` instead of the `object`
    dtype when pandas compatibility mode is enabled.
"""
if cudf.get_option("mode.pandas_compatible") and dtype == cudf.dtype("O"):
return "str"
return dtype
def _can_cast(from_dtype, to_dtype):
"""
Utility function to determine if we can cast
from `from_dtype` to `to_dtype`. This function primarily calls
`np.can_cast` but with some special handling around
cudf specific dtypes.
"""
if cudf.utils.utils.is_na_like(from_dtype):
return True
if isinstance(from_dtype, type):
from_dtype = cudf.dtype(from_dtype)
if isinstance(to_dtype, type):
to_dtype = cudf.dtype(to_dtype)
# TODO : Add precision & scale checking for
# decimal types in future
if isinstance(from_dtype, cudf.core.dtypes.DecimalDtype):
if isinstance(to_dtype, cudf.core.dtypes.DecimalDtype):
return True
elif isinstance(to_dtype, np.dtype):
if to_dtype.kind in {"i", "f", "u", "U", "O"}:
return True
else:
return False
elif isinstance(from_dtype, np.dtype):
if isinstance(to_dtype, np.dtype):
return np.can_cast(from_dtype, to_dtype)
elif isinstance(to_dtype, cudf.core.dtypes.DecimalDtype):
if from_dtype.kind in {"i", "f", "u", "U", "O"}:
return True
else:
return False
elif isinstance(to_dtype, cudf.core.types.CategoricalDtype):
return True
else:
return False
elif isinstance(from_dtype, cudf.core.dtypes.ListDtype):
# TODO: Add level based checks too once casting of
# list columns is supported
if isinstance(to_dtype, cudf.core.dtypes.ListDtype):
return np.can_cast(from_dtype.leaf_type, to_dtype.leaf_type)
else:
return False
elif isinstance(from_dtype, cudf.core.dtypes.CategoricalDtype):
if isinstance(to_dtype, cudf.core.dtypes.CategoricalDtype):
return True
elif isinstance(to_dtype, np.dtype):
return np.can_cast(from_dtype._categories.dtype, to_dtype)
else:
return False
else:
return np.can_cast(from_dtype, to_dtype)
def _maybe_convert_to_default_type(dtype):
"""Convert `dtype` to default if specified by user.
If not specified, return as is.
"""
if cudf.get_option("default_integer_bitwidth"):
if cudf.api.types.is_signed_integer_dtype(dtype):
return cudf.dtype(
f'i{cudf.get_option("default_integer_bitwidth")//8}'
)
elif cudf.api.types.is_unsigned_integer_dtype(dtype):
return cudf.dtype(
f'u{cudf.get_option("default_integer_bitwidth")//8}'
)
if cudf.get_option(
"default_float_bitwidth"
) and cudf.api.types.is_float_dtype(dtype):
return cudf.dtype(f'f{cudf.get_option("default_float_bitwidth")//8}')
return dtype
def _dtype_can_hold_range(rng: range, dtype: np.dtype) -> bool:
if not len(rng):
return True
return np.can_cast(rng[0], dtype) and np.can_cast(rng[-1], dtype)
def _dtype_can_hold_element(dtype: np.dtype, element) -> bool:
if dtype.kind in {"i", "u"}:
if isinstance(element, range):
if _dtype_can_hold_range(element, dtype):
return True
return False
elif is_integer(element) or (
is_float(element) and element.is_integer()
):
info = np.iinfo(dtype)
if info.min <= element <= info.max:
return True
return False
elif dtype.kind == "f":
if is_integer(element) or is_float(element):
casted = dtype.type(element)
if np.isnan(casted) or casted == element:
return True
# otherwise e.g. overflow see TestCoercionFloat32
return False
elif dtype.kind == "b":
if is_bool(element):
return True
return False
raise NotImplementedError(f"Unsupported dtype: {dtype}")
def _get_base_dtype(dtype: DtypeObj) -> DtypeObj:
# TODO: replace the use of this function with just `dtype.base`
# when Pandas 2.1.0 is the minimum version we support:
# https://github.com/pandas-dev/pandas/pull/52706
if isinstance(dtype, pd.DatetimeTZDtype):
return np.dtype(f"<M8[{dtype.unit}]")
else:
return dtype.base
# Type dispatch loops similar to what are found in `np.add.types`
# In NumPy, whether or not an op can be performed between two
# operands is determined by checking to see if NumPy has a c/c++
# loop specifically for adding those two operands built in. If
# not it will search lists like these for a loop for types that
# the operands can be safely cast to. These are those lookups,
# modified slightly for cuDF's rules
_ADD_TYPES = [
"???",
"BBB",
"HHH",
"III",
"LLL",
"bbb",
"hhh",
"iii",
"lll",
"fff",
"ddd",
"mMM",
"MmM",
"mmm",
"LMM",
"MLM",
"Lmm",
"mLm",
]
_SUB_TYPES = [
"BBB",
"HHH",
"III",
"LLL",
"bbb",
"hhh",
"iii",
"lll",
"fff",
"ddd",
"???",
"MMm",
"mmm",
"MmM",
"MLM",
"mLm",
"Lmm",
]
_MUL_TYPES = [
"???",
"BBB",
"HHH",
"III",
"LLL",
"bbb",
"hhh",
"iii",
"lll",
"fff",
"ddd",
"mLm",
"Lmm",
"mlm",
"lmm",
]
_FLOORDIV_TYPES = [
"bbb",
"BBB",
"HHH",
"III",
"LLL",
"hhh",
"iii",
"lll",
"fff",
"ddd",
"???",
"mqm",
"mdm",
"mmq",
]
_TRUEDIV_TYPES = ["fff", "ddd", "mqm", "mmd", "mLm"]
_MOD_TYPES = [
"bbb",
"BBB",
"hhh",
"HHH",
"iii",
"III",
"lll",
"LLL",
"fff",
"ddd",
"mmm",
]
_POW_TYPES = [
"bbb",
"BBB",
"hhh",
"HHH",
"iii",
"III",
"lll",
"LLL",
"fff",
"ddd",
]
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf/utils
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/metadata/orc_column_statistics.proto
|
syntax = "proto2";
message IntegerStatistics {
optional sint64 minimum = 1;
optional sint64 maximum = 2;
optional sint64 sum = 3;
}
message DoubleStatistics {
optional double minimum = 1;
optional double maximum = 2;
optional double sum = 3;
}
message StringStatistics {
optional string minimum = 1;
optional string maximum = 2;
// sum will store the total length of all strings in a stripe
optional sint64 sum = 3;
}
message BucketStatistics {
repeated uint64 count = 1 [packed=true];
}
message DecimalStatistics {
optional string minimum = 1;
optional string maximum = 2;
optional string sum = 3;
}
message DateStatistics {
// min,max values saved as days since epoch
optional sint32 minimum = 1;
optional sint32 maximum = 2;
}
message TimestampStatistics {
// min,max values saved as milliseconds since epoch
optional sint64 minimum = 1;
optional sint64 maximum = 2;
optional sint64 minimumUtc = 3;
optional sint64 maximumUtc = 4;
}
message BinaryStatistics {
// sum will store the total binary blob length in a stripe
optional sint64 sum = 1;
}
message ColumnStatistics {
optional uint64 numberOfValues = 1;
optional IntegerStatistics intStatistics = 2;
optional DoubleStatistics doubleStatistics = 3;
optional StringStatistics stringStatistics = 4;
optional BucketStatistics bucketStatistics = 5;
optional DecimalStatistics decimalStatistics = 6;
optional DateStatistics dateStatistics = 7;
optional BinaryStatistics binaryStatistics = 8;
optional TimestampStatistics timestampStatistics = 9;
optional bool hasNull = 10;
}
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf/utils
|
rapidsai_public_repos/cudf/python/cudf/cudf/utils/metadata/__init__.py
|
# Copyright (c) 2020, NVIDIA CORPORATION.
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_sorting.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
import string
from itertools import product
import numpy as np
import pandas as pd
import pytest
from cudf import DataFrame, Series
from cudf.core.column import NumericalColumn
from cudf.testing._utils import (
DATETIME_TYPES,
NUMERIC_TYPES,
assert_eq,
assert_exceptions_equal,
expect_warning_if,
)
sort_nelem_args = [2, 257]
sort_dtype_args = [
np.int32,
np.int64,
np.uint32,
np.uint64,
np.float32,
np.float64,
]
sort_slice_args = [slice(1, None), slice(None, -1), slice(1, -1)]
@pytest.mark.parametrize(
"nelem,dtype", list(product(sort_nelem_args, sort_dtype_args))
)
def test_dataframe_sort_values(nelem, dtype):
np.random.seed(0)
df = DataFrame()
df["a"] = aa = (100 * np.random.random(nelem)).astype(dtype)
df["b"] = bb = (100 * np.random.random(nelem)).astype(dtype)
sorted_df = df.sort_values(by="a")
# Check
sorted_index = np.argsort(aa, kind="mergesort")
assert_eq(sorted_df.index.values, sorted_index)
assert_eq(sorted_df["a"].values, aa[sorted_index])
assert_eq(sorted_df["b"].values, bb[sorted_index])
@pytest.mark.parametrize("ignore_index", [True, False])
@pytest.mark.parametrize("index", ["a", "b", ["a", "b"]])
def test_dataframe_sort_values_ignore_index(index, ignore_index):
gdf = DataFrame(
{"a": [1, 3, 5, 2, 4], "b": [1, 1, 2, 2, 3], "c": [9, 7, 7, 7, 1]}
)
gdf = gdf.set_index(index)
pdf = gdf.to_pandas()
expect = pdf.sort_values(list(pdf.columns), ignore_index=ignore_index)
got = gdf.sort_values((gdf.columns), ignore_index=ignore_index)
assert_eq(expect, got)
@pytest.mark.parametrize("ignore_index", [True, False])
def test_series_sort_values_ignore_index(ignore_index):
gsr = Series([1, 3, 5, 2, 4])
psr = gsr.to_pandas()
expect = psr.sort_values(ignore_index=ignore_index)
got = gsr.sort_values(ignore_index=ignore_index)
assert_eq(expect, got)
@pytest.mark.parametrize(
"nelem,sliceobj", list(product([10, 100], sort_slice_args))
)
def test_dataframe_sort_values_sliced(nelem, sliceobj):
np.random.seed(0)
df = pd.DataFrame()
df["a"] = np.random.random(nelem)
expect = df[sliceobj]["a"].sort_values()
gdf = DataFrame.from_pandas(df)
got = gdf[sliceobj]["a"].sort_values()
assert (got.to_pandas() == expect).all()
@pytest.mark.parametrize(
"nelem,dtype,asc",
list(product(sort_nelem_args, sort_dtype_args, [True, False])),
)
def test_series_argsort(nelem, dtype, asc):
np.random.seed(0)
sr = Series((100 * np.random.random(nelem)).astype(dtype))
res = sr.argsort(ascending=asc)
if asc:
expected = np.argsort(sr.to_numpy(), kind="mergesort")
else:
expected = np.argsort(sr.to_numpy() * -1, kind="mergesort")
np.testing.assert_array_equal(expected, res.to_numpy())
@pytest.mark.parametrize(
"nelem,asc", list(product(sort_nelem_args, [True, False]))
)
def test_series_sort_index(nelem, asc):
np.random.seed(0)
sr = Series(100 * np.random.random(nelem))
psr = sr.to_pandas()
expected = psr.sort_index(ascending=asc)
got = sr.sort_index(ascending=asc)
assert_eq(expected, got)
@pytest.mark.parametrize("data", [[0, 1, 1, 2, 2, 2, 3, 3], [0], [1, 2, 3]])
@pytest.mark.parametrize("n", [-100, -50, -12, -2, 0, 1, 2, 3, 4, 7])
def test_series_nlargest(data, n):
"""Indirectly tests Series.sort_values()"""
sr = Series(data)
psr = pd.Series(data)
assert_eq(sr.nlargest(n), psr.nlargest(n))
assert_eq(sr.nlargest(n, keep="last"), psr.nlargest(n, keep="last"))
assert_exceptions_equal(
lfunc=psr.nlargest,
rfunc=sr.nlargest,
lfunc_args_and_kwargs=([], {"n": 3, "keep": "what"}),
rfunc_args_and_kwargs=([], {"n": 3, "keep": "what"}),
)
@pytest.mark.parametrize("data", [[0, 1, 1, 2, 2, 2, 3, 3], [0], [1, 2, 3]])
@pytest.mark.parametrize("n", [-100, -50, -12, -2, 0, 1, 2, 3, 4, 9])
def test_series_nsmallest(data, n):
"""Indirectly tests Series.sort_values()"""
sr = Series(data)
psr = pd.Series(data)
assert_eq(sr.nsmallest(n), psr.nsmallest(n))
assert_eq(
sr.nsmallest(n, keep="last").sort_index(),
psr.nsmallest(n, keep="last").sort_index(),
)
assert_exceptions_equal(
lfunc=psr.nsmallest,
rfunc=sr.nsmallest,
lfunc_args_and_kwargs=([], {"n": 3, "keep": "what"}),
rfunc_args_and_kwargs=([], {"n": 3, "keep": "what"}),
)
@pytest.mark.parametrize("nelem,n", [(1, 1), (100, 100), (10, 5), (100, 10)])
@pytest.mark.parametrize("op", ["nsmallest", "nlargest"])
@pytest.mark.parametrize("columns", ["a", ["b", "a"]])
def test_dataframe_nlargest_nsmallest(nelem, n, op, columns):
np.random.seed(0)
aa = np.random.random(nelem)
bb = np.random.random(nelem)
df = DataFrame({"a": aa, "b": bb})
pdf = df.to_pandas()
assert_eq(getattr(df, op)(n, columns), getattr(pdf, op)(n, columns))
@pytest.mark.parametrize(
"counts,sliceobj", list(product([(10, 5), (100, 10)], sort_slice_args))
)
def test_dataframe_nlargest_sliced(counts, sliceobj):
nelem, n = counts
np.random.seed(0)
df = pd.DataFrame()
df["a"] = np.random.random(nelem)
df["b"] = np.random.random(nelem)
expect = df[sliceobj].nlargest(n, "a")
gdf = DataFrame.from_pandas(df)
got = gdf[sliceobj].nlargest(n, "a")
assert (got.to_pandas() == expect).all().all()
@pytest.mark.parametrize(
"counts,sliceobj", list(product([(10, 5), (100, 10)], sort_slice_args))
)
def test_dataframe_nsmallest_sliced(counts, sliceobj):
nelem, n = counts
np.random.seed(0)
df = pd.DataFrame()
df["a"] = np.random.random(nelem)
df["b"] = np.random.random(nelem)
expect = df[sliceobj].nsmallest(n, "a")
gdf = DataFrame.from_pandas(df)
got = gdf[sliceobj].nsmallest(n, "a")
assert (got.to_pandas() == expect).all().all()
@pytest.mark.parametrize("num_cols", [1, 2, 3, 5])
@pytest.mark.parametrize("num_rows", [0, 1, 2, 1000])
@pytest.mark.parametrize("dtype", NUMERIC_TYPES + DATETIME_TYPES)
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("na_position", ["first", "last"])
def test_dataframe_multi_column(
num_cols, num_rows, dtype, ascending, na_position
):
np.random.seed(0)
by = list(string.ascii_lowercase[:num_cols])
pdf = pd.DataFrame()
for i in range(5):
colname = string.ascii_lowercase[i]
data = np.random.randint(0, 26, num_rows).astype(dtype)
pdf[colname] = data
gdf = DataFrame.from_pandas(pdf)
got = gdf.sort_values(by, ascending=ascending, na_position=na_position)
expect = pdf.sort_values(by, ascending=ascending, na_position=na_position)
assert_eq(
got[by].reset_index(drop=True), expect[by].reset_index(drop=True)
)
@pytest.mark.parametrize("num_cols", [1, 2, 3])
@pytest.mark.parametrize("num_rows", [0, 1, 2, 3, 5])
@pytest.mark.parametrize("dtype", ["float32", "float64"])
@pytest.mark.parametrize("nulls", ["some", "all"])
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("na_position", ["first", "last"])
def test_dataframe_multi_column_nulls(
num_cols, num_rows, dtype, nulls, ascending, na_position
):
np.random.seed(0)
by = list(string.ascii_lowercase[:num_cols])
pdf = pd.DataFrame()
for i in range(3):
colname = string.ascii_lowercase[i]
data = np.random.randint(0, 26, num_rows).astype(dtype)
if nulls == "some":
idx = np.array([], dtype="int64")
if num_rows > 0:
idx = np.random.choice(
num_rows, size=int(num_rows / 4), replace=False
)
data[idx] = np.nan
elif nulls == "all":
data[:] = np.nan
pdf[colname] = data
gdf = DataFrame.from_pandas(pdf)
got = gdf.sort_values(by, ascending=ascending, na_position=na_position)
expect = pdf.sort_values(by, ascending=ascending, na_position=na_position)
assert_eq(
got[by].reset_index(drop=True), expect[by].reset_index(drop=True)
)
@pytest.mark.parametrize(
"ascending", list(product((True, False), (True, False)))
)
@pytest.mark.parametrize("na_position", ["first", "last"])
def test_dataframe_multi_column_nulls_multiple_ascending(
ascending, na_position
):
pdf = pd.DataFrame(
{"a": [3, 1, None, 2, 2, None, 1], "b": [1, 2, 3, 4, 5, 6, 7]}
)
gdf = DataFrame.from_pandas(pdf)
expect = pdf.sort_values(
by=["a", "b"], ascending=ascending, na_position=na_position
)
actual = gdf.sort_values(
by=["a", "b"], ascending=ascending, na_position=na_position
)
assert_eq(actual, expect)
@pytest.mark.parametrize("nelem", [1, 100])
def test_series_nlargest_nelem(nelem):
np.random.seed(0)
elems = np.random.random(nelem)
gds = Series(elems).nlargest(nelem)
pds = pd.Series(elems).nlargest(nelem)
assert (pds == gds.to_pandas()).all().all()
@pytest.mark.parametrize("map_size", [1, 2, 8])
@pytest.mark.parametrize("nelem", [1, 10, 100])
@pytest.mark.parametrize("keep", [True, False])
def test_dataframe_scatter_by_map(map_size, nelem, keep):
strlist = ["dog", "cat", "fish", "bird", "pig", "fox", "cow", "goat"]
np.random.seed(0)
df = DataFrame()
df["a"] = np.random.choice(strlist[:map_size], nelem)
df["b"] = np.random.uniform(low=0, high=map_size, size=nelem)
df["c"] = np.random.randint(map_size, size=nelem)
df["d"] = df["a"].astype("category")
def _check_scatter_by_map(dfs, col):
assert len(dfs) == map_size
nrows = 0
name = col.name
for i, df in enumerate(dfs):
nrows += len(df)
if len(df) > 0:
# Make sure the column types were preserved
assert isinstance(df[name]._column, type(col._column))
try:
sr = df[name].astype(np.int32)
except ValueError:
sr = df[name]
assert sr.nunique() <= 1
if sr.nunique() == 1:
if isinstance(df[name]._column, NumericalColumn):
assert sr.iloc[0] == i
assert nrows == nelem
with pytest.warns(UserWarning):
_check_scatter_by_map(
df.scatter_by_map("a", map_size, keep_index=keep), df["a"]
)
_check_scatter_by_map(
df.scatter_by_map("b", map_size, keep_index=keep), df["b"]
)
_check_scatter_by_map(
df.scatter_by_map("c", map_size, keep_index=keep), df["c"]
)
with pytest.warns(UserWarning):
_check_scatter_by_map(
df.scatter_by_map("d", map_size, keep_index=keep), df["d"]
)
if map_size == 2 and nelem == 100:
with pytest.warns(UserWarning):
df.scatter_by_map("a") # Auto-detect map_size
with pytest.raises(ValueError):
with pytest.warns(UserWarning):
df.scatter_by_map("a", map_size=1, debug=True) # Bad map_size
# Test GenericIndex
df2 = df.set_index("c")
generic_result = df2.scatter_by_map("b", map_size, keep_index=keep)
_check_scatter_by_map(generic_result, df2["b"])
if keep:
for frame in generic_result:
isinstance(frame.index, type(df2.index))
# Test MultiIndex
df2 = df.set_index(["a", "c"])
multiindex_result = df2.scatter_by_map("b", map_size, keep_index=keep)
_check_scatter_by_map(multiindex_result, df2["b"])
if keep:
for frame in multiindex_result:
            assert isinstance(frame.index, type(df2.index))
@pytest.mark.parametrize(
"nelem,dtype", list(product(sort_nelem_args, sort_dtype_args))
)
@pytest.mark.parametrize(
"kind", ["quicksort", "mergesort", "heapsort", "stable"]
)
def test_dataframe_sort_values_kind(nelem, dtype, kind):
np.random.seed(0)
df = DataFrame()
df["a"] = aa = (100 * np.random.random(nelem)).astype(dtype)
df["b"] = bb = (100 * np.random.random(nelem)).astype(dtype)
with expect_warning_if(kind != "quicksort", UserWarning):
sorted_df = df.sort_values(by="a", kind=kind)
# Check
sorted_index = np.argsort(aa, kind="mergesort")
assert_eq(sorted_df.index.values, sorted_index)
assert_eq(sorted_df["a"].values, aa[sorted_index])
assert_eq(sorted_df["b"].values, bb[sorted_index])
@pytest.mark.parametrize("ids", [[-1, 0, 1, 0], [0, 2, 3, 0]])
def test_dataframe_scatter_by_map_7513(ids):
df = DataFrame({"id": ids, "val": [0, 1, 2, 3]})
with pytest.raises(ValueError):
df.scatter_by_map(df["id"])
def test_dataframe_scatter_by_map_empty():
df = DataFrame({"a": [], "b": []})
scattered = df.scatter_by_map(df["a"])
assert len(scattered) == 0
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_parquet.py
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
import datetime
import glob
import math
import os
import pathlib
import random
from contextlib import contextmanager
from io import BytesIO
from string import ascii_letters
import cupy
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from fsspec.core import get_fs_token_paths
from packaging import version
from pyarrow import fs as pa_fs, parquet as pq
import cudf
from cudf.core._compat import PANDAS_LT_153
from cudf.io.parquet import (
ParquetDatasetWriter,
ParquetWriter,
merge_parquet_filemetadata,
)
from cudf.testing import dataset_generator as dg
from cudf.testing._utils import (
TIMEDELTA_TYPES,
assert_eq,
assert_exceptions_equal,
set_random_null_mask_inplace,
)
@contextmanager
def _hide_pyarrow_parquet_cpu_warnings(engine):
if engine == "pyarrow":
with pytest.warns(
UserWarning,
match="Using CPU via PyArrow to read Parquet dataset. This option "
"is both inefficient and unstable!",
):
yield
else:
yield
@pytest.fixture(scope="module")
def datadir(datadir):
return datadir / "parquet"
@pytest.fixture(params=[1, 5, 10, 100000])
def simple_pdf(request):
types = [
"bool",
"int8",
"int16",
"int32",
"int64",
"uint8",
"uint16",
# "uint32", pandas promotes uint32 to int64
# https://issues.apache.org/jira/browse/ARROW-9215
"uint64",
"float32",
"float64",
]
nrows = request.param
# Create a pandas dataframe with random data of mixed types
test_pdf = pd.DataFrame(
{
f"col_{typ}": np.random.randint(0, nrows, nrows).astype(typ)
for typ in types
},
# Need to ensure that this index is not a RangeIndex to get the
# expected round-tripping behavior from Parquet reader/writer.
index=pd.Index(list(range(nrows))),
)
# Delete the name of the column index, and rename the row index
test_pdf.columns.name = None
test_pdf.index.name = "test_index"
return test_pdf
@pytest.fixture
def simple_gdf(simple_pdf):
return cudf.DataFrame.from_pandas(simple_pdf)
def build_pdf(num_columns, day_resolution_timestamps):
types = [
"bool",
"int8",
"int16",
"int32",
"int64",
"uint8",
"uint16",
# "uint32", pandas promotes uint32 to int64
# https://issues.apache.org/jira/browse/ARROW-9215
"uint64",
"float32",
"float64",
"datetime64[ms]",
"datetime64[us]",
"str",
]
nrows = num_columns.param
# Create a pandas dataframe with random data of mixed types
test_pdf = pd.DataFrame(
{
f"col_{typ}": np.random.randint(0, nrows, nrows).astype(typ)
for typ in types
},
# Need to ensure that this index is not a RangeIndex to get the
# expected round-tripping behavior from Parquet reader/writer.
index=pd.Index(list(range(nrows))),
)
# Delete the name of the column index, and rename the row index
test_pdf.columns.name = None
test_pdf.index.name = "test_index"
# make datetime64's a little more interesting by increasing the range of
# dates note that pandas will convert these to ns timestamps, so care is
# taken to avoid overflowing a ns timestamp. There is also the ability to
# request timestamps be whole days only via `day_resolution_timestamps`.
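    # For example, with "datetime64[ms]": nsDivisor=1_000_000 keeps the raw
    # values below 2**63 / 1e6 milliseconds, so converting back to nanoseconds
    # stays within int64, and dayModulus=86_400_000 (ms per day) rounds values
    # down to whole days when day_resolution_timestamps is requested.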
for t in [
{
"name": "datetime64[ms]",
"nsDivisor": 1000000,
"dayModulus": 86400000,
},
{
"name": "datetime64[us]",
"nsDivisor": 1000,
"dayModulus": 86400000000,
},
]:
data = [
np.random.randint(0, (0x7FFFFFFFFFFFFFFF / t["nsDivisor"]))
for i in range(nrows)
]
if day_resolution_timestamps:
data = [int(d / t["dayModulus"]) * t["dayModulus"] for d in data]
test_pdf["col_" + t["name"]] = pd.Series(
np.asarray(data, dtype=t["name"])
)
# Create non-numeric categorical data otherwise parquet may typecast it
data = [ascii_letters[np.random.randint(0, 52)] for i in range(nrows)]
test_pdf["col_category"] = pd.Series(data, dtype="category")
# Create non-numeric str data
data = [ascii_letters[np.random.randint(0, 52)] for i in range(nrows)]
test_pdf["col_str"] = pd.Series(data, dtype="str")
return test_pdf
@pytest.fixture(params=[0, 1, 10, 10000])
def pdf(request):
return build_pdf(request, False)
@pytest.fixture(params=[0, 1, 10, 10000])
def pdf_day_timestamps(request):
return build_pdf(request, True)
@pytest.fixture
def gdf(pdf):
return cudf.DataFrame.from_pandas(pdf)
@pytest.fixture
def gdf_day_timestamps(pdf_day_timestamps):
return cudf.DataFrame.from_pandas(pdf_day_timestamps)
@pytest.fixture(params=["snappy", "gzip", "brotli", None, np.str_("snappy")])
def parquet_file(request, tmp_path_factory, pdf):
fname = tmp_path_factory.mktemp("parquet") / (
str(request.param) + "_test.parquet"
)
pdf.to_parquet(fname, engine="pyarrow", compression=request.param)
return fname
@pytest.fixture(scope="module")
def rdg_seed():
return int(os.environ.get("TEST_CUDF_RDG_SEED", "42"))
def make_pdf(nrows, ncolumns=1, nvalids=0, dtype=np.int64):
test_pdf = pd.DataFrame(
[list(range(ncolumns * i, ncolumns * (i + 1))) for i in range(nrows)],
columns=pd.Index(["foo"], name="bar"),
# Need to ensure that this index is not a RangeIndex to get the
# expected round-tripping behavior from Parquet reader/writer.
index=pd.Index(list(range(nrows))),
)
test_pdf.columns.name = None
# Randomly but reproducibly mark subset of rows as invalid
random.seed(1337)
mask = random.sample(range(nrows), nvalids)
test_pdf[test_pdf.index.isin(mask)] = np.NaN
return test_pdf
@pytest.fixture
def parquet_path_or_buf(datadir):
fname = datadir / "spark_timestamp.snappy.parquet"
try:
with open(fname, "rb") as f:
buffer = BytesIO(f.read())
except Exception as excpr:
if type(excpr).__name__ == "FileNotFoundError":
pytest.skip(".parquet file is not found")
else:
print(type(excpr).__name__)
def _make_parquet_path_or_buf(src):
if src == "filepath":
return str(fname)
if src == "pathobj":
return fname
if src == "bytes_io":
return buffer
if src == "bytes":
return buffer.getvalue()
if src == "url":
return fname.as_uri()
raise ValueError("Invalid source type")
yield _make_parquet_path_or_buf
@pytest.fixture(scope="module")
def large_int64_gdf():
return cudf.DataFrame.from_pandas(pd.DataFrame({"col": range(0, 1 << 20)}))
@pytest.mark.filterwarnings("ignore:Using CPU")
@pytest.mark.parametrize("engine", ["pyarrow", "cudf"])
@pytest.mark.parametrize(
"columns",
[
["col_int8"],
["col_category"],
["col_int32", "col_float32"],
["col_int16", "col_float64", "col_int8"],
None,
],
)
def test_parquet_reader_basic(parquet_file, columns, engine):
expect = pd.read_parquet(parquet_file, columns=columns)
got = cudf.read_parquet(parquet_file, engine=engine, columns=columns)
# PANDAS returns category objects whereas cuDF returns hashes
if engine == "cudf":
if "col_category" in expect.columns:
expect = expect.drop(columns=["col_category"])
if "col_category" in got.columns:
got = got.drop(columns=["col_category"])
assert_eq(expect, got)
@pytest.mark.filterwarnings("ignore:Using CPU")
@pytest.mark.parametrize("engine", ["cudf"])
def test_parquet_reader_empty_pandas_dataframe(tmpdir, engine):
df = pd.DataFrame()
fname = tmpdir.join("test_pq_reader_empty_pandas_dataframe.parquet")
df.to_parquet(fname)
assert os.path.exists(fname)
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname, engine=engine)
expect = expect.reset_index(drop=True)
got = got.reset_index(drop=True)
assert_eq(expect, got)
@pytest.mark.parametrize("has_null", [False, True])
def test_parquet_reader_strings(tmpdir, has_null):
df = pd.DataFrame(
[(1, "aaa", 9.0), (2, "bbb", 8.0), (3, "ccc", 7.0)],
columns=pd.Index(list("abc")),
)
if has_null:
df.at[1, "b"] = None
fname = tmpdir.join("test_pq_reader_strings.parquet")
df.to_parquet(fname)
assert os.path.exists(fname)
gdf = cudf.read_parquet(fname, engine="cudf")
assert gdf["b"].dtype == np.dtype("object")
assert_eq(gdf["b"], df["b"])
@pytest.mark.parametrize("columns", [None, ["b"]])
@pytest.mark.parametrize("index_col", ["b", "Nameless", None])
def test_parquet_reader_index_col(tmpdir, index_col, columns):
df = pd.DataFrame({"a": range(3), "b": range(3, 6), "c": range(6, 9)})
if index_col is None:
# No index column
df.reset_index(drop=True, inplace=True)
elif index_col == "Nameless":
# Index column but no name
df.set_index("a", inplace=True)
df.index.name = None
else:
# Index column as normal
df.set_index(index_col, inplace=True)
fname = tmpdir.join("test_pq_reader_index_col.parquet")
# PANDAS' PyArrow backend always writes the index unless disabled
df.to_parquet(fname, index=(index_col is not None))
assert os.path.exists(fname)
pdf = pd.read_parquet(fname, columns=columns)
gdf = cudf.read_parquet(fname, engine="cudf", columns=columns)
assert_eq(pdf, gdf, check_categorical=False)
@pytest.mark.parametrize("pandas_compat", [True, False])
@pytest.mark.parametrize(
"columns", [["a"], ["d"], ["a", "b"], ["a", "d"], None]
)
def test_parquet_reader_pandas_metadata(tmpdir, columns, pandas_compat):
df = pd.DataFrame(
{
"a": range(6, 9),
"b": range(3, 6),
"c": range(6, 9),
"d": ["abc", "def", "xyz"],
}
)
df.set_index("b", inplace=True)
fname = tmpdir.join("test_pq_reader_pandas_metadata.parquet")
df.to_parquet(fname)
assert os.path.exists(fname)
    # PANDAS `read_parquet()` and PyArrow `read_pandas()` always include the index
# Instead, directly use PyArrow to optionally omit the index
expect = pa.parquet.read_table(
fname, columns=columns, use_pandas_metadata=pandas_compat
).to_pandas()
got = cudf.read_parquet(
fname, columns=columns, use_pandas_metadata=pandas_compat
)
if pandas_compat or columns is None or "b" in columns:
assert got.index.name == "b"
else:
assert got.index.name is None
assert_eq(expect, got, check_categorical=False)
@pytest.mark.parametrize("pandas_compat", [True, False])
@pytest.mark.parametrize("as_bytes", [True, False])
def test_parquet_range_index_pandas_metadata(tmpdir, pandas_compat, as_bytes):
df = pd.DataFrame(
{"a": range(6, 9), "b": ["abc", "def", "xyz"]},
index=pd.RangeIndex(3, 6, 1, name="c"),
)
fname = tmpdir.join("test_parquet_range_index_pandas_metadata")
df.to_parquet(fname)
assert os.path.exists(fname)
# PANDAS `read_parquet()` and PyArrow `read_pandas()` always includes index
# Instead, directly use PyArrow to optionally omit the index
expect = pa.parquet.read_table(
fname, use_pandas_metadata=pandas_compat
).to_pandas()
if as_bytes:
# Make sure we can handle RangeIndex parsing
# in pandas when the input is `bytes`
with open(fname, "rb") as f:
got = cudf.read_parquet(
f.read(), use_pandas_metadata=pandas_compat
)
else:
got = cudf.read_parquet(fname, use_pandas_metadata=pandas_compat)
assert_eq(expect, got)
def test_parquet_read_metadata(tmpdir, pdf):
if len(pdf) > 100:
pytest.skip("Skipping long setup test")
def num_row_groups(rows, group_size):
return max(1, (rows + (group_size - 1)) // group_size)
fname = tmpdir.join("metadata.parquet")
row_group_size = 5
pdf.to_parquet(fname, compression="snappy", row_group_size=row_group_size)
num_rows, row_groups, col_names = cudf.io.read_parquet_metadata(fname)
assert num_rows == len(pdf.index)
assert row_groups == num_row_groups(num_rows, row_group_size)
for a, b in zip(col_names, pdf.columns):
assert a == b
def test_parquet_read_filtered(tmpdir, rdg_seed):
# Generate data
fname = tmpdir.join("filtered.parquet")
dg.generate(
fname,
dg.Parameters(
num_rows=2048,
column_parameters=[
dg.ColumnParameters(
cardinality=40,
null_frequency=0.05,
generator=lambda g: [g.address.city() for _ in range(40)],
is_sorted=False,
),
dg.ColumnParameters(
40,
0.2,
lambda g: [g.person.age() for _ in range(40)],
True,
),
],
seed=rdg_seed,
),
format={"name": "parquet", "row_group_size": 64},
)
# Get dataframes to compare
df = cudf.read_parquet(fname)
df_filtered = cudf.read_parquet(fname, filters=[("1", ">", 60)])
# PyArrow's read_table function does row-group-level filtering in addition
# to applying given filters once the table has been read into memory.
# Because of this, we aren't using PyArrow as a reference for testing our
# row-group selection method since the only way to only select row groups
# with PyArrow is with the method we use and intend to test.
tbl_filtered = pq.read_table(
fname, filters=[("1", ">", 60)], use_legacy_dataset=False
)
assert_eq(cudf.io.read_parquet_metadata(fname)[1], 2048 / 64)
print(len(df_filtered))
print(len(tbl_filtered))
assert len(df_filtered) < len(df)
assert len(tbl_filtered) <= len(df_filtered)
def test_parquet_read_filtered_everything(tmpdir):
# Generate data
fname = tmpdir.join("filtered_everything.parquet")
df = pd.DataFrame({"x": range(10), "y": list("aabbccddee")})
df.to_parquet(fname, row_group_size=2)
# Check filter
df_filtered = cudf.read_parquet(fname, filters=[("x", "==", 12)])
assert_eq(len(df_filtered), 0)
assert_eq(df_filtered["x"].dtype, "int64")
assert_eq(df_filtered["y"].dtype, "object")
def test_parquet_read_filtered_multiple_files(tmpdir):
# Generate data
fname_0 = tmpdir.join("filtered_multiple_files_0.parquet")
df = pd.DataFrame({"x": range(10), "y": list("aabbccddee")})
df.to_parquet(fname_0, row_group_size=2)
fname_1 = tmpdir.join("filtered_multiple_files_1.parquet")
df = pd.DataFrame({"x": range(10), "y": list("aaccccddee")})
df.to_parquet(fname_1, row_group_size=2)
fname_2 = tmpdir.join("filtered_multiple_files_2.parquet")
df = pd.DataFrame(
{"x": [0, 1, 9, 9, 4, 5, 6, 7, 8, 9], "y": list("aabbzzddee")}
)
df.to_parquet(fname_2, row_group_size=2)
# Check filter
filtered_df = cudf.read_parquet(
[fname_0, fname_1, fname_2], filters=[("x", "==", 2)]
)
assert_eq(
filtered_df,
cudf.DataFrame({"x": [2, 2], "y": list("bc")}, index=[2, 2]),
)
@pytest.mark.skipif(
version.parse(pa.__version__) < version.parse("1.0.1"),
reason="pyarrow 1.0.0 needed for various operators and operand types",
)
@pytest.mark.parametrize(
"predicate,expected_len",
[
([[("x", "==", 0)], [("z", "==", 0)]], 2),
([("x", "==", 0), ("z", "==", 0)], 0),
([("x", "==", 0), ("z", "!=", 0)], 1),
([("y", "==", "c"), ("x", ">", 8)], 0),
([("y", "==", "c"), ("x", ">=", 5)], 1),
([[("y", "==", "c")], [("x", "<", 3)]], 5),
([[("x", "not in", (0, 9)), ("z", "not in", (4, 5))]], 6),
([[("y", "==", "c")], [("x", "in", (0, 9)), ("z", "in", (0, 9))]], 4),
([[("x", "==", 0)], [("x", "==", 1)], [("x", "==", 2)]], 3),
([[("x", "==", 0), ("z", "==", 9), ("y", "==", "a")]], 1),
],
)
def test_parquet_read_filtered_complex_predicate(
tmpdir, predicate, expected_len
):
# Generate data
fname = tmpdir.join("filtered_complex_predicate.parquet")
df = pd.DataFrame(
{
"x": range(10),
"y": list("aabbccddee"),
"z": reversed(range(10)),
}
)
df.to_parquet(fname, row_group_size=2)
# Check filters
df_filtered = cudf.read_parquet(fname, filters=predicate)
assert_eq(cudf.io.read_parquet_metadata(fname)[1], 10 / 2)
assert_eq(len(df_filtered), expected_len)
@pytest.mark.parametrize("row_group_size", [1, 5, 100])
def test_parquet_read_row_groups(tmpdir, pdf, row_group_size):
if len(pdf) > 100:
pytest.skip("Skipping long setup test")
if "col_category" in pdf.columns:
pdf = pdf.drop(columns=["col_category"])
fname = tmpdir.join("row_group.parquet")
pdf.to_parquet(fname, compression="gzip", row_group_size=row_group_size)
num_rows, row_groups, col_names = cudf.io.read_parquet_metadata(fname)
gdf = [cudf.read_parquet(fname, row_groups=[i]) for i in range(row_groups)]
gdf = cudf.concat(gdf)
assert_eq(pdf.reset_index(drop=True), gdf.reset_index(drop=True))
# first half rows come from the first source, rest from the second
gdf = cudf.read_parquet(
[fname, fname],
row_groups=[
list(range(row_groups // 2)),
list(range(row_groups // 2, row_groups)),
],
)
assert_eq(pdf.reset_index(drop=True), gdf.reset_index(drop=True))
@pytest.mark.parametrize("row_group_size", [1, 5, 100])
def test_parquet_read_row_groups_non_contiguous(tmpdir, pdf, row_group_size):
if len(pdf) > 100:
pytest.skip("Skipping long setup test")
fname = tmpdir.join("row_group.parquet")
pdf.to_parquet(fname, compression="gzip", row_group_size=row_group_size)
num_rows, row_groups, col_names = cudf.io.read_parquet_metadata(fname)
# alternate rows between the two sources
gdf = cudf.read_parquet(
[fname, fname],
row_groups=[
list(range(0, row_groups, 2)),
list(range(1, row_groups, 2)),
],
)
ref_df = [
cudf.read_parquet(fname, row_groups=i)
for i in list(range(0, row_groups, 2)) + list(range(1, row_groups, 2))
]
ref_df = cudf.concat(ref_df)
assert_eq(ref_df, gdf)
def test_parquet_reader_spark_timestamps(datadir):
fname = datadir / "spark_timestamp.snappy.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_spark_decimals(datadir):
fname = datadir / "spark_decimal.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
@pytest.mark.parametrize("columns", [["a"], ["b", "a"], None])
def test_parquet_reader_decimal128(datadir, columns):
fname = datadir / "nested_decimal128_file.parquet"
got = cudf.read_parquet(fname, columns=columns)
expect = cudf.read_parquet(fname, columns=columns)
assert_eq(expect, got)
def test_parquet_reader_microsecond_timestamps(datadir):
fname = datadir / "usec_timestamp.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_mixedcompression(datadir):
fname = datadir / "mixed_compression.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_select_columns(datadir):
fname = datadir / "nested_column_map.parquet"
expect = cudf.read_parquet(fname).to_pandas()[["value"]]
got = cudf.read_parquet(fname, columns=["value"])
assert_eq(expect, got)
def test_parquet_reader_invalids(tmpdir):
test_pdf = make_pdf(nrows=1000, nvalids=1000 // 4, dtype=np.int64)
fname = tmpdir.join("invalids.parquet")
test_pdf.to_parquet(fname, engine="pyarrow")
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_filenotfound(tmpdir):
with pytest.raises(FileNotFoundError):
cudf.read_parquet("TestMissingFile.parquet")
with pytest.raises(FileNotFoundError):
cudf.read_parquet(tmpdir.mkdir("cudf_parquet"))
def test_parquet_reader_local_filepath():
fname = "~/TestLocalFile.parquet"
if not os.path.isfile(fname):
pytest.skip("Local .parquet file is not found")
cudf.read_parquet(fname)
@pytest.mark.parametrize(
"src", ["filepath", "pathobj", "bytes_io", "bytes", "url"]
)
def test_parquet_reader_filepath_or_buffer(parquet_path_or_buf, src):
expect = pd.read_parquet(parquet_path_or_buf("filepath"))
got = cudf.read_parquet(parquet_path_or_buf(src))
assert_eq(expect, got)
def test_parquet_reader_arrow_nativefile(parquet_path_or_buf):
# Check that we can read a file opened with the
# Arrow FileSystem interface
expect = cudf.read_parquet(parquet_path_or_buf("filepath"))
fs, path = pa_fs.FileSystem.from_uri(parquet_path_or_buf("filepath"))
with fs.open_input_file(path) as fil:
got = cudf.read_parquet(fil)
assert_eq(expect, got)
@pytest.mark.parametrize("use_python_file_object", [True, False])
def test_parquet_reader_use_python_file_object(
parquet_path_or_buf, use_python_file_object
):
# Check that the non-default `use_python_file_object=True`
# option works as expected
expect = cudf.read_parquet(parquet_path_or_buf("filepath"))
fs, _, paths = get_fs_token_paths(parquet_path_or_buf("filepath"))
# Pass open fsspec file
with fs.open(paths[0], mode="rb") as fil:
got1 = cudf.read_parquet(
fil, use_python_file_object=use_python_file_object
)
assert_eq(expect, got1)
# Pass path only
got2 = cudf.read_parquet(
paths[0], use_python_file_object=use_python_file_object
)
assert_eq(expect, got2)
def create_parquet_source(df, src_type, fname):
if src_type == "filepath":
df.to_parquet(fname, engine="pyarrow")
return str(fname)
if src_type == "pathobj":
df.to_parquet(fname, engine="pyarrow")
return fname
if src_type == "bytes_io":
buffer = BytesIO()
df.to_parquet(buffer, engine="pyarrow")
return buffer
if src_type == "bytes":
buffer = BytesIO()
df.to_parquet(buffer, engine="pyarrow")
return buffer.getvalue()
if src_type == "url":
df.to_parquet(fname, engine="pyarrow")
return pathlib.Path(fname).as_uri()
@pytest.mark.parametrize(
"src", ["filepath", "pathobj", "bytes_io", "bytes", "url"]
)
def test_parquet_reader_multiple_files(tmpdir, src):
test_pdf1 = make_pdf(nrows=1000, nvalids=1000 // 2)
test_pdf2 = make_pdf(nrows=500)
expect = pd.concat([test_pdf1, test_pdf2])
src1 = create_parquet_source(test_pdf1, src, tmpdir.join("multi1.parquet"))
src2 = create_parquet_source(test_pdf2, src, tmpdir.join("multi2.parquet"))
got = cudf.read_parquet([src1, src2])
assert_eq(expect, got)
def test_parquet_reader_reordered_columns(tmpdir):
src = pd.DataFrame(
{"name": ["cow", None, "duck", "fish", None], "id": [0, 1, 2, 3, 4]}
)
fname = tmpdir.join("test_parquet_reader_reordered_columns.parquet")
src.to_parquet(fname)
assert os.path.exists(fname)
expect = pd.DataFrame(
{"id": [0, 1, 2, 3, 4], "name": ["cow", None, "duck", "fish", None]}
)
got = cudf.read_parquet(fname, columns=["id", "name"])
assert_eq(expect, got, check_dtype=False)
def test_parquet_reader_reordered_columns_mixed(tmpdir):
src = pd.DataFrame(
{
"name": ["cow", None, "duck", "fish", None],
"list0": [
[[1, 2], [3, 4]],
None,
[[5, 6], None],
[[1]],
[[5], [6, None, 8]],
],
"id": [0, 1, 2, 3, 4],
"list1": [
[[1, 2], [3, 4]],
[[0, 0]],
[[5, 6], [10, 12]],
[[1]],
[[5], [6, 8]],
],
}
)
fname = tmpdir.join("test_parquet_reader_reordered_columns.parquet")
src.to_parquet(fname)
assert os.path.exists(fname)
expect = pd.DataFrame(
{
"list1": [
[[1, 2], [3, 4]],
[[0, 0]],
[[5, 6], [10, 12]],
[[1]],
[[5], [6, 8]],
],
"id": [0, 1, 2, 3, 4],
"list0": [
[[1, 2], [3, 4]],
None,
[[5, 6], None],
[[1]],
[[5], [6, None, 8]],
],
"name": ["cow", None, "duck", "fish", None],
}
)
got = cudf.read_parquet(fname, columns=["list1", "id", "list0", "name"])
assert_eq(expect, got, check_dtype=False)
def test_parquet_reader_list_basic(tmpdir):
expect = pd.DataFrame({"a": [[[1, 2], [3, 4]], None, [[5, 6], None]]})
fname = tmpdir.join("test_parquet_reader_list_basic.parquet")
expect.to_parquet(fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_list_table(tmpdir):
expect = pd.DataFrame(
{
"a": [[[1, 2], [3, 4]], None, [[5, 6], None]],
"b": [[None, None], None, [None, None]],
"c": [[[1, 2, 3]], [[None]], [[], None]],
"d": [[[]], [[None]], [[1, 2, 3], None]],
"e": [[["cows"]], [["dogs"]], [["cats", "birds", "owls"], None]],
}
)
fname = tmpdir.join("test_parquet_reader_list_table.parquet")
expect.to_parquet(fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert pa.Table.from_pandas(expect).equals(got.to_arrow())
def int_gen(first_val, i):
"""
Returns an integer based on an absolute index and a starting value. Used
as input to `list_gen`.
"""
return int(i + first_val)
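# For illustration (values follow directly from the definition above):
# int_gen simply offsets the absolute index by the starting value, e.g.
#   int_gen(10, 3) == 13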
strings = [
"cats",
"dogs",
"cows",
"birds",
"fish",
"sheep",
"owls",
"bears",
"ants",
]
def string_gen(first_val, i):
"""
Returns a string based on an absolute index and a starting value. Used as
input to `list_gen`.
"""
return strings[int_gen(first_val, i) % len(strings)]
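# For illustration: string_gen maps the same offset into the `strings` list,
# e.g. string_gen(10, 3) -> strings[13 % 9] -> "fish"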
def list_row_gen(
gen, first_val, list_size, lists_per_row, include_validity=False
):
"""
Generate a single row for a List<List<>> column based on input parameters.
Parameters
----------
gen : A callable which generates an individual leaf element based on an
absolute index.
first_val : Generate the column as if it had started at 'first_val'
instead of 0.
list_size : Size of each generated list.
lists_per_row : Number of lists to generate per row.
include_validity : Whether or not to include nulls as part of the
column. If true, it will add a selection of nulls at both the
topmost row level and at the leaf level.
Returns
-------
    The generated row: a list of lists.
"""
def L(list_size, first_val):
return [
(gen(first_val, i) if i % 2 == 0 else None)
if include_validity
else (gen(first_val, i))
for i in range(list_size)
]
return [
(L(list_size, first_val + (list_size * i)) if i % 2 == 0 else None)
if include_validity
else L(list_size, first_val + (list_size * i))
for i in range(lists_per_row)
]
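# A small illustrative call (values follow directly from the generator above,
# with include_validity=False):
#   list_row_gen(int_gen, first_val=0, list_size=2, lists_per_row=3)
# produces a single row of the form
#   [[0, 1], [2, 3], [4, 5]]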
def list_gen(gen, num_rows, lists_per_row, list_size, include_validity=False):
"""
Generate a list column based on input parameters.
Parameters
----------
gen : A callable which generates an individual leaf element based on an
absolute index.
num_rows : Number of rows to generate.
lists_per_row : Number of lists to generate per row.
list_size : Size of each generated list.
include_validity : Whether or not to include nulls as part of the
column. If true, it will add a selection of nulls at both the
topmost row level and at the leaf level.
Returns
-------
The generated list column.
"""
def L(list_size, first_val):
return [
(gen(first_val, i) if i % 2 == 0 else None)
if include_validity
else (gen(first_val, i))
for i in range(list_size)
]
def R(first_val, lists_per_row, list_size):
return [
L(list_size, first_val + (list_size * i))
for i in range(lists_per_row)
]
return [
(
R(
lists_per_row * list_size * i,
lists_per_row,
list_size,
)
if i % 2 == 0
else None
)
if include_validity
else R(
lists_per_row * list_size * i,
lists_per_row,
list_size,
)
for i in range(num_rows)
]
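# A small illustrative call (with include_validity=False):
#   list_gen(int_gen, num_rows=2, lists_per_row=2, list_size=2)
# produces
#   [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
# i.e. each row is a List<List<int>> whose leaf values keep increasing
# across rows.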
def test_parquet_reader_list_large(tmpdir):
expect = pd.DataFrame({"a": list_gen(int_gen, 256, 80, 50)})
fname = tmpdir.join("test_parquet_reader_list_large.parquet")
expect.to_parquet(fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got, check_dtype=False)
def test_parquet_reader_list_validity(tmpdir):
expect = pd.DataFrame(
{"a": list_gen(int_gen, 256, 80, 50, include_validity=True)}
)
fname = tmpdir.join("test_parquet_reader_list_validity.parquet")
expect.to_parquet(fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got, check_dtype=False)
def test_parquet_reader_list_large_mixed(tmpdir):
expect = pd.DataFrame(
{
"a": list_gen(string_gen, 128, 80, 50),
"b": list_gen(int_gen, 128, 80, 50),
"c": list_gen(int_gen, 128, 80, 50, include_validity=True),
"d": list_gen(string_gen, 128, 80, 50, include_validity=True),
}
)
fname = tmpdir.join("test_parquet_reader_list_large_mixed.parquet")
expect.to_parquet(fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert pa.Table.from_pandas(expect).equals(got.to_arrow())
def test_parquet_reader_list_large_multi_rowgroup(tmpdir):
# > 40 row groups
num_rows = 100000
    num_docs = num_rows // 2
num_categories = 1_000
row_group_size = 1000
cupy.random.seed(0)
# generate a random pairing of doc: category
documents = cudf.DataFrame(
{
"document_id": cupy.random.randint(num_docs, size=num_rows),
"category_id": cupy.random.randint(num_categories, size=num_rows),
}
)
# group categories by document_id to create a list column
expect = documents.groupby("document_id").agg({"category_id": ["collect"]})
expect.columns = expect.columns.get_level_values(0)
expect.reset_index(inplace=True)
# round trip the dataframe to/from parquet
fname = tmpdir.join(
"test_parquet_reader_list_large_multi_rowgroup.parquet"
)
expect.to_pandas().to_parquet(fname, row_group_size=row_group_size)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_list_large_multi_rowgroup_nulls(tmpdir):
# 25 row groups
num_rows = 25000
row_group_size = 1000
expect = cudf.DataFrame(
{"a": list_gen(int_gen, num_rows, 3, 2, include_validity=True)}
)
# round trip the dataframe to/from parquet
fname = tmpdir.join(
"test_parquet_reader_list_large_multi_rowgroup_nulls.parquet"
)
expect.to_pandas().to_parquet(fname, row_group_size=row_group_size)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def struct_gen(gen, skip_rows, num_rows, include_validity=False):
"""
Generate a struct column based on input parameters.
Parameters
----------
    gen : An array of callables, each generating an individual field value
        based on an absolute index.
skip_rows : Generate the column as if it had started at 'skip_rows'
instead of 0. The intent here is to emulate the skip_rows
parameter of the parquet reader.
    num_rows : Number of rows (structs) to generate.
include_validity : Whether or not to include nulls as part of the
column. If true, it will add a selection of nulls at both the
field level and at the value level.
Returns
-------
The generated struct column.
"""
def R(first_val, num_fields):
return {
"col"
+ str(f): (gen[f](first_val, first_val) if f % 4 != 0 else None)
if include_validity
else (gen[f](first_val, first_val))
for f in range(len(gen))
}
return [
(R((i + skip_rows), len(gen)) if (i + skip_rows) % 4 != 0 else None)
if include_validity
else R((i + skip_rows), len(gen))
for i in range(num_rows)
]
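# A small illustrative call (with include_validity=False):
#   struct_gen([int_gen, string_gen], skip_rows=0, num_rows=2)
# produces
#   [{"col0": 0, "col1": "cats"}, {"col0": 2, "col1": "cows"}]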
@pytest.mark.parametrize(
"data",
[
# struct
[
{"a": 1, "b": 2},
{"a": 10, "b": 20},
{"a": None, "b": 22},
{"a": None, "b": None},
{"a": 15, "b": None},
],
# struct-of-list
[
{"a": 1, "b": 2, "c": [1, 2, 3]},
{"a": 10, "b": 20, "c": [4, 5]},
{"a": None, "b": 22, "c": [6]},
{"a": None, "b": None, "c": None},
{"a": 15, "b": None, "c": [-1, -2]},
None,
{"a": 100, "b": 200, "c": [-10, None, -20]},
],
# list-of-struct
[
[{"a": 1, "b": 2}, {"a": 2, "b": 3}, {"a": 4, "b": 5}],
None,
[{"a": 10, "b": 20}],
[{"a": 100, "b": 200}, {"a": None, "b": 300}, None],
],
# struct-of-struct
[
{"a": 1, "b": {"inner_a": 10, "inner_b": 20}, "c": 2},
{"a": 3, "b": {"inner_a": 30, "inner_b": 40}, "c": 4},
{"a": 5, "b": {"inner_a": 50, "inner_b": None}, "c": 6},
{"a": 7, "b": None, "c": 8},
{"a": None, "b": {"inner_a": None, "inner_b": None}, "c": None},
None,
{"a": None, "b": {"inner_a": None, "inner_b": 100}, "c": 10},
],
],
)
def test_parquet_reader_struct_basic(tmpdir, data):
expect = pa.Table.from_pydict({"struct": data})
fname = tmpdir.join("test_parquet_reader_struct_basic.parquet")
pa.parquet.write_table(expect, fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert expect.equals(got.to_arrow())
def select_columns_params():
dfs = [
# struct
(
[
{"a": 1, "b": 2},
{"a": 10, "b": 20},
{"a": None, "b": 22},
{"a": None, "b": None},
{"a": 15, "b": None},
],
[["struct"], ["struct.a"], ["struct.b"], ["c"]],
),
# struct-of-list
(
[
{"a": 1, "b": 2, "c": [1, 2, 3]},
{"a": 10, "b": 20, "c": [4, 5]},
{"a": None, "b": 22, "c": [6]},
{"a": None, "b": None, "c": None},
{"a": 15, "b": None, "c": [-1, -2]},
None,
{"a": 100, "b": 200, "c": [-10, None, -20]},
],
[
["struct"],
["struct.c"],
["struct.c.list"],
["struct.c.list.item"],
["struct.b", "struct.c"],
["struct.b", "struct.d", "struct.c"],
],
),
# list-of-struct
(
[
[{"a": 1, "b": 2}, {"a": 2, "b": 3}, {"a": 4, "b": 5}],
None,
[{"a": 10, "b": 20}],
[{"a": 100, "b": 200}, {"a": None, "b": 300}, None],
],
[
["struct"],
["struct.list"],
["struct.list.item"],
["struct.list.item.a", "struct.list.item.b"],
["struct.list.item.c"],
],
),
# struct with "." in field names
(
[
{"a.b": 1, "b.a": 2},
{"a.b": 10, "b.a": 20},
{"a.b": None, "b.a": 22},
{"a.b": None, "b.a": None},
{"a.b": 15, "b.a": None},
],
[["struct"], ["struct.a"], ["struct.b.a"]],
),
]
for df_col_pair in dfs:
for cols in df_col_pair[1]:
yield df_col_pair[0], cols
@pytest.mark.parametrize("data, columns", select_columns_params())
def test_parquet_reader_struct_select_columns(tmpdir, data, columns):
table = pa.Table.from_pydict({"struct": data})
buff = BytesIO()
pa.parquet.write_table(table, buff)
expect = pq.ParquetFile(buff).read(columns=columns)
got = cudf.read_parquet(buff, columns=columns)
assert expect.equals(got.to_arrow())
def test_parquet_reader_struct_los_large(tmpdir):
num_rows = 256
list_size = 64
data = [
struct_gen([string_gen, int_gen, string_gen], 0, list_size, False)
if i % 2 == 0
else None
for i in range(num_rows)
]
expect = pa.Table.from_pydict({"los": data})
fname = tmpdir.join("test_parquet_reader_struct_los_large.parquet")
pa.parquet.write_table(expect, fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert expect.equals(got.to_arrow())
@pytest.mark.parametrize(
"params", [[3, 4, 32, False], [3, 4, 32, True], [100, 25, 256, True]]
)
def test_parquet_reader_struct_sol_table(tmpdir, params):
# Struct<List<List>>
lists_per_row = params[0]
list_size = params[1]
num_rows = params[2]
include_validity = params[3]
def list_gen_wrapped(x, y):
return list_row_gen(
int_gen, x * list_size * lists_per_row, list_size, lists_per_row
)
def string_list_gen_wrapped(x, y):
return list_row_gen(
string_gen,
x * list_size * lists_per_row,
list_size,
lists_per_row,
include_validity,
)
data = struct_gen(
[int_gen, string_gen, list_gen_wrapped, string_list_gen_wrapped],
0,
num_rows,
include_validity,
)
expect = pa.Table.from_pydict({"sol": data})
fname = tmpdir.join("test_parquet_reader_struct_sol_table.parquet")
pa.parquet.write_table(expect, fname)
assert os.path.exists(fname)
got = cudf.read_parquet(fname)
assert expect.equals(got.to_arrow())
def test_parquet_reader_v2(tmpdir, simple_pdf):
pdf_fname = tmpdir.join("pdfv2.parquet")
simple_pdf.to_parquet(pdf_fname, data_page_version="2.0")
assert_eq(cudf.read_parquet(pdf_fname), simple_pdf)
cudf.from_pandas(simple_pdf).to_parquet(pdf_fname, header_version="2.0")
assert_eq(cudf.read_parquet(pdf_fname), simple_pdf)
def test_parquet_delta_byte_array(datadir):
fname = datadir / "delta_byte_arr.parquet"
assert_eq(cudf.read_parquet(fname), pd.read_parquet(fname))
def delta_num_rows():
return [1, 2, 23, 32, 33, 34, 64, 65, 66, 128, 129, 130, 20000, 50000]
@pytest.mark.parametrize("nrows", [1, 100000])
@pytest.mark.parametrize("add_nulls", [True, False])
@pytest.mark.parametrize(
"dtype",
[
"int8",
"int16",
"int32",
"int64",
],
)
def test_delta_binary(nrows, add_nulls, dtype, tmpdir):
null_frequency = 0.25 if add_nulls else 0
# Create a pandas dataframe with random data of mixed types
arrow_table = dg.rand_dataframe(
dtypes_meta=[
{
"dtype": dtype,
"null_frequency": null_frequency,
"cardinality": nrows,
},
],
rows=nrows,
seed=0,
use_threads=False,
)
# Roundabout conversion to pandas to preserve nulls/data types
cudf_table = cudf.DataFrame.from_arrow(arrow_table)
test_pdf = cudf_table.to_pandas(nullable=True)
pdf_fname = tmpdir.join("pdfv2.parquet")
test_pdf.to_parquet(
pdf_fname,
version="2.6",
column_encoding="DELTA_BINARY_PACKED",
data_page_version="2.0",
data_page_size=64 * 1024,
engine="pyarrow",
use_dictionary=False,
)
cdf = cudf.read_parquet(pdf_fname)
pcdf = cudf.from_pandas(test_pdf)
assert_eq(cdf, pcdf)
# Write back out with cudf and make sure pyarrow can read it
cudf_fname = tmpdir.join("cudfv2.parquet")
pcdf.to_parquet(
cudf_fname,
compression=None,
header_version="2.0",
use_dictionary=False,
)
# FIXME(ets): should probably not use more bits than the data type
try:
cdf2 = cudf.from_pandas(pd.read_parquet(cudf_fname))
except OSError as e:
if dtype == "int32" and nrows == 100000:
            pytest.xfail(
                reason="arrow does not support 33-bit delta encoding"
            )
else:
raise e
else:
assert_eq(cdf2, cdf)
@pytest.mark.parametrize("nrows", delta_num_rows())
@pytest.mark.parametrize("add_nulls", [True, False])
@pytest.mark.parametrize("str_encoding", ["DELTA_BYTE_ARRAY"])
def test_delta_byte_array_roundtrip(nrows, add_nulls, str_encoding, tmpdir):
null_frequency = 0.25 if add_nulls else 0
# Create a pandas dataframe with random data of mixed lengths
test_pdf = dg.rand_dataframe(
dtypes_meta=[
{
"dtype": "str",
"null_frequency": null_frequency,
"cardinality": nrows,
"max_string_length": 10,
},
{
"dtype": "str",
"null_frequency": null_frequency,
"cardinality": nrows,
"max_string_length": 100,
},
],
rows=nrows,
seed=0,
use_threads=False,
).to_pandas()
pdf_fname = tmpdir.join("pdfdeltaba.parquet")
test_pdf.to_parquet(
pdf_fname,
version="2.6",
column_encoding=str_encoding,
data_page_version="2.0",
data_page_size=64 * 1024,
engine="pyarrow",
use_dictionary=False,
)
cdf = cudf.read_parquet(pdf_fname)
pcdf = cudf.from_pandas(test_pdf)
assert_eq(cdf, pcdf)
@pytest.mark.parametrize("nrows", delta_num_rows())
@pytest.mark.parametrize("add_nulls", [True, False])
@pytest.mark.parametrize("str_encoding", ["DELTA_BYTE_ARRAY"])
def test_delta_struct_list(tmpdir, nrows, add_nulls, str_encoding):
# Struct<List<List>>
lists_per_row = 3
list_size = 4
num_rows = nrows
include_validity = add_nulls
def list_gen_wrapped(x, y):
return list_row_gen(
int_gen, x * list_size * lists_per_row, list_size, lists_per_row
)
def string_list_gen_wrapped(x, y):
return list_row_gen(
string_gen,
x * list_size * lists_per_row,
list_size,
lists_per_row,
include_validity,
)
data = struct_gen(
[int_gen, string_gen, list_gen_wrapped, string_list_gen_wrapped],
0,
num_rows,
include_validity,
)
test_pdf = pa.Table.from_pydict({"sol": data}).to_pandas()
pdf_fname = tmpdir.join("pdfdeltaba.parquet")
test_pdf.to_parquet(
pdf_fname,
version="2.6",
column_encoding={
"sol.col0": "DELTA_BINARY_PACKED",
"sol.col1": str_encoding,
"sol.col2.list.element.list.element": "DELTA_BINARY_PACKED",
"sol.col3.list.element.list.element": str_encoding,
},
data_page_version="2.0",
data_page_size=64 * 1024,
engine="pyarrow",
use_dictionary=False,
)
# sanity check to verify file is written properly
assert_eq(test_pdf, pd.read_parquet(pdf_fname))
cdf = cudf.read_parquet(pdf_fname)
assert_eq(cdf, cudf.from_pandas(test_pdf))
@pytest.mark.parametrize(
"data",
[
# Structs
{
"being": [
None,
{"human?": True, "Deets": {"Name": "Carrot", "Age": 27}},
{"human?": None, "Deets": {"Name": "Angua", "Age": 25}},
{"human?": False, "Deets": {"Name": "Cheery", "Age": 31}},
{"human?": False, "Deets": None},
{"human?": None, "Deets": {"Name": "Mr", "Age": None}},
]
},
# List of Structs
{
"family": [
[None, {"human?": True, "deets": {"weight": 2.4, "age": 27}}],
[
{"human?": None, "deets": {"weight": 5.3, "age": 25}},
{"human?": False, "deets": {"weight": 8.0, "age": 31}},
{"human?": False, "deets": None},
],
[],
[{"human?": None, "deets": {"weight": 6.9, "age": None}}],
]
},
# Struct of Lists
{
"Real estate records": [
None,
{
"Status": "NRI",
"Ownerships": {
"land_unit": [None, 2, None],
"flats": [[1, 2, 3], [], [4, 5], [], [0, 6, 0]],
},
},
{
"Status": None,
"Ownerships": {
"land_unit": [4, 5],
"flats": [[7, 8], []],
},
},
{
"Status": "RI",
"Ownerships": {"land_unit": None, "flats": [[]]},
},
{"Status": "RI", "Ownerships": None},
{
"Status": None,
"Ownerships": {
"land_unit": [7, 8, 9],
"flats": [[], [], []],
},
},
]
},
],
)
def test_parquet_reader_nested_v2(tmpdir, data):
expect = pd.DataFrame(data)
pdf_fname = tmpdir.join("pdfv2.parquet")
expect.to_parquet(pdf_fname, data_page_version="2.0")
assert_eq(cudf.read_parquet(pdf_fname), expect)
@pytest.mark.filterwarnings("ignore:Using CPU")
def test_parquet_writer_cpu_pyarrow(
tmpdir, pdf_day_timestamps, gdf_day_timestamps
):
pdf_fname = tmpdir.join("pdf.parquet")
gdf_fname = tmpdir.join("gdf.parquet")
if len(pdf_day_timestamps) == 0:
pdf_day_timestamps = pdf_day_timestamps.reset_index(drop=True)
        gdf_day_timestamps = gdf_day_timestamps.reset_index(drop=True)
pdf_day_timestamps.to_parquet(pdf_fname.strpath)
gdf_day_timestamps.to_parquet(gdf_fname.strpath, engine="pyarrow")
assert os.path.exists(pdf_fname)
assert os.path.exists(gdf_fname)
expect = pa.parquet.read_pandas(pdf_fname)
got = pa.parquet.read_pandas(gdf_fname)
assert_eq(expect, got)
def clone_field(table, name, datatype):
f = table.schema.field(name)
return pa.field(f.name, datatype, f.nullable, f.metadata)
# Pandas uses a datetime64[ns] while we use a datetime64[ms]
for t in [expect, got]:
for t_col in ["col_datetime64[ms]", "col_datetime64[us]"]:
idx = t.schema.get_field_index(t_col)
field = clone_field(t, t_col, pa.timestamp("ms"))
t = t.set_column(idx, field, t.column(idx).cast(field.type))
t = t.replace_schema_metadata()
assert_eq(expect, got)
@pytest.mark.filterwarnings("ignore:Using CPU")
def test_parquet_writer_int96_timestamps(tmpdir, pdf, gdf):
gdf_fname = tmpdir.join("gdf.parquet")
if len(pdf) == 0:
pdf = pdf.reset_index(drop=True)
gdf = gdf.reset_index(drop=True)
if "col_category" in pdf.columns:
pdf = pdf.drop(columns=["col_category"])
if "col_category" in gdf.columns:
gdf = gdf.drop(columns=["col_category"])
assert_eq(pdf, gdf)
# Write out the gdf using the GPU accelerated writer with INT96 timestamps
gdf.to_parquet(gdf_fname.strpath, index=None, int96_timestamps=True)
assert os.path.exists(gdf_fname)
expect = pdf
got = pd.read_parquet(gdf_fname)
# verify INT96 timestamps were converted back to the same data.
assert_eq(expect, got, check_categorical=False)
def test_multifile_parquet_folder(tmpdir):
test_pdf1 = make_pdf(nrows=10, nvalids=10 // 2)
test_pdf2 = make_pdf(nrows=20)
expect = pd.concat([test_pdf1, test_pdf2])
tmpdir.mkdir("multi_part")
create_parquet_source(
test_pdf1, "filepath", tmpdir.join("multi_part/multi1.parquet")
)
create_parquet_source(
test_pdf2, "filepath", tmpdir.join("multi_part/multi2.parquet")
)
got1 = cudf.read_parquet(tmpdir.join("multi_part/*.parquet"))
assert_eq(expect, got1)
got2 = cudf.read_parquet(tmpdir.join("multi_part"))
assert_eq(expect, got2)
# Validates the metadata return path of the parquet writer
def test_parquet_writer_return_metadata(tmpdir, simple_gdf):
gdf_fname = tmpdir.join("data1.parquet")
# Write out the gdf using the GPU accelerated writer
df_metadata = simple_gdf.to_parquet(
gdf_fname.strpath, index=None, metadata_file_path="test/data1.parquet"
)
# Verify that we got a valid parquet signature in the initial metadata blob
assert df_metadata.tobytes()[0:4] == b"PAR1"
df_metadata_list1 = [df_metadata]
df_metadata_list2 = [df_metadata, df_metadata]
merged_metadata1 = merge_parquet_filemetadata(df_metadata_list1)
merged_metadata2 = merge_parquet_filemetadata(df_metadata_list2)
# Verify that we got a valid parquet signature in the final metadata blob
assert merged_metadata1.tobytes()[0:4] == b"PAR1"
assert merged_metadata2.tobytes()[0:4] == b"PAR1"
# Make sure aggregation is combining metadata correctly
fmd1 = pa.parquet.ParquetFile(BytesIO(merged_metadata1.tobytes())).metadata
fmd2 = pa.parquet.ParquetFile(BytesIO(merged_metadata2.tobytes())).metadata
assert fmd2.num_columns == fmd1.num_columns
assert fmd2.num_rows == 2 * fmd1.num_rows
assert fmd2.num_row_groups == 2 * fmd1.num_row_groups
# Validates the integrity of the GPU accelerated parquet writer.
def test_parquet_writer_gpu_none_index(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
pdf_fname = tmpdir.join("pdf.parquet")
assert_eq(simple_pdf, simple_gdf)
# Write out the gdf using the GPU accelerated writer
simple_gdf.to_parquet(gdf_fname.strpath, index=None)
simple_pdf.to_parquet(pdf_fname.strpath, index=None)
assert os.path.exists(gdf_fname)
assert os.path.exists(pdf_fname)
expect = pd.read_parquet(pdf_fname)
got = pd.read_parquet(gdf_fname)
assert_eq(expect, got, check_categorical=False)
def test_parquet_writer_gpu_true_index(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
pdf_fname = tmpdir.join("pdf.parquet")
assert_eq(simple_pdf, simple_gdf)
# Write out the gdf using the GPU accelerated writer
simple_gdf.to_parquet(gdf_fname.strpath, index=True)
simple_pdf.to_parquet(pdf_fname.strpath, index=True)
assert os.path.exists(gdf_fname)
assert os.path.exists(pdf_fname)
expect = pd.read_parquet(pdf_fname)
got = pd.read_parquet(gdf_fname)
assert_eq(expect, got, check_categorical=False)
def test_parquet_writer_gpu_false_index(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
pdf_fname = tmpdir.join("pdf.parquet")
assert_eq(simple_pdf, simple_gdf)
# Write out the gdf using the GPU accelerated writer
simple_gdf.to_parquet(gdf_fname.strpath, index=False)
simple_pdf.to_parquet(pdf_fname.strpath, index=False)
assert os.path.exists(gdf_fname)
assert os.path.exists(pdf_fname)
expect = pd.read_parquet(pdf_fname)
got = pd.read_parquet(gdf_fname)
assert_eq(expect, got, check_categorical=False)
def test_parquet_writer_gpu_multi_index(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
pdf_fname = tmpdir.join("pdf.parquet")
simple_pdf = simple_pdf.set_index(["col_bool", "col_int8"])
simple_gdf = simple_gdf.set_index(["col_bool", "col_int8"])
assert_eq(simple_pdf, simple_gdf)
print("PDF Index Type: " + str(type(simple_pdf.index)))
print("GDF Index Type: " + str(type(simple_gdf.index)))
# Write out the gdf using the GPU accelerated writer
simple_gdf.to_parquet(gdf_fname.strpath, index=None)
simple_pdf.to_parquet(pdf_fname.strpath, index=None)
assert os.path.exists(gdf_fname)
assert os.path.exists(pdf_fname)
expect = pd.read_parquet(pdf_fname)
got = pd.read_parquet(gdf_fname)
assert_eq(expect, got, check_categorical=False)
def test_parquet_writer_gpu_chunked(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
writer = ParquetWriter(gdf_fname)
writer.write_table(simple_gdf)
writer.write_table(simple_gdf)
writer.close()
assert_eq(pd.read_parquet(gdf_fname), pd.concat([simple_pdf, simple_pdf]))
def test_parquet_writer_gpu_chunked_context(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
with ParquetWriter(gdf_fname) as writer:
writer.write_table(simple_gdf)
writer.write_table(simple_gdf)
got = pd.read_parquet(gdf_fname)
expect = pd.concat([simple_pdf, simple_pdf])
assert_eq(got, expect)
def test_parquet_write_bytes_io(simple_gdf):
output = BytesIO()
simple_gdf.to_parquet(output)
assert_eq(cudf.read_parquet(output), simple_gdf)
def test_parquet_writer_bytes_io(simple_gdf):
output = BytesIO()
writer = ParquetWriter(output)
writer.write_table(simple_gdf)
writer.write_table(simple_gdf)
writer.close()
assert_eq(cudf.read_parquet(output), cudf.concat([simple_gdf, simple_gdf]))
@pytest.mark.parametrize(
"row_group_size_kwargs",
[
{"row_group_size_bytes": 4 * 1024},
{"row_group_size_rows": 5000},
],
)
def test_parquet_writer_row_group_size(tmpdir, row_group_size_kwargs):
# Check that row_group_size options are exposed in Python
# See https://github.com/rapidsai/cudf/issues/10978
size = 20000
gdf = cudf.DataFrame({"a": range(size), "b": [1] * size})
fname = tmpdir.join("gdf.parquet")
with ParquetWriter(fname, **row_group_size_kwargs) as writer:
writer.write_table(gdf)
# Simple check for multiple row-groups
nrows, nrow_groups, columns = cudf.io.parquet.read_parquet_metadata(fname)
assert nrows == size
assert nrow_groups > 1
assert columns == ["a", "b"]
# Know the specific row-group count for row_group_size_rows
if "row_group_size_rows" in row_group_size_kwargs:
assert (
nrow_groups == size // row_group_size_kwargs["row_group_size_rows"]
)
assert_eq(cudf.read_parquet(fname), gdf)
def test_parquet_writer_column_index(tmpdir):
# Simple test for presence of indices. validity is checked
# in libcudf tests.
# Write 2 files, one with column index set, one without.
# Make sure the former is larger in size.
size = 20000
gdf = cudf.DataFrame({"a": range(size), "b": [1] * size})
fname = tmpdir.join("gdf.parquet")
with ParquetWriter(fname, statistics="ROWGROUP") as writer:
writer.write_table(gdf)
s1 = os.path.getsize(fname)
fname = tmpdir.join("gdfi.parquet")
with ParquetWriter(fname, statistics="COLUMN") as writer:
writer.write_table(gdf)
s2 = os.path.getsize(fname)
assert s2 > s1
@pytest.mark.parametrize(
"max_page_size_kwargs",
[
{"max_page_size_bytes": 4 * 1024},
{"max_page_size_rows": 5000},
],
)
def test_parquet_writer_max_page_size(tmpdir, max_page_size_kwargs):
# Check that max_page_size options are exposed in Python
# Since we don't have access to page metadata, instead check that
# file written with more pages will be slightly larger
size = 20000
gdf = cudf.DataFrame({"a": range(size), "b": [1] * size})
fname = tmpdir.join("gdf.parquet")
with ParquetWriter(fname, **max_page_size_kwargs) as writer:
writer.write_table(gdf)
s1 = os.path.getsize(fname)
assert_eq(cudf.read_parquet(fname), gdf)
fname = tmpdir.join("gdf0.parquet")
with ParquetWriter(fname) as writer:
writer.write_table(gdf)
s2 = os.path.getsize(fname)
assert_eq(cudf.read_parquet(fname), gdf)
assert s1 > s2
@pytest.mark.parametrize("filename", ["myfile.parquet", None])
@pytest.mark.parametrize("cols", [["b"], ["c", "b"]])
def test_parquet_partitioned(tmpdir_factory, cols, filename):
# Checks that write_to_dataset is wrapping to_parquet
# as expected
gdf_dir = str(tmpdir_factory.mktemp("gdf_dir"))
pdf_dir = str(tmpdir_factory.mktemp("pdf_dir"))
size = 100
pdf = pd.DataFrame(
{
"a": np.arange(0, stop=size, dtype="int64"),
"b": np.random.choice(list("abcd"), size=size),
"c": np.random.choice(np.arange(4), size=size),
}
)
pdf.to_parquet(pdf_dir, index=False, partition_cols=cols)
gdf = cudf.from_pandas(pdf)
gdf.to_parquet(
gdf_dir, index=False, partition_cols=cols, partition_file_name=filename
)
# Read back with pandas to compare
expect_pd = pd.read_parquet(pdf_dir)
got_pd = pd.read_parquet(gdf_dir)
assert_eq(expect_pd, got_pd)
# Check that cudf and pd return the same read
got_cudf = cudf.read_parquet(gdf_dir)
assert_eq(got_pd, got_cudf)
# If filename is specified, check that it is correct
if filename:
for _, _, files in os.walk(gdf_dir):
for fn in files:
assert fn == filename
@pytest.mark.parametrize("return_meta", [True, False])
def test_parquet_writer_chunked_partitioned(tmpdir_factory, return_meta):
pdf_dir = str(tmpdir_factory.mktemp("pdf_dir"))
gdf_dir = str(tmpdir_factory.mktemp("gdf_dir"))
df1 = cudf.DataFrame({"a": [1, 1, 2, 2, 1], "b": [9, 8, 7, 6, 5]})
df2 = cudf.DataFrame({"a": [1, 3, 3, 1, 3], "b": [4, 3, 2, 1, 0]})
cw = ParquetDatasetWriter(gdf_dir, partition_cols=["a"], index=False)
cw.write_table(df1)
cw.write_table(df2)
meta_byte_array = cw.close(return_metadata=return_meta)
pdf = cudf.concat([df1, df2]).to_pandas()
pdf.to_parquet(pdf_dir, index=False, partition_cols=["a"])
if return_meta:
fmd = pq.ParquetFile(BytesIO(meta_byte_array)).metadata
assert fmd.num_rows == len(pdf)
assert fmd.num_row_groups == 4
files = {
os.path.join(directory, files[0])
for directory, _, files in os.walk(gdf_dir)
if files
}
meta_files = {
os.path.join(gdf_dir, fmd.row_group(i).column(c).file_path)
for i in range(fmd.num_row_groups)
for c in range(fmd.row_group(i).num_columns)
}
assert files == meta_files
# Read back with pandas to compare
expect_pd = pd.read_parquet(pdf_dir)
got_pd = pd.read_parquet(gdf_dir)
assert_eq(expect_pd, got_pd)
# Check that cudf and pd return the same read
got_cudf = cudf.read_parquet(gdf_dir)
assert_eq(got_pd, got_cudf)
@pytest.mark.parametrize(
"max_file_size,max_file_size_in_bytes",
[("500KB", 500000), ("MB", 1000000)],
)
def test_parquet_writer_chunked_max_file_size(
tmpdir_factory, max_file_size, max_file_size_in_bytes
):
pdf_dir = str(tmpdir_factory.mktemp("pdf_dir"))
gdf_dir = str(tmpdir_factory.mktemp("gdf_dir"))
df1 = cudf.DataFrame({"a": [1, 1, 2, 2, 1] * 10000, "b": range(0, 50000)})
df2 = cudf.DataFrame(
{"a": [1, 3, 3, 1, 3] * 10000, "b": range(50000, 100000)}
)
cw = ParquetDatasetWriter(
gdf_dir,
partition_cols=["a"],
max_file_size=max_file_size,
file_name_prefix="sample",
)
cw.write_table(df1)
cw.write_table(df2)
cw.close()
pdf = cudf.concat([df1, df2]).to_pandas()
pdf.to_parquet(pdf_dir, index=False, partition_cols=["a"])
expect_pd = pd.read_parquet(pdf_dir)
got_pd = pd.read_parquet(gdf_dir)
assert_eq(
expect_pd.sort_values(["b"]).reset_index(drop=True),
got_pd.sort_values(["b"]).reset_index(drop=True),
)
# Check that cudf and pd return the same read
got_cudf = cudf.read_parquet(gdf_dir)
assert_eq(
got_pd.sort_values(["b"]).reset_index(drop=True),
got_cudf.sort_values(["b"]).reset_index(drop=True),
)
all_files = glob.glob(gdf_dir + "/**/*.parquet", recursive=True)
for each_file in all_files:
# Validate file sizes with some extra 1000
# bytes buffer to spare
assert os.path.getsize(each_file) <= (
max_file_size_in_bytes
), "File exceeded max_file_size"
def test_parquet_writer_chunked_max_file_size_error():
with pytest.raises(
ValueError,
match="file_name_prefix cannot be None if max_file_size is passed",
):
ParquetDatasetWriter("sample", partition_cols=["a"], max_file_size=100)
def test_parquet_writer_chunked_partitioned_context(tmpdir_factory):
pdf_dir = str(tmpdir_factory.mktemp("pdf_dir"))
gdf_dir = str(tmpdir_factory.mktemp("gdf_dir"))
df1 = cudf.DataFrame({"a": [1, 1, 2, 2, 1], "b": [9, 8, 7, 6, 5]})
df2 = cudf.DataFrame({"a": [1, 3, 3, 1, 3], "b": [4, 3, 2, 1, 0]})
with ParquetDatasetWriter(
gdf_dir, partition_cols=["a"], index=False
) as cw:
cw.write_table(df1)
cw.write_table(df2)
pdf = cudf.concat([df1, df2]).to_pandas()
pdf.to_parquet(pdf_dir, index=False, partition_cols=["a"])
# Read back with pandas to compare
expect_pd = pd.read_parquet(pdf_dir)
got_pd = pd.read_parquet(gdf_dir)
assert_eq(expect_pd, got_pd)
# Check that cudf and pd return the same read
got_cudf = cudf.read_parquet(gdf_dir)
assert_eq(got_pd, got_cudf)
@pytest.mark.parametrize("cols", [None, ["b"]])
def test_parquet_write_to_dataset(tmpdir_factory, cols):
dir1 = tmpdir_factory.mktemp("dir1")
dir2 = tmpdir_factory.mktemp("dir2")
if cols is None:
dir1 = dir1.join("file.pq")
dir2 = dir2.join("file.pq")
dir1 = str(dir1)
dir2 = str(dir2)
size = 100
gdf = cudf.DataFrame(
{
"a": np.arange(0, stop=size),
"b": np.random.choice(np.arange(4), size=size),
}
)
gdf.to_parquet(dir1, partition_cols=cols)
cudf.io.write_to_dataset(gdf, dir2, partition_cols=cols)
# Read back with cudf
expect = cudf.read_parquet(dir1)
got = cudf.read_parquet(dir2)
assert_eq(expect, got)
gdf = cudf.DataFrame(
{
"a": cudf.Series([1, 2, 3]),
"b": cudf.Series([1, 2, 3]),
"c": cudf.Series(["a", "b", "c"], dtype="category"),
}
)
with pytest.raises(ValueError):
gdf.to_parquet(dir1, partition_cols=cols)
@pytest.mark.parametrize(
"pfilters",
[[("b", "==", "b")], [("b", "==", "a"), ("c", "==", 1)]],
)
@pytest.mark.parametrize("selection", ["directory", "files", "row-groups"])
@pytest.mark.parametrize("use_cat", [True, False])
def test_read_parquet_partitioned_filtered(
tmpdir, pfilters, selection, use_cat
):
path = str(tmpdir)
size = 100
df = cudf.DataFrame(
{
"a": np.arange(0, stop=size, dtype="int64"),
"b": np.random.choice(list("abcd"), size=size),
"c": np.random.choice(np.arange(4), size=size),
}
)
df.to_parquet(path, partition_cols=["c", "b"])
if selection == "files":
# Pass in a list of paths
fs = get_fs_token_paths(path)[0]
read_path = fs.find(path)
row_groups = None
elif selection == "row-groups":
# Pass in a list of paths AND row-group ids
fs = get_fs_token_paths(path)[0]
read_path = fs.find(path)
row_groups = [[0] for p in read_path]
else:
# Pass in a directory path
# (row-group selection not allowed in this case)
read_path = path
row_groups = None
# Filter on partitioned columns
expect = pd.read_parquet(read_path, filters=pfilters)
got = cudf.read_parquet(
read_path,
filters=pfilters,
row_groups=row_groups,
categorical_partitions=use_cat,
)
expect["b"] = expect["b"].astype(str)
expect["c"] = expect["c"].astype(int)
if use_cat:
assert got.dtypes["b"] == "category"
assert got.dtypes["c"] == "category"
got["b"] = got["b"].astype(str)
got["c"] = got["c"].astype(int)
else:
# Check that we didn't get categorical
# columns, but convert back to categorical
# for comparison with pandas
assert got.dtypes["b"] == "object"
assert got.dtypes["c"] == "int"
assert_eq(expect, got)
# Filter on non-partitioned column
filters = [("a", "==", 10)]
got = cudf.read_parquet(read_path, filters=filters)
    expect = pd.read_parquet(read_path, filters=filters)
    assert_eq(expect, got)
# Filter on both kinds of columns
filters = [[("a", "==", 10)], [("c", "==", 1)]]
got = cudf.read_parquet(read_path, filters=filters)
expect = pd.read_parquet(read_path, filters=filters)
assert_eq(expect, got)
def test_parquet_writer_chunked_metadata(tmpdir, simple_pdf, simple_gdf):
gdf_fname = tmpdir.join("gdf.parquet")
test_path = "test/path"
writer = ParquetWriter(gdf_fname)
writer.write_table(simple_gdf)
writer.write_table(simple_gdf)
meta_byte_array = writer.close(metadata_file_path=test_path)
fmd = pq.ParquetFile(BytesIO(meta_byte_array)).metadata
assert fmd.num_rows == 2 * len(simple_gdf)
assert fmd.num_row_groups == 2
for r in range(fmd.num_row_groups):
for c in range(fmd.num_columns):
assert fmd.row_group(r).column(c).file_path == test_path
def test_write_read_cudf(tmpdir, pdf):
file_path = tmpdir.join("cudf.parquet")
if "col_category" in pdf.columns:
pdf = pdf.drop(columns=["col_category"])
gdf = cudf.from_pandas(pdf)
gdf.to_parquet(file_path)
gdf = cudf.read_parquet(file_path)
assert_eq(gdf, pdf, check_index_type=not pdf.empty)
def test_write_cudf_read_pandas_pyarrow(tmpdir, pdf):
cudf_path = tmpdir.join("cudf.parquet")
pandas_path = tmpdir.join("pandas.parquet")
if "col_category" in pdf.columns:
pdf = pdf.drop(columns=["col_category"])
df = cudf.from_pandas(pdf)
df.to_parquet(cudf_path)
pdf.to_parquet(pandas_path)
cudf_res = pd.read_parquet(cudf_path)
pd_res = pd.read_parquet(pandas_path)
assert_eq(pd_res, cudf_res, check_index_type=not pdf.empty)
cudf_res = pa.parquet.read_table(
cudf_path, use_pandas_metadata=True
).to_pandas()
pd_res = pa.parquet.read_table(
pandas_path, use_pandas_metadata=True
).to_pandas()
assert_eq(cudf_res, pd_res, check_index_type=not pdf.empty)
def test_parquet_writer_criteo(tmpdir):
# To run this test, download the day 0 of criteo dataset from
# http://labs.criteo.com/2013/12/download-terabyte-click-logs/
# and place the uncompressed dataset in the home directory
fname = os.path.expanduser("~/day_0")
if not os.path.isfile(fname):
pytest.skip("Local criteo day 0 tsv file is not found")
cudf_path = tmpdir.join("cudf.parquet")
cont_names = ["I" + str(x) for x in range(1, 14)]
cat_names = ["C" + str(x) for x in range(1, 27)]
cols = ["label"] + cont_names + cat_names
df = cudf.read_csv(fname, sep="\t", names=cols, byte_range=(0, 1000000000))
df = df.drop(columns=cont_names)
df.to_parquet(cudf_path)
def test_trailing_nans(datadir, tmpdir):
fname = "trailing_nans.parquet"
file_path = datadir / fname
cu_df = cudf.read_parquet(file_path)
tmp_file_path = tmpdir.join(fname)
cu_df.to_parquet(tmp_file_path)
pd.read_parquet(tmp_file_path)
def test_parquet_writer_sliced(tmpdir):
cudf_path = tmpdir.join("cudf.parquet")
df = pd.DataFrame()
df["String"] = np.array(["Alpha", "Beta", "Gamma", "Delta"])
df = cudf.from_pandas(df)
df_select = df.iloc[1:3]
df_select.to_parquet(cudf_path)
assert_eq(cudf.read_parquet(cudf_path), df_select)
def test_parquet_writer_list_basic(tmpdir):
expect = pd.DataFrame({"a": [[[1, 2], [3, 4]], None, [[5, 6], None]]})
fname = tmpdir.join("test_parquet_writer_list_basic.parquet")
gdf = cudf.from_pandas(expect)
gdf.to_parquet(fname)
assert os.path.exists(fname)
got = pd.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_writer_list_large(tmpdir):
expect = pd.DataFrame({"a": list_gen(int_gen, 256, 80, 50)})
fname = tmpdir.join("test_parquet_writer_list_large.parquet")
gdf = cudf.from_pandas(expect)
gdf.to_parquet(fname)
assert os.path.exists(fname)
got = pd.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_writer_list_large_mixed(tmpdir):
expect = pd.DataFrame(
{
"a": list_gen(string_gen, 128, 80, 50),
"b": list_gen(int_gen, 128, 80, 50),
"c": list_gen(int_gen, 128, 80, 50, include_validity=True),
"d": list_gen(string_gen, 128, 80, 50, include_validity=True),
}
)
fname = tmpdir.join("test_parquet_writer_list_large_mixed.parquet")
gdf = cudf.from_pandas(expect)
gdf.to_parquet(fname)
assert os.path.exists(fname)
got = pd.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_writer_list_chunked(tmpdir):
table1 = cudf.DataFrame(
{
"a": list_gen(string_gen, 128, 80, 50),
"b": list_gen(int_gen, 128, 80, 50),
"c": list_gen(int_gen, 128, 80, 50, include_validity=True),
"d": list_gen(string_gen, 128, 80, 50, include_validity=True),
}
)
table2 = cudf.DataFrame(
{
"a": list_gen(string_gen, 128, 80, 50),
"b": list_gen(int_gen, 128, 80, 50),
"c": list_gen(int_gen, 128, 80, 50, include_validity=True),
"d": list_gen(string_gen, 128, 80, 50, include_validity=True),
}
)
fname = tmpdir.join("test_parquet_writer_list_chunked.parquet")
expect = cudf.concat([table1, table2])
expect = expect.reset_index(drop=True)
writer = ParquetWriter(fname)
writer.write_table(table1)
writer.write_table(table2)
writer.close()
assert os.path.exists(fname)
got = pd.read_parquet(fname)
assert_eq(expect, got)
@pytest.mark.parametrize("engine", ["cudf", "pyarrow"])
def test_parquet_nullable_boolean(tmpdir, engine):
pandas_path = tmpdir.join("pandas_bools.parquet")
pdf = pd.DataFrame(
{
"a": pd.Series(
[True, False, None, True, False], dtype=pd.BooleanDtype()
)
}
)
expected_gdf = cudf.DataFrame({"a": [True, False, None, True, False]})
pdf.to_parquet(pandas_path)
with _hide_pyarrow_parquet_cpu_warnings(engine):
actual_gdf = cudf.read_parquet(pandas_path, engine=engine)
assert_eq(actual_gdf, expected_gdf)
def run_parquet_index(pdf, index):
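    # Round-trip helper: write the same frame with both pandas and cudf,
    # then cross-read each buffer with the other library and compare.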
pandas_buffer = BytesIO()
cudf_buffer = BytesIO()
gdf = cudf.from_pandas(pdf)
pdf.to_parquet(pandas_buffer, index=index)
gdf.to_parquet(cudf_buffer, index=index)
expected = pd.read_parquet(cudf_buffer)
actual = cudf.read_parquet(pandas_buffer)
assert_eq(expected, actual, check_index_type=True)
expected = pd.read_parquet(pandas_buffer)
actual = cudf.read_parquet(cudf_buffer)
assert_eq(expected, actual, check_index_type=True)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame(index=[1, 2, 3]),
pd.DataFrame({"a": [1, 2, 3]}, index=[0.43534, 345, 0.34534]),
pd.DataFrame(
{"b": [11, 22, 33], "c": ["a", "b", "c"]},
index=pd.Index(["a", "b", "c"], name="custom name"),
),
pd.DataFrame(
{"a": [10, 11, 12], "b": [99, 88, 77]},
index=pd.RangeIndex(12, 17, 2),
),
pd.DataFrame(
{"b": [99, 88, 77]},
index=pd.RangeIndex(22, 27, 2, name="hello index"),
),
pd.DataFrame(index=pd.Index(["a", "b", "c"], name="custom name")),
pd.DataFrame(
{"a": ["a", "bb", "cc"], "b": [10, 21, 32]},
index=pd.MultiIndex.from_tuples([[1, 2], [10, 11], [15, 16]]),
),
pd.DataFrame(
{"a": ["a", "bb", "cc"], "b": [10, 21, 32]},
index=pd.MultiIndex.from_tuples(
[[1, 2], [10, 11], [15, 16]], names=["first", "second"]
),
),
],
)
@pytest.mark.parametrize("index", [None, True, False])
def test_parquet_index(pdf, index):
run_parquet_index(pdf, index)
@pytest.mark.parametrize("index", [None, True])
@pytest.mark.xfail(
reason="https://github.com/rapidsai/cudf/issues/12243",
)
def test_parquet_index_empty(index):
pdf = pd.DataFrame(index=pd.RangeIndex(0, 10, 1))
run_parquet_index(pdf, index)
def test_parquet_no_index_empty():
pdf = pd.DataFrame(index=pd.RangeIndex(0, 10, 1))
run_parquet_index(pdf, index=False)
@pytest.mark.parametrize("engine", ["cudf", "pyarrow"])
def test_parquet_allnull_str(tmpdir, engine):
pandas_path = tmpdir.join("pandas_allnulls.parquet")
pdf = pd.DataFrame(
{"a": pd.Series([None, None, None, None, None], dtype="str")}
)
expected_gdf = cudf.DataFrame(
{"a": cudf.Series([None, None, None, None, None], dtype="str")}
)
pdf.to_parquet(pandas_path)
with _hide_pyarrow_parquet_cpu_warnings(engine):
actual_gdf = cudf.read_parquet(pandas_path, engine=engine)
assert_eq(actual_gdf, expected_gdf)
def normalized_equals(value1, value2):
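    # Normalize pandas NA/NaT to None, strip timezone info from timestamps,
    # and compare floats with math.isclose so parquet statistics can be
    # compared against values computed by pandas.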
if value1 is pd.NA or value1 is pd.NaT:
value1 = None
if value2 is pd.NA or value2 is pd.NaT:
value2 = None
if isinstance(value1, pd.Timestamp):
value1 = value1.to_pydatetime()
if isinstance(value2, pd.Timestamp):
value2 = value2.to_pydatetime()
if isinstance(value1, datetime.datetime):
value1 = value1.replace(tzinfo=None)
if isinstance(value2, datetime.datetime):
value2 = value2.replace(tzinfo=None)
# if one is datetime then both values are datetimes now
if isinstance(value1, datetime.datetime):
return value1 == value2
# Compare integers with floats now
if isinstance(value1, float) or isinstance(value2, float):
return math.isclose(value1, value2)
return value1 == value2
@pytest.mark.parametrize("add_nulls", [True, False])
def test_parquet_writer_statistics(tmpdir, pdf, add_nulls):
file_path = tmpdir.join("cudf.parquet")
if "col_category" in pdf.columns:
pdf = pdf.drop(columns=["col_category", "col_bool"])
if not add_nulls:
# Timedelta types convert NaT to None when reading from parquet into
# pandas which interferes with series.max()/min()
for t in TIMEDELTA_TYPES:
pdf["col_" + t] = pd.Series(np.arange(len(pdf.index))).astype(t)
# pyarrow can't read values with non-zero nanoseconds
pdf["col_timedelta64[ns]"] = pdf["col_timedelta64[ns]"] * 1000
gdf = cudf.from_pandas(pdf)
if add_nulls:
for col in gdf:
set_random_null_mask_inplace(gdf[col])
gdf.to_parquet(file_path, index=False)
# Read back from pyarrow
pq_file = pq.ParquetFile(file_path)
# verify each row group's statistics
for rg in range(0, pq_file.num_row_groups):
pd_slice = pq_file.read_row_group(rg).to_pandas()
# statistics are per-column. So need to verify independently
for i, col in enumerate(pd_slice):
stats = pq_file.metadata.row_group(rg).column(i).statistics
actual_min = pd_slice[col].min()
stats_min = stats.min
assert normalized_equals(actual_min, stats_min)
actual_max = pd_slice[col].max()
stats_max = stats.max
assert normalized_equals(actual_max, stats_max)
assert stats.null_count == pd_slice[col].isna().sum()
assert stats.num_values == pd_slice[col].count()
def test_parquet_writer_list_statistics(tmpdir):
df = pd.DataFrame(
{
"a": list_gen(string_gen, 128, 80, 50),
"b": list_gen(int_gen, 128, 80, 50),
"c": list_gen(int_gen, 128, 80, 50, include_validity=True),
"d": list_gen(string_gen, 128, 80, 50, include_validity=True),
}
)
fname = tmpdir.join("test_parquet_writer_list_statistics.parquet")
gdf = cudf.from_pandas(df)
gdf.to_parquet(fname)
assert os.path.exists(fname)
# Read back from pyarrow
pq_file = pq.ParquetFile(fname)
# verify each row group's statistics
for rg in range(0, pq_file.num_row_groups):
pd_slice = pq_file.read_row_group(rg).to_pandas()
# statistics are per-column. So need to verify independently
for i, col in enumerate(pd_slice):
stats = pq_file.metadata.row_group(rg).column(i).statistics
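            # list columns are exploded (twice, to cover nested lists) and
            # nulls dropped before computing min/max on the flattened values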
actual_min = pd_slice[col].explode().explode().dropna().min()
stats_min = stats.min
assert normalized_equals(actual_min, stats_min)
actual_max = pd_slice[col].explode().explode().dropna().max()
stats_max = stats.max
assert normalized_equals(actual_max, stats_max)
@pytest.mark.parametrize(
"data",
[
# Structs
{
"being": [
None,
{"human?": True, "Deets": {"Name": "Carrot", "Age": 27}},
{"human?": None, "Deets": {"Name": "Angua", "Age": 25}},
{"human?": False, "Deets": {"Name": "Cheery", "Age": 31}},
{"human?": False, "Deets": None},
{"human?": None, "Deets": {"Name": "Mr", "Age": None}},
]
},
# List of Structs
{
"family": [
[None, {"human?": True, "deets": {"weight": 2.4, "age": 27}}],
[
{"human?": None, "deets": {"weight": 5.3, "age": 25}},
{"human?": False, "deets": {"weight": 8.0, "age": 31}},
{"human?": False, "deets": None},
],
[],
[{"human?": None, "deets": {"weight": 6.9, "age": None}}],
]
},
# Struct of Lists
pytest.param(
{
"Real estate records": [
None,
{
"Status": "NRI",
"Ownerships": {
"land_unit": [None, 2, None],
"flats": [[1, 2, 3], [], [4, 5], [], [0, 6, 0]],
},
},
{
"Status": None,
"Ownerships": {
"land_unit": [4, 5],
"flats": [[7, 8], []],
},
},
{
"Status": "RI",
"Ownerships": {"land_unit": None, "flats": [[]]},
},
{"Status": "RI", "Ownerships": None},
{
"Status": None,
"Ownerships": {
"land_unit": [7, 8, 9],
"flats": [[], [], []],
},
},
]
},
marks=pytest.mark.xfail(
condition=PANDAS_LT_153,
reason="pandas assertion fixed in pandas 1.5.3",
),
),
],
)
def test_parquet_writer_nested(tmpdir, data):
expect = pd.DataFrame(data)
gdf = cudf.from_pandas(expect)
fname = tmpdir.join("test_parquet_writer_nested.parquet")
gdf.to_parquet(fname)
assert os.path.exists(fname)
got = pd.read_parquet(fname)
assert_eq(expect, got)
@pytest.mark.parametrize(
"decimal_type",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype, cudf.Decimal128Dtype],
)
@pytest.mark.parametrize("data", [[1, 2, 3], [0.00, 0.01, None, 0.5]])
def test_parquet_writer_decimal(decimal_type, data):
gdf = cudf.DataFrame({"val": data})
gdf["dec_val"] = gdf["val"].astype(decimal_type(7, 2))
buff = BytesIO()
gdf.to_parquet(buff)
got = pd.read_parquet(buff, use_nullable_dtypes=True)
assert_eq(gdf.to_pandas(nullable=True), got)
def test_parquet_writer_column_validation():
df = cudf.DataFrame({1: [1, 2, 3], "1": ["a", "b", "c"]})
pdf = df.to_pandas()
assert_exceptions_equal(
lfunc=df.to_parquet,
rfunc=pdf.to_parquet,
lfunc_args_and_kwargs=(["cudf.parquet"],),
rfunc_args_and_kwargs=(["pandas.parquet"],),
)
def test_parquet_writer_nulls_pandas_read(tmpdir, pdf):
if "col_bool" in pdf.columns:
pdf.drop(columns="col_bool", inplace=True)
if "col_category" in pdf.columns:
pdf.drop(columns="col_category", inplace=True)
gdf = cudf.from_pandas(pdf)
num_rows = len(gdf)
if num_rows > 0:
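        # inject a single null at a random position in every column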
for col in gdf.columns:
gdf[col][random.randint(0, num_rows - 1)] = None
fname = tmpdir.join("test_parquet_writer_nulls_pandas_read.parquet")
gdf.to_parquet(fname)
assert os.path.exists(fname)
got = pd.read_parquet(fname)
nullable = num_rows > 0
assert_eq(gdf.to_pandas(nullable=nullable), got)
@pytest.mark.parametrize(
"decimal_type",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype, cudf.Decimal128Dtype],
)
def test_parquet_decimal_precision(tmpdir, decimal_type):
df = cudf.DataFrame({"val": ["3.5", "4.2"]}).astype(decimal_type(5, 2))
assert df.val.dtype.precision == 5
fname = tmpdir.join("decimal_test.parquet")
df.to_parquet(fname)
df = cudf.read_parquet(fname)
assert df.val.dtype.precision == 5
def test_parquet_decimal_precision_empty(tmpdir):
df = (
cudf.DataFrame({"val": ["3.5", "4.2"]})
.astype(cudf.Decimal64Dtype(5, 2))
.iloc[:0]
)
assert df.val.dtype.precision == 5
fname = tmpdir.join("decimal_test.parquet")
df.to_parquet(fname)
df = cudf.read_parquet(fname)
assert df.val.dtype.precision == 5
def test_parquet_reader_brotli(datadir):
fname = datadir / "brotli_int16.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname).to_pandas(nullable=True)
assert_eq(expect, got)
def test_parquet_reader_one_level_list(datadir):
fname = datadir / "one_level_list.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname).to_pandas(nullable=True)
assert_eq(expect, got)
def test_parquet_reader_binary_decimal(datadir):
fname = datadir / "binary_decimal.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname).to_pandas()
assert_eq(expect, got)
def test_parquet_reader_fixed_bin(datadir):
fname = datadir / "fixed_len_byte_array.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
def test_parquet_reader_rle_boolean(datadir):
fname = datadir / "rle_boolean_encoding.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got)
# testing a specific bug-fix/edge case.
# specifically: in a parquet file containing a particular way of representing
# a list column in a schema, the cudf reader was confusing
# nesting information between a list column and a subsequent
# string column, ultimately causing a crash.
def test_parquet_reader_one_level_list2(datadir):
# we are reading in a file containing binary types, but cudf returns
# those as strings. so we have to massage the pandas data to get
# them to compare correctly.
def postprocess(val):
if isinstance(val, bytes):
return val.decode()
elif isinstance(val, np.ndarray):
return np.array([v.decode() for v in val])
else:
return val
fname = datadir / "one_level_list2.parquet"
expect = pd.read_parquet(fname)
expect = expect.applymap(postprocess)
got = cudf.read_parquet(fname)
assert_eq(expect, got, check_dtype=False)
# testing a specific bug-fix/edge case.
# specifically: in a parquet file containing a particular way of representing
# a list column in a schema, the cudf reader was confusing
# nesting information and building a list of list of int instead
# of a list of int
def test_parquet_reader_one_level_list3(datadir):
fname = datadir / "one_level_list3.parquet"
expect = pd.read_parquet(fname)
got = cudf.read_parquet(fname)
assert_eq(expect, got, check_dtype=True)
@pytest.mark.parametrize("size_bytes", [4_000_000, 1_000_000, 600_000])
@pytest.mark.parametrize("size_rows", [1_000_000, 100_000, 10_000])
def test_to_parquet_row_group_size(
tmpdir, large_int64_gdf, size_bytes, size_rows
):
fname = tmpdir.join("row_group_size.parquet")
large_int64_gdf.to_parquet(
fname, row_group_size_bytes=size_bytes, row_group_size_rows=size_rows
)
num_rows, row_groups, col_names = cudf.io.read_parquet_metadata(fname)
# 8 bytes per row, as the column is int64
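    # Whichever limit is stricter produces more row groups, hence the max()
    # of the two ceilings; e.g. 100_000 int64 rows with size_rows=10_000 and
    # size_bytes=600_000 give max(10, 2) = 10 row groups.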
    expected_num_row_groups = max(
        math.ceil(num_rows / size_rows), math.ceil(8 * num_rows / size_bytes)
    )
    assert expected_num_row_groups == row_groups
def test_parquet_reader_decimal_columns():
df = cudf.DataFrame(
{
"col1": cudf.Series([1, 2, 3], dtype=cudf.Decimal64Dtype(10, 2)),
"col2": [10, 11, 12],
"col3": [12, 13, 14],
"col4": ["a", "b", "c"],
}
)
buffer = BytesIO()
df.to_parquet(buffer)
actual = cudf.read_parquet(buffer, columns=["col3", "col2", "col1"])
expected = pd.read_parquet(buffer, columns=["col3", "col2", "col1"])
assert_eq(actual, expected)
def test_parquet_reader_zstd_compression(datadir):
fname = datadir / "spark_zstd.parquet"
try:
df = cudf.read_parquet(fname)
pdf = pd.read_parquet(fname)
assert_eq(df, pdf)
except RuntimeError:
pytest.mark.xfail(reason="zstd support is not enabled")
def test_read_parquet_multiple_files(tmpdir):
df_1_path = tmpdir / "df_1.parquet"
df_2_path = tmpdir / "df_2.parquet"
df_1 = cudf.DataFrame({"id": range(100), "a": [1] * 100})
df_1.to_parquet(df_1_path)
df_2 = cudf.DataFrame({"id": range(200, 2200), "a": [2] * 2000})
df_2.to_parquet(df_2_path)
expected = pd.read_parquet([df_1_path, df_2_path])
actual = cudf.read_parquet([df_1_path, df_2_path])
assert_eq(expected, actual)
expected = pd.read_parquet([df_2_path, df_1_path])
actual = cudf.read_parquet([df_2_path, df_1_path])
assert_eq(expected, actual)
@pytest.mark.parametrize("index", [True, False, None])
@pytest.mark.parametrize("columns", [None, [], ["b", "a"]])
def test_parquet_columns_and_index_param(index, columns):
buffer = BytesIO()
df = cudf.DataFrame({"a": [1, 2, 3], "b": ["a", "b", "c"]})
df.to_parquet(buffer, index=index)
expected = pd.read_parquet(buffer, columns=columns)
got = cudf.read_parquet(buffer, columns=columns)
assert_eq(expected, got, check_index_type=True)
@pytest.mark.parametrize("columns", [None, ["b", "a"]])
def test_parquet_columns_and_range_index(columns):
buffer = BytesIO()
df = cudf.DataFrame(
{"a": [1, 2, 3], "b": ["a", "b", "c"]}, index=pd.RangeIndex(2, 5)
)
df.to_parquet(buffer)
expected = pd.read_parquet(buffer, columns=columns)
got = cudf.read_parquet(buffer, columns=columns)
assert_eq(expected, got, check_index_type=True)
def test_parquet_nested_struct_list():
buffer = BytesIO()
data = {
"payload": {
"Domain": {
"Name": "abc",
"Id": {"Name": "host", "Value": "127.0.0.8"},
},
"StreamId": "12345678",
"Duration": 10,
"Offset": 12,
"Resource": [{"Name": "ZoneName", "Value": "RAPIDS"}],
}
}
df = cudf.DataFrame({"a": cudf.Series(data)})
df.to_parquet(buffer)
expected = pd.read_parquet(buffer)
actual = cudf.read_parquet(buffer)
assert_eq(expected, actual)
assert_eq(actual.a.dtype, df.a.dtype)
def test_parquet_writer_zstd():
size = 12345
expected = cudf.DataFrame(
{
"a": np.arange(0, stop=size, dtype="float64"),
"b": np.random.choice(list("abcd"), size=size),
"c": np.random.choice(np.arange(4), size=size),
}
)
buff = BytesIO()
try:
expected.to_parquet(buff, compression="ZSTD")
except RuntimeError:
pytest.mark.xfail(reason="Newer nvCOMP version is required")
else:
got = pd.read_parquet(buff)
assert_eq(expected, got)
def test_parquet_writer_time_delta_physical_type():
df = cudf.DataFrame(
{
"s": cudf.Series([1], dtype="timedelta64[s]"),
"ms": cudf.Series([2], dtype="timedelta64[ms]"),
"us": cudf.Series([3], dtype="timedelta64[us]"),
# 4K because Pandas/pyarrow don't support non-zero nanoseconds
# in Parquet files
"ns": cudf.Series([4000], dtype="timedelta64[ns]"),
}
)
buffer = BytesIO()
df.to_parquet(buffer)
got = pd.read_parquet(buffer)
expected = pd.DataFrame(
{
"s": ["00:00:01"],
"ms": ["00:00:00.002000"],
"us": ["00:00:00.000003"],
"ns": ["00:00:00.000004"],
},
dtype="str",
)
assert_eq(got.astype("str"), expected)
def test_parquet_roundtrip_time_delta():
num_rows = 12345
df = cudf.DataFrame(
{
"s": cudf.Series(
random.sample(range(0, 200000), num_rows),
dtype="timedelta64[s]",
),
"ms": cudf.Series(
random.sample(range(0, 200000), num_rows),
dtype="timedelta64[ms]",
),
"us": cudf.Series(
random.sample(range(0, 200000), num_rows),
dtype="timedelta64[us]",
),
"ns": cudf.Series(
random.sample(range(0, 200000), num_rows),
dtype="timedelta64[ns]",
),
}
)
buffer = BytesIO()
df.to_parquet(buffer)
assert_eq(df, cudf.read_parquet(buffer))
def test_parquet_reader_malformed_file(datadir):
fname = datadir / "nested-unsigned-malformed.parquet"
# expect a failure when reading the whole file
with pytest.raises(RuntimeError):
cudf.read_parquet(fname)
def test_parquet_reader_unsupported_page_encoding(datadir):
fname = datadir / "delta_encoding.parquet"
# expect a failure when reading the whole file
with pytest.raises(RuntimeError):
cudf.read_parquet(fname)
def test_parquet_reader_detect_bad_dictionary(datadir):
fname = datadir / "bad_dict.parquet"
# expect a failure when reading the whole file
with pytest.raises(RuntimeError):
cudf.read_parquet(fname)
@pytest.mark.parametrize("data", [{"a": [1, 2, 3, 4]}, {"b": [1, None, 2, 3]}])
@pytest.mark.parametrize("force_nullable_schema", [True, False])
def test_parquet_writer_schema_nullability(data, force_nullable_schema):
df = cudf.DataFrame(data)
file_obj = BytesIO()
df.to_parquet(file_obj, force_nullable_schema=force_nullable_schema)
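    # The field should end up nullable either when explicitly forced or when
    # the data actually contains nulls.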
assert pa.parquet.read_schema(file_obj).field(0).nullable == (
force_nullable_schema or df.isnull().any().any()
)
def test_parquet_read_filter_and_project():
# Filter on columns that are not included
# in the current column projection
with BytesIO() as buffer:
# Write parquet data
df = cudf.DataFrame(
{
"a": [1, 2, 3, 4, 5] * 10,
"b": [0, 1, 2, 3, 4] * 10,
"c": range(50),
"d": [6, 7] * 25,
"e": [8, 9] * 25,
}
)
df.to_parquet(buffer)
# Read back with filter and projection
columns = ["b"]
filters = [[("a", "==", 5), ("c", ">", 20)]]
got = cudf.read_parquet(buffer, columns=columns, filters=filters)
# Check result
expected = df[(df.a == 5) & (df.c > 20)][columns].reset_index(drop=True)
assert_eq(got, expected)
def test_parquet_reader_multiindex():
expected = pd.DataFrame(
{"A": [1, 2, 3]},
index=pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)]),
)
file_obj = BytesIO()
expected.to_parquet(file_obj, engine="pyarrow")
with pytest.warns(UserWarning):
actual = cudf.read_parquet(file_obj, engine="pyarrow")
assert_eq(actual, expected)
def test_parquet_reader_engine_error():
with pytest.raises(ValueError):
cudf.read_parquet(BytesIO(), engine="abc")
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_dataframe.py
|
# Copyright (c) 2018-2023, NVIDIA CORPORATION.
import array as arr
import datetime
import decimal
import io
import operator
import random
import re
import string
import textwrap
import warnings
from collections import OrderedDict, defaultdict, namedtuple
from copy import copy
import cupy
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
from numba import cuda
from packaging import version
import cudf
from cudf.core._compat import (
PANDAS_GE_134,
PANDAS_GE_150,
PANDAS_GE_200,
PANDAS_LT_140,
)
from cudf.core.buffer.spill_manager import get_global_manager
from cudf.core.column import column
from cudf.testing import _utils as utils
from cudf.testing._utils import (
ALL_TYPES,
DATETIME_TYPES,
NUMERIC_TYPES,
_create_cudf_series_float64_default,
assert_eq,
assert_exceptions_equal,
assert_neq,
does_not_raise,
expect_warning_if,
gen_rand,
)
pytest_xfail = pytest.mark.xfail
pytestmark = pytest.mark.spilling
# Use this to "unmark" the module level spilling mark
pytest_unmark_spilling = pytest.mark.skipif(
get_global_manager() is not None, reason="unmarked spilling"
)
# If spilling is enabled globally, we skip many test permutations
# to reduce running time.
if get_global_manager() is not None:
ALL_TYPES = ["float32"] # noqa: F811
DATETIME_TYPES = ["datetime64[ms]"] # noqa: F811
NUMERIC_TYPES = ["float32"] # noqa: F811
# To save time, we skip tests marked "xfail"
pytest_xfail = pytest.mark.skipif
def test_init_via_list_of_tuples():
data = [
(5, "cats", "jump", np.nan),
(2, "dogs", "dig", 7.5),
(3, "cows", "moo", -2.1, "occasionally"),
]
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)
assert_eq(pdf, gdf)
@pytest.mark.parametrize("columns", [["a", "b"], pd.Series(["a", "b"])])
def test_init_via_list_of_series(columns):
data = [pd.Series([1, 2]), pd.Series([3, 4])]
pdf = cudf.DataFrame(data, columns=columns)
gdf = cudf.DataFrame(data, columns=columns)
assert_eq(pdf, gdf)
@pytest.mark.parametrize("index", [None, [0, 1, 2]])
def test_init_with_missing_columns(index):
"""Test initialization when columns and data keys are disjoint."""
data = {"a": [1, 2, 3], "b": [2, 3, 4]}
columns = ["c", "d"]
pdf = cudf.DataFrame(data, columns=columns, index=index)
gdf = cudf.DataFrame(data, columns=columns, index=index)
assert_eq(pdf, gdf)
def _dataframe_na_data():
return [
pd.DataFrame(
{
"a": [0, 1, 2, np.nan, 4, None, 6],
"b": [np.nan, None, "u", "h", "d", "a", "m"],
},
index=["q", "w", "e", "r", "t", "y", "u"],
),
pd.DataFrame({"a": [0, 1, 2, 3, 4], "b": ["a", "b", "u", "h", "d"]}),
pd.DataFrame(
{
"a": [None, None, np.nan, None],
"b": [np.nan, None, np.nan, None],
}
),
pd.DataFrame({"a": []}),
pd.DataFrame({"a": [np.nan], "b": [None]}),
pd.DataFrame({"a": ["a", "b", "c", None, "e"]}),
pd.DataFrame({"a": ["a", "b", "c", "d", "e"]}),
]
@pytest.mark.parametrize(
"rows",
[
pytest.param(
0,
marks=pytest.mark.xfail(
not PANDAS_GE_200, reason=".column returns Index[object]"
),
),
1,
2,
100,
],
)
def test_init_via_list_of_empty_tuples(rows):
data = [()] * rows
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)
assert_eq(
pdf,
gdf,
check_like=True,
check_index_type=False,
)
@pytest.mark.parametrize(
"dict_of_series",
[
{"a": pd.Series([1.0, 2.0, 3.0])},
{"a": pd.Series([1.0, 2.0, 3.0], index=[4, 5, 6])},
{
"a": pd.Series([1.0, 2.0, 3.0], index=[4, 5, 6]),
"b": pd.Series([1.0, 2.0, 4.0], index=[1, 2, 3]),
},
{"a": [1, 2, 3], "b": pd.Series([1.0, 2.0, 3.0], index=[4, 5, 6])},
{
"a": pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"]),
"b": pd.Series([1.0, 2.0, 4.0], index=["c", "d", "e"]),
},
{
"a": pd.Series(
["a", "b", "c"],
index=pd.MultiIndex.from_tuples([(1, 2), (1, 3), (2, 3)]),
),
"b": pd.Series(
["a", " b", "d"],
index=pd.MultiIndex.from_tuples([(1, 2), (1, 3), (2, 3)]),
),
},
],
)
def test_init_from_series_align(dict_of_series):
pdf = pd.DataFrame(dict_of_series)
gdf = cudf.DataFrame(dict_of_series)
assert_eq(pdf, gdf)
for key in dict_of_series:
if isinstance(dict_of_series[key], pd.Series):
dict_of_series[key] = cudf.Series(dict_of_series[key])
gdf = cudf.DataFrame(dict_of_series)
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
("dict_of_series", "expectation"),
[
(
{
"a": pd.Series(["a", "b", "c"], index=[4, 4, 5]),
"b": pd.Series(["a", "b", "c"], index=[4, 5, 6]),
},
pytest.raises(
ValueError, match="Cannot align indices with non-unique values"
),
),
(
{
"a": pd.Series(["a", "b", "c"], index=[4, 4, 5]),
"b": pd.Series(["a", "b", "c"], index=[4, 4, 5]),
},
does_not_raise(),
),
],
)
def test_init_from_series_align_nonunique(dict_of_series, expectation):
with expectation:
gdf = cudf.DataFrame(dict_of_series)
if expectation == does_not_raise():
pdf = pd.DataFrame(dict_of_series)
assert_eq(pdf, gdf)
def test_init_unaligned_with_index():
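    # Neither column's index overlaps the frame index [7, 8, 9], so both
    # columns align to missing values in the result.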
pdf = pd.DataFrame(
{
"a": pd.Series([1.0, 2.0, 3.0], index=[4, 5, 6]),
"b": pd.Series([1.0, 2.0, 3.0], index=[1, 2, 3]),
},
index=[7, 8, 9],
)
gdf = cudf.DataFrame(
{
"a": cudf.Series([1.0, 2.0, 3.0], index=[4, 5, 6]),
"b": cudf.Series([1.0, 2.0, 3.0], index=[1, 2, 3]),
},
index=[7, 8, 9],
)
assert_eq(pdf, gdf, check_dtype=False)
def test_init_series_list_columns_unsort():
pseries = [
pd.Series(i, index=["b", "a", "c"], name=str(i)) for i in range(3)
]
gseries = [
cudf.Series(i, index=["b", "a", "c"], name=str(i)) for i in range(3)
]
pdf = pd.DataFrame(pseries)
gdf = cudf.DataFrame(gseries)
assert_eq(pdf, gdf)
def test_series_basic():
# Make series from buffer
a1 = np.arange(10, dtype=np.float64)
series = cudf.Series(a1)
assert len(series) == 10
np.testing.assert_equal(series.to_numpy(), np.hstack([a1]))
def test_series_from_cupy_scalars():
data = [0.1, 0.2, 0.3]
data_np = np.array(data)
data_cp = cupy.array(data)
s_np = cudf.Series([data_np[0], data_np[2]])
s_cp = cudf.Series([data_cp[0], data_cp[2]])
assert_eq(s_np, s_cp)
@pytest.mark.parametrize("a", [[1, 2, 3], [1, 10, 30]])
@pytest.mark.parametrize("b", [[4, 5, 6], [-11, -100, 30]])
def test_append_index(a, b):
df = pd.DataFrame()
df["a"] = a
df["b"] = b
gdf = cudf.DataFrame()
gdf["a"] = a
gdf["b"] = b
    # Check the default index after appending two columns (Series)
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = df.a.append(df.b)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.a.append(gdf.b)
assert len(expected) == len(actual)
assert_eq(expected.index, actual.index)
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = df.a.append(df.b, ignore_index=True)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.a.append(gdf.b, ignore_index=True)
assert len(expected) == len(actual)
assert_eq(expected.index, actual.index)
@pytest.mark.parametrize(
"data",
[
{"a": [1, 2]},
{"a": [1, 2, 3], "b": [3, 4, 5]},
{"a": [1, 2, 3, 4], "b": [3, 4, 5, 6], "c": [1, 3, 5, 7]},
{"a": [np.nan, 2, 3, 4], "b": [3, 4, np.nan, 6], "c": [1, 3, 5, 7]},
{1: [1, 2, 3], 2: [3, 4, 5]},
{"a": [1, None, None], "b": [3, np.nan, np.nan]},
{1: ["a", "b", "c"], 2: ["q", "w", "u"]},
{1: ["a", np.nan, "c"], 2: ["q", None, "u"]},
pytest.param(
{},
marks=pytest_xfail(
reason="https://github.com/rapidsai/cudf/issues/11080"
),
),
pytest.param(
{1: [], 2: [], 3: []},
marks=pytest_xfail(
condition=not PANDAS_GE_150,
reason="https://github.com/rapidsai/cudf/issues/11080",
),
),
pytest.param(
[1, 2, 3],
marks=pytest_xfail(
condition=not PANDAS_GE_150,
reason="https://github.com/rapidsai/cudf/issues/11080",
),
),
],
)
def test_axes(data):
csr = cudf.DataFrame(data)
psr = pd.DataFrame(data)
expected = psr.axes
actual = csr.axes
for e, a in zip(expected, actual):
assert_eq(e, a)
def test_dataframe_truncate_axis_0():
df = cudf.DataFrame(
{
"A": ["a", "b", "c", "d", "e"],
"B": ["f", "g", "h", "i", "j"],
"C": ["k", "l", "m", "n", "o"],
},
index=[1, 2, 3, 4, 5],
)
pdf = df.to_pandas()
expected = pdf.truncate(before=2, after=4, axis="index")
actual = df.truncate(before=2, after=4, axis="index")
assert_eq(actual, expected)
expected = pdf.truncate(before=1, after=4, axis=0)
actual = df.truncate(before=1, after=4, axis=0)
assert_eq(expected, actual)
def test_dataframe_truncate_axis_1():
df = cudf.DataFrame(
{
"A": ["a", "b", "c", "d", "e"],
"B": ["f", "g", "h", "i", "j"],
"C": ["k", "l", "m", "n", "o"],
},
index=[1, 2, 3, 4, 5],
)
pdf = df.to_pandas()
expected = pdf.truncate(before="A", after="B", axis="columns")
actual = df.truncate(before="A", after="B", axis="columns")
assert_eq(actual, expected)
expected = pdf.truncate(before="A", after="B", axis=1)
actual = df.truncate(before="A", after="B", axis=1)
assert_eq(actual, expected)
def test_dataframe_truncate_datetimeindex():
dates = cudf.date_range(
"2021-01-01 23:45:00", "2021-01-01 23:46:00", freq="s"
)
df = cudf.DataFrame(data={"A": 1, "B": 2}, index=dates)
pdf = df.to_pandas()
expected = pdf.truncate(
before="2021-01-01 23:45:18", after="2021-01-01 23:45:27"
)
actual = df.truncate(
before="2021-01-01 23:45:18", after="2021-01-01 23:45:27"
)
assert_eq(actual, expected)
def test_series_init_none():
# test for creating empty series
# 1: without initializing
sr1 = cudf.Series()
got = sr1.to_string()
expect = repr(sr1.to_pandas())
assert got == expect
# 2: Using `None` as an initializer
sr2 = cudf.Series(None)
got = sr2.to_string()
expect = repr(sr2.to_pandas())
assert got == expect
def test_dataframe_basic():
np.random.seed(0)
df = cudf.DataFrame()
# Populate with cuda memory
df["keys"] = np.arange(10, dtype=np.float64)
np.testing.assert_equal(df["keys"].to_numpy(), np.arange(10))
assert len(df) == 10
# Populate with numpy array
rnd_vals = np.random.random(10)
df["vals"] = rnd_vals
np.testing.assert_equal(df["vals"].to_numpy(), rnd_vals)
assert len(df) == 10
assert tuple(df.columns) == ("keys", "vals")
# Make another dataframe
df2 = cudf.DataFrame()
df2["keys"] = np.array([123], dtype=np.float64)
df2["vals"] = np.array([321], dtype=np.float64)
# Concat
df = cudf.concat([df, df2])
assert len(df) == 11
hkeys = np.asarray(np.arange(10, dtype=np.float64).tolist() + [123])
hvals = np.asarray(rnd_vals.tolist() + [321])
np.testing.assert_equal(df["keys"].to_numpy(), hkeys)
np.testing.assert_equal(df["vals"].to_numpy(), hvals)
# As matrix
mat = df.values_host
expect = np.vstack([hkeys, hvals]).T
np.testing.assert_equal(mat, expect)
# test dataframe with tuple name
df_tup = cudf.DataFrame()
data = np.arange(10)
df_tup[(1, "foobar")] = data
np.testing.assert_equal(data, df_tup[(1, "foobar")].to_numpy())
df = cudf.DataFrame(pd.DataFrame({"a": [1, 2, 3], "c": ["a", "b", "c"]}))
pdf = pd.DataFrame(pd.DataFrame({"a": [1, 2, 3], "c": ["a", "b", "c"]}))
assert_eq(df, pdf)
gdf = cudf.DataFrame({"id": [0, 1], "val": [None, None]})
gdf["val"] = gdf["val"].astype("int")
assert gdf["val"].isnull().all()
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "c": range(1, 11)},
index=pd.Index(
["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"],
name="custom_name",
),
),
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}
),
],
)
@pytest.mark.parametrize(
"columns",
[["a"], ["b"], "a", "b", ["a", "b"]],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_dataframe_drop_columns(pdf, columns, inplace):
pdf = pdf.copy()
gdf = cudf.from_pandas(pdf)
expected = pdf.drop(columns=columns, inplace=inplace)
actual = gdf.drop(columns=columns, inplace=inplace)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "c": range(1, 11)},
index=pd.Index(list(range(10)), name="custom_name"),
),
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}
),
],
)
@pytest.mark.parametrize(
"labels",
[
[1],
[0],
1,
5,
[5, 9],
pd.Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
pd.Index([0, 1, 8, 9], name="new name"),
],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_dataframe_drop_labels_axis_0(pdf, labels, inplace):
pdf = pdf.copy()
gdf = cudf.from_pandas(pdf)
expected = pdf.drop(labels=labels, axis=0, inplace=inplace)
actual = gdf.drop(labels=labels, axis=0, inplace=inplace)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame({"a": range(10), "b": range(10, 20), "c": range(1, 11)}),
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}
),
pd.DataFrame(
{
"a": range(10),
"b": range(10, 20),
},
index=pd.Index(list(range(10)), dtype="uint64"),
),
],
)
@pytest.mark.parametrize(
"index",
[[1], [0], 1, 5, [5, 9], pd.Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_dataframe_drop_index(pdf, index, inplace):
pdf = pdf.copy()
gdf = cudf.from_pandas(pdf)
expected = pdf.drop(index=index, inplace=inplace)
actual = gdf.drop(index=index, inplace=inplace)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5},
index=pd.MultiIndex(
levels=[
["lama", "cow", "falcon"],
["speed", "weight", "length"],
],
codes=[
[0, 0, 0, 1, 1, 1, 2, 2, 2, 1],
[0, 1, 2, 0, 1, 2, 0, 1, 2, 1],
],
),
)
],
)
@pytest.mark.parametrize(
"index,level",
[
("cow", 0),
("lama", 0),
("falcon", 0),
("speed", 1),
("weight", 1),
("length", 1),
("cow", None),
(
"lama",
None,
),
(
"falcon",
None,
),
],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_dataframe_drop_multiindex(pdf, index, level, inplace):
pdf = pdf.copy()
gdf = cudf.from_pandas(pdf)
expected = pdf.drop(index=index, inplace=inplace, level=level)
actual = gdf.drop(index=index, inplace=inplace, level=level)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame({"a": range(10), "b": range(10, 20), "c": range(1, 11)}),
pd.DataFrame(
{"a": range(10), "b": range(10, 20), "d": ["a", "v"] * 5}
),
],
)
@pytest.mark.parametrize(
"labels",
[["a"], ["b"], "a", "b", ["a", "b"]],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_dataframe_drop_labels_axis_1(pdf, labels, inplace):
pdf = pdf.copy()
gdf = cudf.from_pandas(pdf)
expected = pdf.drop(labels=labels, axis=1, inplace=inplace)
actual = gdf.drop(labels=labels, axis=1, inplace=inplace)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
def test_dataframe_drop_error():
df = cudf.DataFrame({"a": [1], "b": [2], "c": [3]})
pdf = df.to_pandas()
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([], {"columns": "d"}),
rfunc_args_and_kwargs=([], {"columns": "d"}),
)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([], {"columns": ["a", "d", "b"]}),
rfunc_args_and_kwargs=([], {"columns": ["a", "d", "b"]}),
)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=(["a"], {"columns": "a", "axis": 1}),
rfunc_args_and_kwargs=(["a"], {"columns": "a", "axis": 1}),
)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([], {"axis": 1}),
rfunc_args_and_kwargs=([], {"axis": 1}),
)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([[2, 0]],),
rfunc_args_and_kwargs=([[2, 0]],),
)
def test_dataframe_swaplevel_axis_0():
midx = cudf.MultiIndex(
levels=[
["Work"],
["Final exam", "Coursework"],
["History", "Geography"],
["January", "February", "March", "April"],
],
codes=[[0, 0, 0, 0], [0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 2, 3]],
names=["a", "b", "c", "d"],
)
cdf = cudf.DataFrame(
{
"Grade": ["A", "B", "A", "C"],
"Percentage": ["95", "85", "95", "75"],
},
index=midx,
)
pdf = cdf.to_pandas()
assert_eq(pdf.swaplevel(), cdf.swaplevel())
assert_eq(pdf.swaplevel(), cdf.swaplevel(-2, -1, 0))
assert_eq(pdf.swaplevel(1, 2), cdf.swaplevel(1, 2))
assert_eq(cdf.swaplevel(2, 1), cdf.swaplevel(1, 2))
assert_eq(pdf.swaplevel(-1, -3), cdf.swaplevel(-1, -3))
assert_eq(pdf.swaplevel("a", "b", 0), cdf.swaplevel("a", "b", 0))
assert_eq(cdf.swaplevel("a", "b"), cdf.swaplevel("b", "a"))
def test_dataframe_swaplevel_TypeError():
cdf = cudf.DataFrame(
{"a": [1, 2, 3], "c": [10, 20, 30]}, index=["x", "y", "z"]
)
with pytest.raises(TypeError):
cdf.swaplevel()
def test_dataframe_swaplevel_axis_1():
midx = cudf.MultiIndex(
levels=[
["b", "a"],
["bb", "aa"],
["bbb", "aaa"],
],
codes=[[0, 0, 1, 1], [0, 1, 0, 1], [0, 1, 0, 1]],
names=[None, "a", "b"],
)
cdf = cudf.DataFrame(
data=[[45, 30, 100, 90], [200, 100, 50, 80]],
columns=midx,
)
pdf = cdf.to_pandas()
assert_eq(pdf.swaplevel(1, 2, 1), cdf.swaplevel(1, 2, 1))
assert_eq(pdf.swaplevel("a", "b", 1), cdf.swaplevel("a", "b", 1))
assert_eq(cdf.swaplevel(2, 1, 1), cdf.swaplevel(1, 2, 1))
assert_eq(pdf.swaplevel(0, 2, 1), cdf.swaplevel(0, 2, 1))
assert_eq(pdf.swaplevel(2, 0, 1), cdf.swaplevel(2, 0, 1))
assert_eq(cdf.swaplevel("a", "a", 1), cdf.swaplevel("b", "b", 1))
def test_dataframe_drop_raises():
df = cudf.DataFrame(
{"a": [1, 2, 3], "c": [10, 20, 30]}, index=["x", "y", "z"]
)
pdf = df.to_pandas()
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=(["p"],),
rfunc_args_and_kwargs=(["p"],),
)
# label dtype mismatch
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([3],),
rfunc_args_and_kwargs=([3],),
)
expect = pdf.drop("p", errors="ignore")
actual = df.drop("p", errors="ignore")
assert_eq(actual, expect)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([], {"columns": "p"}),
rfunc_args_and_kwargs=([], {"columns": "p"}),
)
expect = pdf.drop(columns="p", errors="ignore")
actual = df.drop(columns="p", errors="ignore")
assert_eq(actual, expect)
assert_exceptions_equal(
lfunc=pdf.drop,
rfunc=df.drop,
lfunc_args_and_kwargs=([], {"labels": "p", "axis": 1}),
rfunc_args_and_kwargs=([], {"labels": "p", "axis": 1}),
)
expect = pdf.drop(labels="p", axis=1, errors="ignore")
actual = df.drop(labels="p", axis=1, errors="ignore")
assert_eq(actual, expect)
def test_dataframe_column_add_drop_via_setitem():
df = cudf.DataFrame()
data = np.asarray(range(10))
df["a"] = data
df["b"] = data
assert tuple(df.columns) == ("a", "b")
del df["a"]
assert tuple(df.columns) == ("b",)
df["c"] = data
assert tuple(df.columns) == ("b", "c")
df["a"] = data
assert tuple(df.columns) == ("b", "c", "a")
def test_dataframe_column_set_via_attr():
data_0 = np.asarray([0, 2, 4, 5])
data_1 = np.asarray([1, 4, 2, 3])
data_2 = np.asarray([2, 0, 3, 0])
df = cudf.DataFrame({"a": data_0, "b": data_1, "c": data_2})
for i in range(10):
df.c = df.a
assert assert_eq(df.c, df.a, check_names=False)
assert tuple(df.columns) == ("a", "b", "c")
df.c = df.b
assert assert_eq(df.c, df.b, check_names=False)
assert tuple(df.columns) == ("a", "b", "c")
def test_dataframe_column_drop_via_attr():
df = cudf.DataFrame({"a": []})
with pytest.raises(AttributeError):
del df.a
assert tuple(df.columns) == tuple("a")
@pytest.mark.parametrize("axis", [0, "index"])
def test_dataframe_index_rename(axis):
pdf = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
gdf = cudf.DataFrame.from_pandas(pdf)
expect = pdf.rename(mapper={1: 5, 2: 6}, axis=axis)
got = gdf.rename(mapper={1: 5, 2: 6}, axis=axis)
assert_eq(expect, got)
expect = pdf.rename(index={1: 5, 2: 6})
got = gdf.rename(index={1: 5, 2: 6})
assert_eq(expect, got)
expect = pdf.rename({1: 5, 2: 6})
got = gdf.rename({1: 5, 2: 6})
assert_eq(expect, got)
# `pandas` can support indexes with mixed values. We throw a
# `NotImplementedError`.
with pytest.raises(NotImplementedError):
gdf.rename(mapper={1: "x", 2: "y"}, axis=axis)
def test_dataframe_MI_rename():
gdf = cudf.DataFrame(
{"a": np.arange(10), "b": np.arange(10), "c": np.arange(10)}
)
gdg = gdf.groupby(["a", "b"]).count()
pdg = gdg.to_pandas()
expect = pdg.rename(mapper={1: 5, 2: 6}, axis=0)
got = gdg.rename(mapper={1: 5, 2: 6}, axis=0)
assert_eq(expect, got)
@pytest.mark.parametrize("axis", [1, "columns"])
def test_dataframe_column_rename(axis):
pdf = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
gdf = cudf.DataFrame.from_pandas(pdf)
expect = pdf.rename(mapper=lambda name: 2 * name, axis=axis)
got = gdf.rename(mapper=lambda name: 2 * name, axis=axis)
assert_eq(expect, got)
expect = pdf.rename(columns=lambda name: 2 * name)
got = gdf.rename(columns=lambda name: 2 * name)
assert_eq(expect, got)
rename_mapper = {"a": "z", "b": "y", "c": "x"}
expect = pdf.rename(columns=rename_mapper)
got = gdf.rename(columns=rename_mapper)
assert_eq(expect, got)
def test_dataframe_pop():
pdf = pd.DataFrame(
{"a": [1, 2, 3], "b": ["x", "y", "z"], "c": [7.0, 8.0, 9.0]}
)
gdf = cudf.DataFrame.from_pandas(pdf)
# Test non-existing column error
with pytest.raises(KeyError) as raises:
gdf.pop("fake_colname")
raises.match("fake_colname")
# check pop numeric column
pdf_pop = pdf.pop("a")
gdf_pop = gdf.pop("a")
assert_eq(pdf_pop, gdf_pop)
assert_eq(pdf, gdf)
# check string column
pdf_pop = pdf.pop("b")
gdf_pop = gdf.pop("b")
assert_eq(pdf_pop, gdf_pop)
assert_eq(pdf, gdf)
# check float column and empty dataframe
pdf_pop = pdf.pop("c")
gdf_pop = gdf.pop("c")
assert_eq(pdf_pop, gdf_pop)
assert_eq(pdf, gdf)
# check empty dataframe edge case
empty_pdf = pd.DataFrame(columns=["a", "b"])
empty_gdf = cudf.DataFrame(columns=["a", "b"])
pb = empty_pdf.pop("b")
gb = empty_gdf.pop("b")
assert len(pb) == len(gb)
assert empty_pdf.empty and empty_gdf.empty
@pytest.mark.parametrize("nelem", [0, 3, 100, 1000])
def test_dataframe_astype(nelem):
df = cudf.DataFrame()
data = np.asarray(range(nelem), dtype=np.int32)
df["a"] = data
assert df["a"].dtype is np.dtype(np.int32)
df["b"] = df["a"].astype(np.float32)
assert df["b"].dtype is np.dtype(np.float32)
np.testing.assert_equal(df["a"].to_numpy(), df["b"].to_numpy())
def test_astype_dict():
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": ["1", "2", "3"]})
pdf = gdf.to_pandas()
assert_eq(pdf.astype({"a": "str"}), gdf.astype({"a": "str"}))
assert_eq(
pdf.astype({"a": "str", "b": np.int64}),
gdf.astype({"a": "str", "b": np.int64}),
)
@pytest.mark.parametrize("nelem", [0, 100])
def test_index_astype(nelem):
df = cudf.DataFrame()
data = np.asarray(range(nelem), dtype=np.int32)
df["a"] = data
assert df.index.dtype is np.dtype(np.int64)
df.index = df.index.astype(np.float32)
assert df.index.dtype is np.dtype(np.float32)
df["a"] = df["a"].astype(np.float32)
np.testing.assert_equal(df.index.to_numpy(), df["a"].to_numpy())
df["b"] = df["a"]
df = df.set_index("b")
df["a"] = df["a"].astype(np.int16)
df.index = df.index.astype(np.int16)
np.testing.assert_equal(df.index.to_numpy(), df["a"].to_numpy())
def test_dataframe_to_string_with_skipped_rows():
# Test skipped rows
df = cudf.DataFrame(
{"a": [1, 2, 3, 4, 5, 6], "b": [11, 12, 13, 14, 15, 16]}
)
with pd.option_context("display.max_rows", 5):
got = df.to_string()
expect = textwrap.dedent(
"""\
a b
0 1 11
1 2 12
.. .. ..
4 5 15
5 6 16
[6 rows x 2 columns]"""
)
assert got == expect
def test_dataframe_to_string_with_skipped_rows_and_columns():
# Test skipped rows and skipped columns
df = cudf.DataFrame(
{
"a": [1, 2, 3, 4, 5, 6],
"b": [11, 12, 13, 14, 15, 16],
"c": [11, 12, 13, 14, 15, 16],
"d": [11, 12, 13, 14, 15, 16],
}
)
with pd.option_context("display.max_rows", 5, "display.max_columns", 3):
got = df.to_string()
expect = textwrap.dedent(
"""\
a ... d
0 1 ... 11
1 2 ... 12
.. .. ... ..
4 5 ... 15
5 6 ... 16
[6 rows x 4 columns]"""
)
assert got == expect
def test_dataframe_to_string_with_masked_data():
# Test masked data
df = cudf.DataFrame(
{"a": [1, 2, 3, 4, 5, 6], "b": [11, 12, 13, 14, 15, 16]}
)
data = np.arange(6)
mask = np.zeros(1, dtype=cudf.utils.utils.mask_dtype)
mask[0] = 0b00101101
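    # 0b00101101 -> bits 0, 2, 3 and 5 are set, so rows 1 and 4 are null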
masked = cudf.Series.from_masked_array(data, mask)
assert masked.null_count == 2
df["c"] = masked
# Check data
values = masked.copy()
validids = [0, 2, 3, 5]
densearray = masked.dropna().to_numpy()
np.testing.assert_equal(data[validids], densearray)
# Valid position is correct
for i in validids:
assert data[i] == values[i]
# Null position is correct
for i in range(len(values)):
if i not in validids:
assert values[i] is cudf.NA
with pd.option_context("display.max_rows", 10):
got = df.to_string()
expect = textwrap.dedent(
"""\
a b c
0 1 11 0
1 2 12 <NA>
2 3 13 2
3 4 14 3
4 5 15 <NA>
5 6 16 5"""
)
assert got == expect
def test_dataframe_to_string_wide(monkeypatch):
monkeypatch.setenv("COLUMNS", "79")
# Test basic
df = cudf.DataFrame({f"a{i}": [0, 1, 2] for i in range(100)})
with pd.option_context("display.max_columns", 0):
got = df.to_string()
expect = textwrap.dedent(
"""\
a0 a1 a2 a3 a4 a5 a6 a7 ... a92 a93 a94 a95 a96 a97 a98 a99
0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0
1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 ... 2 2 2 2 2 2 2 2
[3 rows x 100 columns]""" # noqa: E501
)
assert got == expect
def test_dataframe_empty_to_string():
# Test for printing empty dataframe
df = cudf.DataFrame()
got = df.to_string()
expect = "Empty DataFrame\nColumns: []\nIndex: []"
assert got == expect
def test_dataframe_emptycolumns_to_string():
# Test for printing dataframe having empty columns
df = cudf.DataFrame()
df["a"] = []
df["b"] = []
got = df.to_string()
expect = "Empty DataFrame\nColumns: [a, b]\nIndex: []"
assert got == expect
def test_dataframe_copy():
# Test for copying the dataframe using python copy pkg
df = cudf.DataFrame()
df["a"] = [1, 2, 3]
df2 = copy(df)
df2["b"] = [4, 5, 6]
got = df.to_string()
expect = textwrap.dedent(
"""\
a
0 1
1 2
2 3"""
)
assert got == expect
def test_dataframe_copy_shallow():
# Test for copy dataframe using class method
df = cudf.DataFrame()
df["a"] = [1, 2, 3]
df2 = df.copy()
df2["b"] = [4, 2, 3]
got = df.to_string()
expect = textwrap.dedent(
"""\
a
0 1
1 2
2 3"""
)
assert got == expect
def test_dataframe_dtypes():
dtypes = pd.Series(
[np.int32, np.float32, np.float64], index=["c", "a", "b"]
)
df = cudf.DataFrame({k: np.ones(10, dtype=v) for k, v in dtypes.items()})
assert df.dtypes.equals(dtypes)
def test_dataframe_add_col_to_object_dataframe():
# Test for adding column to an empty object dataframe
cols = ["a", "b", "c"]
df = pd.DataFrame(columns=cols, dtype="str")
data = {k: v for (k, v) in zip(cols, [["a"] for _ in cols])}
gdf = cudf.DataFrame(data)
gdf = gdf[:0]
assert gdf.dtypes.equals(df.dtypes)
gdf["a"] = [1]
df["a"] = [10]
assert gdf.dtypes.equals(df.dtypes)
gdf["b"] = [1.0]
df["b"] = [10.0]
assert gdf.dtypes.equals(df.dtypes)
def test_dataframe_dir_and_getattr():
df = cudf.DataFrame(
{
"a": np.ones(10),
"b": np.ones(10),
"not an id": np.ones(10),
"oop$": np.ones(10),
}
)
o = dir(df)
assert {"a", "b"}.issubset(o)
assert "not an id" not in o
assert "oop$" not in o
# Getattr works
assert df.a.equals(df["a"])
assert df.b.equals(df["b"])
with pytest.raises(AttributeError):
df.not_a_column
def test_empty_dataframe_to_cupy():
df = cudf.DataFrame()
# Check fully empty dataframe.
mat = df.to_cupy()
assert mat.shape == (0, 0)
mat = df.to_numpy()
assert mat.shape == (0, 0)
df = cudf.DataFrame()
nelem = 123
for k in "abc":
df[k] = np.random.random(nelem)
# Check all columns in empty dataframe.
mat = df.head(0).to_cupy()
assert mat.shape == (0, 3)
def test_dataframe_to_cupy():
df = cudf.DataFrame()
nelem = 123
for k in "abcd":
df[k] = np.random.random(nelem)
# Check all columns
mat = df.to_cupy()
assert mat.shape == (nelem, 4)
assert mat.strides == (8, 984)
mat = df.to_numpy()
assert mat.shape == (nelem, 4)
assert mat.strides == (8, 984)
for i, k in enumerate(df.columns):
np.testing.assert_array_equal(df[k].to_numpy(), mat[:, i])
# Check column subset
mat = df[["a", "c"]].to_cupy().get()
assert mat.shape == (nelem, 2)
for i, k in enumerate("ac"):
np.testing.assert_array_equal(df[k].to_numpy(), mat[:, i])
def test_dataframe_to_cupy_null_values():
df = cudf.DataFrame()
nelem = 123
na = -10000
refvalues = {}
for k in "abcd":
df[k] = data = np.random.random(nelem)
bitmask = utils.random_bitmask(nelem)
df[k] = df[k]._column.set_mask(bitmask)
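        # mirror the null mask on the host: expand the packed bitmask to one
        # bool per row and place the sentinel where the column is null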
boolmask = np.asarray(
utils.expand_bits_to_bytes(bitmask)[:nelem], dtype=np.bool_
)
data[~boolmask] = na
refvalues[k] = data
# Check null value causes error
with pytest.raises(ValueError):
df.to_cupy()
with pytest.raises(ValueError):
df.to_numpy()
for k in df.columns:
df[k] = df[k].fillna(na)
mat = df.to_numpy()
for i, k in enumerate(df.columns):
np.testing.assert_array_equal(refvalues[k], mat[:, i])
def test_dataframe_append_empty():
pdf = pd.DataFrame(
{
"key": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
"value": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
}
)
gdf = cudf.DataFrame.from_pandas(pdf)
gdf["newcol"] = 100
pdf["newcol"] = 100
assert len(gdf["newcol"]) == len(pdf)
assert len(pdf["newcol"]) == len(pdf)
assert_eq(gdf, pdf)
def test_dataframe_setitem_from_masked_object():
ary = np.random.randn(100)
mask = np.zeros(100, dtype=bool)
mask[:20] = True
np.random.shuffle(mask)
ary[mask] = np.nan
test1_null = cudf.Series(ary, nan_as_null=True)
assert test1_null.nullable
assert test1_null.null_count == 20
test1_nan = cudf.Series(ary, nan_as_null=False)
assert test1_nan.null_count == 0
test2_null = cudf.DataFrame.from_pandas(
pd.DataFrame({"a": ary}), nan_as_null=True
)
assert test2_null["a"].nullable
assert test2_null["a"].null_count == 20
test2_nan = cudf.DataFrame.from_pandas(
pd.DataFrame({"a": ary}), nan_as_null=False
)
assert test2_nan["a"].null_count == 0
gpu_ary = cupy.asarray(ary)
test3_null = cudf.Series(gpu_ary, nan_as_null=True)
assert test3_null.nullable
assert test3_null.null_count == 20
test3_nan = cudf.Series(gpu_ary, nan_as_null=False)
assert test3_nan.null_count == 0
test4 = cudf.DataFrame()
lst = [1, 2, None, 4, 5, 6, None, 8, 9]
test4["lst"] = lst
assert test4["lst"].nullable
assert test4["lst"].null_count == 2
def test_dataframe_append_to_empty():
pdf = pd.DataFrame()
pdf["a"] = []
pdf["b"] = [1, 2, 3]
gdf = cudf.DataFrame()
gdf["a"] = []
gdf["b"] = [1, 2, 3]
assert_eq(gdf, pdf)
def test_dataframe_setitem_index_len1():
gdf = cudf.DataFrame()
gdf["a"] = [1]
gdf["b"] = gdf.index._values
np.testing.assert_equal(gdf.b.to_numpy(), [0])
def test_empty_dataframe_setitem_df():
gdf1 = cudf.DataFrame()
gdf2 = cudf.DataFrame({"a": [1, 2, 3, 4, 5]})
gdf1["a"] = gdf2["a"]
assert_eq(gdf1, gdf2)
def test_assign():
gdf = cudf.DataFrame({"x": [1, 2, 3]})
gdf2 = gdf.assign(y=gdf.x + 1)
assert list(gdf.columns) == ["x"]
assert list(gdf2.columns) == ["x", "y"]
np.testing.assert_equal(gdf2.y.to_numpy(), [2, 3, 4])
@pytest.mark.parametrize(
"mapping",
[
{"y": 1, "z": lambda df: df["x"] + df["y"]},
{
"x": lambda df: df["x"] * 2,
"y": lambda df: 2,
"z": lambda df: df["x"] / df["y"],
},
],
)
def test_assign_callable(mapping):
df = pd.DataFrame({"x": [1, 2, 3]})
cdf = cudf.from_pandas(df)
expect = df.assign(**mapping)
actual = cdf.assign(**mapping)
assert_eq(expect, actual)
@pytest.mark.parametrize("nrows", [1, 8, 100, 1000])
@pytest.mark.parametrize("method", ["murmur3", "md5"])
@pytest.mark.parametrize("seed", [None, 42])
def test_dataframe_hash_values(nrows, method, seed):
gdf = cudf.DataFrame()
data = np.arange(nrows)
data[0] = data[-1] # make first and last the same
gdf["a"] = data
gdf["b"] = gdf.a + 100
out = gdf.hash_values()
assert isinstance(out, cudf.Series)
assert len(out) == nrows
assert out.dtype == np.uint32
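    # Only the murmur3 method is expected to honour the seed here; other
    # methods warn that the provided seed has no effect.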
    warning_expected = seed is not None and method not in {"murmur3"}
# Check single column
if warning_expected:
with pytest.warns(
UserWarning, match="Provided seed value has no effect*"
):
out_one = gdf[["a"]].hash_values(method=method, seed=seed)
else:
out_one = gdf[["a"]].hash_values(method=method, seed=seed)
# First matches last
assert out_one.iloc[0] == out_one.iloc[-1]
# Equivalent to the cudf.Series.hash_values()
if warning_expected:
with pytest.warns(
UserWarning, match="Provided seed value has no effect*"
):
assert_eq(gdf["a"].hash_values(method=method, seed=seed), out_one)
else:
assert_eq(gdf["a"].hash_values(method=method, seed=seed), out_one)
@pytest.mark.parametrize("method", ["murmur3"])
def test_dataframe_hash_values_seed(method):
gdf = cudf.DataFrame()
data = np.arange(10)
data[0] = data[-1] # make first and last the same
gdf["a"] = data
gdf["b"] = gdf.a + 100
out_one = gdf.hash_values(method=method, seed=0)
out_two = gdf.hash_values(method=method, seed=1)
assert out_one.iloc[0] == out_one.iloc[-1]
assert out_two.iloc[0] == out_two.iloc[-1]
assert_neq(out_one, out_two)
@pytest.mark.parametrize("nrows", [3, 10, 100, 1000])
@pytest.mark.parametrize("nparts", [1, 2, 8, 13])
@pytest.mark.parametrize("nkeys", [1, 2])
def test_dataframe_hash_partition(nrows, nparts, nkeys):
np.random.seed(123)
gdf = cudf.DataFrame()
keycols = []
for i in range(nkeys):
keyname = f"key{i}"
gdf[keyname] = np.random.randint(0, 7 - i, nrows)
keycols.append(keyname)
gdf["val1"] = np.random.randint(0, nrows * 2, nrows)
got = gdf.partition_by_hash(keycols, nparts=nparts)
# Must return a list
assert isinstance(got, list)
# Must have correct number of partitions
assert len(got) == nparts
# All partitions must be DataFrame type
assert all(isinstance(p, cudf.DataFrame) for p in got)
# Check that all partitions have unique keys
part_unique_keys = set()
for p in got:
if len(p):
            # Take rows of the key columns and build a set of the key-values
unique_keys = set(map(tuple, p[keycols].values_host))
# Ensure that none of the key-values have occurred in other groups
assert not (unique_keys & part_unique_keys)
part_unique_keys |= unique_keys
assert len(part_unique_keys)
@pytest.mark.parametrize("nrows", [3, 10, 50])
def test_dataframe_hash_partition_masked_value(nrows):
gdf = cudf.DataFrame()
gdf["key"] = np.arange(nrows)
gdf["val"] = np.arange(nrows) + 100
bitmask = utils.random_bitmask(nrows)
bytemask = utils.expand_bits_to_bytes(bitmask)
gdf["val"] = gdf["val"]._column.set_mask(bitmask)
parted = gdf.partition_by_hash(["key"], nparts=3)
# Verify that the valid mask is correct
for p in parted:
df = p.to_pandas()
for row in df.itertuples():
valid = bool(bytemask[row.key])
expected_value = row.key + 100 if valid else np.nan
got_value = row.val
assert (expected_value == got_value) or (
np.isnan(expected_value) and np.isnan(got_value)
)
@pytest.mark.parametrize("nrows", [3, 10, 50])
def test_dataframe_hash_partition_masked_keys(nrows):
gdf = cudf.DataFrame()
gdf["key"] = np.arange(nrows)
gdf["val"] = np.arange(nrows) + 100
bitmask = utils.random_bitmask(nrows)
bytemask = utils.expand_bits_to_bytes(bitmask)
gdf["key"] = gdf["key"]._column.set_mask(bitmask)
parted = gdf.partition_by_hash(["key"], nparts=3, keep_index=False)
# Verify that the valid mask is correct
for p in parted:
df = p.to_pandas()
for row in df.itertuples():
valid = bool(bytemask[row.val - 100])
# val is key + 100
expected_value = row.val - 100 if valid else np.nan
got_value = row.key
assert (expected_value == got_value) or (
np.isnan(expected_value) and np.isnan(got_value)
)
@pytest.mark.parametrize("keep_index", [True, False])
def test_dataframe_hash_partition_keep_index(keep_index):
gdf = cudf.DataFrame(
{"val": [1, 2, 3, 4, 5], "key": [3, 2, 1, 4, 5]}, index=[5, 4, 3, 2, 1]
)
expected_df1 = cudf.DataFrame(
{"val": [1, 5], "key": [3, 5]}, index=[5, 1] if keep_index else None
)
expected_df2 = cudf.DataFrame(
{"val": [2, 3, 4], "key": [2, 1, 4]},
index=[4, 3, 2] if keep_index else None,
)
expected = [expected_df1, expected_df2]
parts = gdf.partition_by_hash(["key"], nparts=2, keep_index=keep_index)
for exp, got in zip(expected, parts):
assert_eq(exp, got)
def test_dataframe_hash_partition_empty():
gdf = cudf.DataFrame({"val": [1, 2], "key": [3, 2]}, index=["a", "b"])
parts = gdf.iloc[:0].partition_by_hash(["key"], nparts=3)
assert len(parts) == 3
for part in parts:
assert_eq(gdf.iloc[:0], part)
@pytest.mark.parametrize("dtype1", utils.supported_numpy_dtypes)
@pytest.mark.parametrize("dtype2", utils.supported_numpy_dtypes)
def test_dataframe_concat_different_numerical_columns(dtype1, dtype2):
df1 = pd.DataFrame(dict(x=pd.Series(np.arange(5)).astype(dtype1)))
df2 = pd.DataFrame(dict(x=pd.Series(np.arange(5)).astype(dtype2)))
if dtype1 != dtype2 and "datetime" in dtype1 or "datetime" in dtype2:
with pytest.raises(TypeError):
cudf.concat([df1, df2])
else:
pres = pd.concat([df1, df2])
gres = cudf.concat([cudf.from_pandas(df1), cudf.from_pandas(df2)])
assert_eq(pres, gres, check_dtype=False, check_index_type=True)
def test_dataframe_concat_different_column_types():
df1 = cudf.Series([42], dtype=np.float64)
df2 = cudf.Series(["a"], dtype="category")
with pytest.raises(ValueError):
cudf.concat([df1, df2])
df2 = cudf.Series(["a string"])
with pytest.raises(TypeError):
cudf.concat([df1, df2])
@pytest.mark.parametrize(
"df_1", [cudf.DataFrame({"a": [1, 2], "b": [1, 3]}), cudf.DataFrame({})]
)
@pytest.mark.parametrize(
"df_2", [cudf.DataFrame({"a": [], "b": []}), cudf.DataFrame({})]
)
def test_concat_empty_dataframe(df_1, df_2):
got = cudf.concat([df_1, df_2])
expect = pd.concat([df_1.to_pandas(), df_2.to_pandas()], sort=False)
# ignoring dtypes as pandas upcasts int to float
# on concatenation with empty dataframes
assert_eq(got, expect, check_dtype=False, check_index_type=True)
@pytest.mark.parametrize(
"df1_d",
[
{"a": [1, 2], "b": [1, 2], "c": ["s1", "s2"], "d": [1.0, 2.0]},
{"b": [1.9, 10.9], "c": ["s1", "s2"]},
{"c": ["s1"], "b": pd.Series([None], dtype="float"), "a": [False]},
],
)
@pytest.mark.parametrize(
"df2_d",
[
{"a": [1, 2, 3]},
{"a": [1, None, 3], "b": [True, True, False], "c": ["s3", None, "s4"]},
{"a": [], "b": []},
{},
],
)
def test_concat_different_column_dataframe(df1_d, df2_d):
got = cudf.concat(
[cudf.DataFrame(df1_d), cudf.DataFrame(df2_d), cudf.DataFrame(df1_d)],
sort=False,
)
pdf1 = pd.DataFrame(df1_d)
pdf2 = pd.DataFrame(df2_d)
# pandas warns when trying to concatenate any empty float columns (or float
# columns with all None values) with any non-empty bool columns.
def is_invalid_concat(left, right):
return (
pd.api.types.is_bool_dtype(left.dtype)
and pd.api.types.is_float_dtype(right.dtype)
and right.count() == 0
)
cond = any(
is_invalid_concat(pdf1[colname], pdf2[colname])
or is_invalid_concat(pdf2[colname], pdf1[colname])
for colname in set(pdf1) & set(pdf2)
)
with expect_warning_if(cond):
expect = pd.concat([pdf1, pdf2, pdf1], sort=False)
# numerical columns are upcasted to float in cudf.DataFrame.to_pandas()
# casts nan to 0 in non-float numerical columns
numeric_cols = got.dtypes[got.dtypes != "object"].index
for col in numeric_cols:
got[col] = got[col].astype(np.float64).fillna(np.nan)
assert_eq(got, expect, check_dtype=False, check_index_type=True)
@pytest.mark.parametrize(
"ser_1", [pd.Series([1, 2, 3]), pd.Series([], dtype="float64")]
)
@pytest.mark.parametrize("ser_2", [pd.Series([], dtype="float64")])
def test_concat_empty_series(ser_1, ser_2):
got = cudf.concat([cudf.Series(ser_1), cudf.Series(ser_2)])
expect = pd.concat([ser_1, ser_2])
assert_eq(got, expect, check_index_type=True)
def test_concat_with_axis():
df1 = pd.DataFrame(dict(x=np.arange(5), y=np.arange(5)))
df2 = pd.DataFrame(dict(a=np.arange(5), b=np.arange(5)))
concat_df = pd.concat([df1, df2], axis=1)
cdf1 = cudf.from_pandas(df1)
cdf2 = cudf.from_pandas(df2)
# concat only dataframes
concat_cdf = cudf.concat([cdf1, cdf2], axis=1)
assert_eq(concat_cdf, concat_df, check_index_type=True)
# concat only series
concat_s = pd.concat([df1.x, df1.y], axis=1)
cs1 = cudf.Series.from_pandas(df1.x)
cs2 = cudf.Series.from_pandas(df1.y)
concat_cdf_s = cudf.concat([cs1, cs2], axis=1)
assert_eq(concat_cdf_s, concat_s, check_index_type=True)
# concat series and dataframes
s3 = pd.Series(np.random.random(5))
cs3 = cudf.Series.from_pandas(s3)
concat_cdf_all = cudf.concat([cdf1, cs3, cdf2], axis=1)
concat_df_all = pd.concat([df1, s3, df2], axis=1)
assert_eq(concat_cdf_all, concat_df_all, check_index_type=True)
# concat manual multi index
midf1 = cudf.from_pandas(df1)
midf1.index = cudf.MultiIndex(
levels=[[0, 1, 2, 3], [0, 1]], codes=[[0, 1, 2, 3, 2], [0, 1, 0, 1, 0]]
)
midf2 = midf1[2:]
midf2.index = cudf.MultiIndex(
levels=[[3, 4, 5], [2, 0]], codes=[[0, 1, 2], [1, 0, 1]]
)
mipdf1 = midf1.to_pandas()
mipdf2 = midf2.to_pandas()
assert_eq(
cudf.concat([midf1, midf2]),
pd.concat([mipdf1, mipdf2]),
check_index_type=True,
)
assert_eq(
cudf.concat([midf2, midf1]),
pd.concat([mipdf2, mipdf1]),
check_index_type=True,
)
assert_eq(
cudf.concat([midf1, midf2, midf1]),
pd.concat([mipdf1, mipdf2, mipdf1]),
check_index_type=True,
)
# concat groupby multi index
gdf1 = cudf.DataFrame(
{
"x": np.random.randint(0, 10, 10),
"y": np.random.randint(0, 10, 10),
"z": np.random.randint(0, 10, 10),
"v": np.random.randint(0, 10, 10),
}
)
gdf2 = gdf1[5:]
gdg1 = gdf1.groupby(["x", "y"]).min()
gdg2 = gdf2.groupby(["x", "y"]).min()
pdg1 = gdg1.to_pandas()
pdg2 = gdg2.to_pandas()
assert_eq(
cudf.concat([gdg1, gdg2]),
pd.concat([pdg1, pdg2]),
check_index_type=True,
)
assert_eq(
cudf.concat([gdg2, gdg1]),
pd.concat([pdg2, pdg1]),
check_index_type=True,
)
# series multi index concat
gdgz1 = gdg1.z
gdgz2 = gdg2.z
pdgz1 = gdgz1.to_pandas()
pdgz2 = gdgz2.to_pandas()
assert_eq(
cudf.concat([gdgz1, gdgz2]),
pd.concat([pdgz1, pdgz2]),
check_index_type=True,
)
assert_eq(
cudf.concat([gdgz2, gdgz1]),
pd.concat([pdgz2, pdgz1]),
check_index_type=True,
)
@pytest.mark.parametrize("nrows", [0, 3, 10, 100, 1000])
def test_nonmatching_index_setitem(nrows):
np.random.seed(0)
gdf = cudf.DataFrame()
gdf["a"] = np.random.randint(2147483647, size=nrows)
gdf["b"] = np.random.randint(2147483647, size=nrows)
gdf = gdf.set_index("b")
test_values = np.random.randint(2147483647, size=nrows)
gdf["c"] = test_values
assert len(test_values) == len(gdf["c"])
gdf_series = cudf.Series(test_values, index=gdf.index, name="c")
assert_eq(gdf["c"].to_pandas(), gdf_series.to_pandas())
@pytest.mark.parametrize(
"dtype",
[
"int",
pytest.param(
"int64[pyarrow]",
marks=pytest.mark.skipif(
not PANDAS_GE_150, reason="pyarrow support only in >=1.5"
),
),
],
)
def test_from_pandas(dtype):
df = pd.DataFrame({"x": [1, 2, 3]}, index=[4.0, 5.0, 6.0], dtype=dtype)
df.columns.name = "custom_column_name"
gdf = cudf.DataFrame.from_pandas(df)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf, check_dtype="pyarrow" not in dtype)
s = df.x
gs = cudf.Series.from_pandas(s)
assert isinstance(gs, cudf.Series)
assert_eq(s, gs, check_dtype="pyarrow" not in dtype)
@pytest.mark.parametrize("dtypes", [int, float])
def test_from_records(dtypes):
h_ary = np.ndarray(shape=(10, 4), dtype=dtypes)
rec_ary = h_ary.view(np.recarray)
gdf = cudf.DataFrame.from_records(rec_ary, columns=["a", "b", "c", "d"])
df = pd.DataFrame.from_records(rec_ary, columns=["a", "b", "c", "d"])
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
gdf = cudf.DataFrame.from_records(rec_ary)
df = pd.DataFrame.from_records(rec_ary)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
@pytest.mark.parametrize("columns", [None, ["first", "second", "third"]])
@pytest.mark.parametrize(
"index",
[
None,
["first", "second"],
"name",
"age",
"weight",
[10, 11],
["abc", "xyz"],
],
)
def test_from_records_index(columns, index):
rec_ary = np.array(
[("Rex", 9, 81.0), ("Fido", 3, 27.0)],
dtype=[("name", "U10"), ("age", "i4"), ("weight", "f4")],
)
gdf = cudf.DataFrame.from_records(rec_ary, columns=columns, index=index)
df = pd.DataFrame.from_records(rec_ary, columns=columns, index=index)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
def test_dataframe_construction_from_cupy_arrays():
h_ary = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
d_ary = cupy.asarray(h_ary)
gdf = cudf.DataFrame(d_ary, columns=["a", "b", "c"])
df = pd.DataFrame(h_ary, columns=["a", "b", "c"])
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
gdf = cudf.DataFrame(d_ary)
df = pd.DataFrame(h_ary)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
gdf = cudf.DataFrame(d_ary, index=["a", "b"])
df = pd.DataFrame(h_ary, index=["a", "b"])
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
gdf = cudf.DataFrame(d_ary)
gdf = gdf.set_index(keys=0, drop=False)
df = pd.DataFrame(h_ary)
df = df.set_index(keys=0, drop=False)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
gdf = cudf.DataFrame(d_ary)
gdf = gdf.set_index(keys=1, drop=False)
df = pd.DataFrame(h_ary)
df = df.set_index(keys=1, drop=False)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
def test_dataframe_cupy_wrong_dimensions():
d_ary = cupy.empty((2, 3, 4), dtype=np.int32)
with pytest.raises(
ValueError, match="records dimension expected 1 or 2 but found: 3"
):
cudf.DataFrame(d_ary)
def test_dataframe_cupy_array_wrong_index():
d_ary = cupy.empty((2, 3), dtype=np.int32)
with pytest.raises(ValueError):
cudf.DataFrame(d_ary, index=["a"])
with pytest.raises(ValueError):
cudf.DataFrame(d_ary, index="a")
def test_index_in_dataframe_constructor():
a = pd.DataFrame({"x": [1, 2, 3]}, index=[4.0, 5.0, 6.0])
b = cudf.DataFrame({"x": [1, 2, 3]}, index=[4.0, 5.0, 6.0])
assert_eq(a, b)
assert_eq(a.loc[4:], b.loc[4:])
dtypes = NUMERIC_TYPES + DATETIME_TYPES + ["bool"]
@pytest.mark.parametrize("nelem", [0, 2, 3, 100, 1000])
@pytest.mark.parametrize("data_type", dtypes)
def test_from_arrow(nelem, data_type):
df = pd.DataFrame(
{
"a": np.random.randint(0, 1000, nelem).astype(data_type),
"b": np.random.randint(0, 1000, nelem).astype(data_type),
}
)
padf = pa.Table.from_pandas(
df, preserve_index=False
).replace_schema_metadata(None)
gdf = cudf.DataFrame.from_arrow(padf)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(df, gdf)
s = pa.Array.from_pandas(df.a)
gs = cudf.Series.from_arrow(s)
assert isinstance(gs, cudf.Series)
# For some reason PyArrow to_pandas() converts to numpy array and has
# better type compatibility
np.testing.assert_array_equal(s.to_pandas(), gs.to_numpy())
@pytest.mark.parametrize("nelem", [0, 2, 3, 100, 1000])
@pytest.mark.parametrize("data_type", dtypes)
def test_to_arrow(nelem, data_type):
df = pd.DataFrame(
{
"a": np.random.randint(0, 1000, nelem).astype(data_type),
"b": np.random.randint(0, 1000, nelem).astype(data_type),
}
)
gdf = cudf.DataFrame.from_pandas(df)
pa_df = pa.Table.from_pandas(
df, preserve_index=False
).replace_schema_metadata(None)
pa_gdf = gdf.to_arrow(preserve_index=False).replace_schema_metadata(None)
assert isinstance(pa_gdf, pa.Table)
assert pa.Table.equals(pa_df, pa_gdf)
pa_s = pa.Array.from_pandas(df.a)
pa_gs = gdf["a"].to_arrow()
assert isinstance(pa_gs, pa.Array)
assert pa.Array.equals(pa_s, pa_gs)
pa_i = pa.Array.from_pandas(df.index)
pa_gi = gdf.index.to_arrow()
assert isinstance(pa_gi, pa.Array)
assert pa.Array.equals(pa_i, pa_gi)
@pytest.mark.parametrize("data_type", dtypes)
def test_to_from_arrow_nulls(data_type):
if data_type == "longlong":
data_type = "int64"
if data_type == "bool":
s1 = pa.array([True, None, False, None, True], type=data_type)
else:
dtype = np.dtype(data_type)
if dtype.type == np.datetime64:
time_unit, _ = np.datetime_data(dtype)
data_type = pa.timestamp(unit=time_unit)
s1 = pa.array([1, None, 3, None, 5], type=data_type)
gs1 = cudf.Series.from_arrow(s1)
assert isinstance(gs1, cudf.Series)
# We have 64B padded buffers for nulls whereas Arrow returns a minimal
# number of bytes, so only check the first byte in this case
np.testing.assert_array_equal(
np.asarray(s1.buffers()[0]).view("u1")[0],
gs1._column.mask_array_view(mode="read").copy_to_host().view("u1")[0],
)
assert pa.Array.equals(s1, gs1.to_arrow())
s2 = pa.array([None, None, None, None, None], type=data_type)
gs2 = cudf.Series.from_arrow(s2)
assert isinstance(gs2, cudf.Series)
# We have 64B padded buffers for nulls whereas Arrow returns a minimal
# number of bytes, so only check the first byte in this case
np.testing.assert_array_equal(
np.asarray(s2.buffers()[0]).view("u1")[0],
gs2._column.mask_array_view(mode="read").copy_to_host().view("u1")[0],
)
assert pa.Array.equals(s2, gs2.to_arrow())
def test_to_arrow_categorical():
df = pd.DataFrame()
df["a"] = pd.Series(["a", "b", "c"], dtype="category")
gdf = cudf.DataFrame.from_pandas(df)
pa_df = pa.Table.from_pandas(
df, preserve_index=False
).replace_schema_metadata(None)
pa_gdf = gdf.to_arrow(preserve_index=False).replace_schema_metadata(None)
assert isinstance(pa_gdf, pa.Table)
assert pa.Table.equals(pa_df, pa_gdf)
pa_s = pa.Array.from_pandas(df.a)
pa_gs = gdf["a"].to_arrow()
assert isinstance(pa_gs, pa.Array)
assert pa.Array.equals(pa_s, pa_gs)
def test_from_arrow_missing_categorical():
pd_cat = pd.Categorical(["a", "b", "c"], categories=["a", "b"])
pa_cat = pa.array(pd_cat, from_pandas=True)
gd_cat = cudf.Series(pa_cat)
assert isinstance(gd_cat, cudf.Series)
assert_eq(
pd.Series(pa_cat.to_pandas()), # PyArrow returns a pd.Categorical
gd_cat.to_pandas(),
)
def test_to_arrow_missing_categorical():
pd_cat = pd.Categorical(["a", "b", "c"], categories=["a", "b"])
pa_cat = pa.array(pd_cat, from_pandas=True)
gd_cat = cudf.Series(pa_cat)
assert isinstance(gd_cat, cudf.Series)
assert pa.Array.equals(pa_cat, gd_cat.to_arrow())
@pytest.mark.parametrize("data_type", dtypes)
def test_from_scalar_typing(data_type):
if data_type == "datetime64[ms]":
scalar = (
np.dtype("int64")
.type(np.random.randint(0, 5))
.astype("datetime64[ms]")
)
elif data_type.startswith("datetime64"):
scalar = np.datetime64(datetime.date.today()).astype("datetime64[ms]")
data_type = "datetime64[ms]"
else:
scalar = np.dtype(data_type).type(np.random.randint(0, 5))
gdf = cudf.DataFrame()
gdf["a"] = [1, 2, 3, 4, 5]
gdf["b"] = scalar
assert gdf["b"].dtype == np.dtype(data_type)
assert len(gdf["b"]) == len(gdf["a"])
@pytest.mark.parametrize("data_type", NUMERIC_TYPES)
def test_from_python_array(data_type):
np_arr = np.random.randint(0, 100, 10).astype(data_type)
data = memoryview(np_arr)
data = arr.array(data.format, data)
gs = cudf.Series(data)
np.testing.assert_equal(gs.to_numpy(), np_arr)
def test_series_shape():
ps = pd.Series([1, 2, 3, 4])
cs = cudf.Series([1, 2, 3, 4])
assert ps.shape == cs.shape
def test_series_shape_empty():
ps = pd.Series([], dtype="float64")
cs = cudf.Series([], dtype="float64")
assert ps.shape == cs.shape
def test_dataframe_shape():
pdf = pd.DataFrame({"a": [0, 1, 2, 3], "b": [0.1, 0.2, None, 0.3]})
gdf = cudf.DataFrame.from_pandas(pdf)
assert pdf.shape == gdf.shape
def test_dataframe_shape_empty():
pdf = pd.DataFrame()
gdf = cudf.DataFrame()
assert pdf.shape == gdf.shape
@pytest.mark.parametrize("num_cols", [1, 2, 10])
@pytest.mark.parametrize("num_rows", [1, 2, 20])
@pytest.mark.parametrize("dtype", dtypes + ["object"])
@pytest.mark.parametrize("nulls", ["none", "some", "all"])
def test_dataframe_transpose(nulls, num_cols, num_rows, dtype):
# In case of `bool` dtype: pandas <= 1.2.5 type-casts
# a boolean series to `float64` series if a `np.nan` is assigned to it:
# >>> s = pd.Series([True, False, True])
# >>> s
# 0 True
# 1 False
# 2 True
# dtype: bool
# >>> s[[2]] = np.nan
# >>> s
# 0 1.0
# 1 0.0
# 2 NaN
# dtype: float64
# In pandas >= 1.3.2 this behavior is fixed:
# >>> s = pd.Series([True, False, True])
# >>> s
# 0     True
# 1    False
# 2     True
# dtype: bool
# >>> s[[2]] = np.nan
# >>> s
# 0     True
# 1    False
# 2      NaN
# dtype: object
# In cudf we change `object` dtype to `str` type - for which there
# is no transpose implemented yet. Hence we need to test transpose
# against pandas nullable types as they are the ones that closely
# resemble `cudf` dtypes behavior.
pdf = pd.DataFrame()
null_rep = np.nan if dtype in ["float32", "float64"] else None
np_dtype = dtype
dtype = np.dtype(dtype)
dtype = cudf.utils.dtypes.np_dtypes_to_pandas_dtypes.get(dtype, dtype)
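# Map numpy dtypes to pandas nullable extension dtypes so nulls can be
# represented in the pandas frame without changing the column's logical type.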
for i in range(num_cols):
colname = string.ascii_lowercase[i]
data = pd.Series(
np.random.randint(0, 26, num_rows).astype(np_dtype),
dtype=dtype,
)
if nulls == "some":
idx = np.random.choice(
num_rows, size=int(num_rows / 2), replace=False
)
if len(idx):
data[idx] = null_rep
elif nulls == "all":
data[:] = null_rep
pdf[colname] = data
gdf = cudf.DataFrame.from_pandas(pdf)
got_function = gdf.transpose()
got_property = gdf.T
expect = pdf.transpose()
assert_eq(expect, got_function.to_pandas(nullable=True))
assert_eq(expect, got_property.to_pandas(nullable=True))
@pytest.mark.parametrize("num_cols", [1, 2, 10])
@pytest.mark.parametrize("num_rows", [1, 2, 20])
def test_dataframe_transpose_category(num_cols, num_rows):
pdf = pd.DataFrame()
for i in range(num_cols):
colname = string.ascii_lowercase[i]
data = pd.Series(list(string.ascii_lowercase), dtype="category")
data = data.sample(num_rows, replace=True).reset_index(drop=True)
pdf[colname] = data
gdf = cudf.DataFrame.from_pandas(pdf)
got_function = gdf.transpose()
got_property = gdf.T
expect = pdf.transpose()
assert_eq(expect, got_function.to_pandas())
assert_eq(expect, got_property.to_pandas())
def test_generated_column():
gdf = cudf.DataFrame({"a": (i for i in range(5))})
assert len(gdf) == 5
@pytest.fixture
def pdf():
return pd.DataFrame({"x": range(10), "y": range(10)})
@pytest.fixture
def gdf(pdf):
return cudf.DataFrame.from_pandas(pdf)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data",
[
{
"x": [np.nan, 2, 3, 4, 100, np.nan],
"y": [4, 5, 6, 88, 99, np.nan],
"z": [7, 8, 9, 66, np.nan, 77],
},
{"x": [1, 2, 3], "y": [4, 5, 6], "z": [7, 8, 9]},
{
"x": [np.nan, np.nan, np.nan],
"y": [np.nan, np.nan, np.nan],
"z": [np.nan, np.nan, np.nan],
},
pytest.param(
{"x": [], "y": [], "z": []},
marks=pytest_xfail(
condition=version.parse("11")
<= version.parse(cupy.__version__)
< version.parse("11.1"),
reason="Zero-sized array passed to cupy reduction, "
"https://github.com/cupy/cupy/issues/6937",
),
),
pytest.param(
{"x": []},
marks=pytest_xfail(
condition=version.parse("11")
<= version.parse(cupy.__version__)
< version.parse("11.1"),
reason="Zero-sized array passed to cupy reduction, "
"https://github.com/cupy/cupy/issues/6937",
),
),
],
)
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize(
"func",
[
"min",
"max",
"sum",
"prod",
"product",
"cummin",
"cummax",
"cumsum",
"cumprod",
"mean",
"median",
"sum",
"std",
"var",
"kurt",
"skew",
"all",
"any",
],
)
@pytest.mark.parametrize("skipna", [True, False])
def test_dataframe_reductions(data, axis, func, skipna):
pdf = pd.DataFrame(data=data)
gdf = cudf.DataFrame.from_pandas(pdf)
# Reductions can fail in numerous possible ways when attempting row-wise
# reductions, which are only partially supported. Catching the appropriate
# exception here allows us to detect API breakage in the form of changing
# exceptions.
expected_exception = None
if axis == 1:
if func in ("kurt", "skew"):
expected_exception = NotImplementedError
elif func not in cudf.core.dataframe._cupy_nan_methods_map:
if skipna is False:
expected_exception = NotImplementedError
elif any(col.nullable for name, col in gdf.items()):
expected_exception = ValueError
elif func in ("cummin", "cummax"):
expected_exception = AttributeError
# Test different degrees of freedom for var and std.
all_kwargs = [{"ddof": 1}, {"ddof": 2}] if func in ("var", "std") else [{}]
for kwargs in all_kwargs:
if expected_exception is not None:
with pytest.raises(expected_exception):
getattr(gdf, func)(axis=axis, skipna=skipna, **kwargs)
else:
expect = getattr(pdf, func)(axis=axis, skipna=skipna, **kwargs)
with expect_warning_if(
skipna
and func in {"min", "max"}
and axis == 1
and any(gdf.T[col].isna().all() for col in gdf.T),
RuntimeWarning,
):
got = getattr(gdf, func)(axis=axis, skipna=skipna, **kwargs)
assert_eq(got, expect, check_dtype=False)
@pytest.mark.parametrize(
"data",
[
{"x": [np.nan, 2, 3, 4, 100, np.nan], "y": [4, 5, 6, 88, 99, np.nan]},
{"x": [1, 2, 3], "y": [4, 5, 6]},
{"x": [np.nan, np.nan, np.nan], "y": [np.nan, np.nan, np.nan]},
{"x": [], "y": []},
{"x": []},
],
)
@pytest.mark.parametrize("func", [lambda df: df.count()])
def test_dataframe_count_reduction(data, func):
pdf = pd.DataFrame(data=data)
gdf = cudf.DataFrame.from_pandas(pdf)
assert_eq(func(pdf), func(gdf))
@pytest.mark.parametrize(
"data",
[
{"x": [np.nan, 2, 3, 4, 100, np.nan], "y": [4, 5, 6, 88, 99, np.nan]},
{"x": [1, 2, 3], "y": [4, 5, 6]},
{"x": [np.nan, np.nan, np.nan], "y": [np.nan, np.nan, np.nan]},
{"x": pd.Series([], dtype="float"), "y": pd.Series([], dtype="float")},
{"x": pd.Series([], dtype="int")},
],
)
@pytest.mark.parametrize("ops", ["sum", "product", "prod"])
@pytest.mark.parametrize("skipna", [True, False])
@pytest.mark.parametrize("min_count", [-10, -1, 0, 1, 2, 3, 10])
def test_dataframe_min_count_ops(data, ops, skipna, min_count):
psr = pd.DataFrame(data)
gsr = cudf.from_pandas(psr)
assert_eq(
getattr(psr, ops)(skipna=skipna, min_count=min_count),
getattr(gsr, ops)(skipna=skipna, min_count=min_count),
check_dtype=False,
)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"binop",
[
operator.add,
operator.mul,
operator.floordiv,
operator.truediv,
operator.mod,
operator.pow,
],
)
@pytest.mark.parametrize(
"other",
[
1.0,
pd.Series([1.0]),
pd.Series([1.0, 2.0]),
pd.Series([1.0, 2.0, 3.0]),
pd.Series([1.0], index=["x"]),
pd.Series([1.0, 2.0], index=["x", "y"]),
pd.Series([1.0, 2.0, 3.0], index=["x", "y", "z"]),
pd.DataFrame({"x": [1.0]}),
pd.DataFrame({"x": [1.0], "y": [2.0]}),
pd.DataFrame({"x": [1.0], "y": [2.0], "z": [3.0]}),
],
)
def test_arithmetic_binops_df(pdf, gdf, binop, other):
# Avoid 1**NA cases: https://github.com/pandas-dev/pandas/issues/29997
pdf[pdf == 1.0] = 2
gdf[gdf == 1.0] = 2
try:
d = binop(pdf, other)
except Exception:
if isinstance(other, (pd.Series, pd.DataFrame)):
cudf_other = cudf.from_pandas(other)
# pandas raised for these inputs; verify that cudf raises an equivalent exception.
assert_exceptions_equal(
lfunc=binop,
rfunc=binop,
lfunc_args_and_kwargs=([pdf, other], {}),
rfunc_args_and_kwargs=([gdf, cudf_other], {}),
)
else:
if isinstance(other, (pd.Series, pd.DataFrame)):
other = cudf.from_pandas(other)
g = binop(gdf, other)
assert_eq(d, g)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"binop",
[
operator.eq,
operator.lt,
operator.le,
operator.gt,
operator.ge,
operator.ne,
],
)
@pytest.mark.parametrize(
"other",
[
1.0,
pd.Series([1.0, 2.0], index=["x", "y"]),
pd.DataFrame({"x": [1.0]}),
pd.DataFrame({"x": [1.0], "y": [2.0]}),
pd.DataFrame({"x": [1.0], "y": [2.0], "z": [3.0]}),
],
)
def test_comparison_binops_df(pdf, gdf, binop, other):
# Avoid 1**NA cases: https://github.com/pandas-dev/pandas/issues/29997
pdf[pdf == 1.0] = 2
gdf[gdf == 1.0] = 2
try:
d = binop(pdf, other)
except Exception:
if isinstance(other, (pd.Series, pd.DataFrame)):
cudf_other = cudf.from_pandas(other)
# pandas raised for these inputs; verify that cudf raises an equivalent exception.
assert_exceptions_equal(
lfunc=binop,
rfunc=binop,
lfunc_args_and_kwargs=([pdf, other], {}),
rfunc_args_and_kwargs=([gdf, cudf_other], {}),
)
else:
if isinstance(other, (pd.Series, pd.DataFrame)):
other = cudf.from_pandas(other)
g = binop(gdf, other)
assert_eq(d, g)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"binop",
[
operator.eq,
operator.lt,
operator.le,
operator.gt,
operator.ge,
operator.ne,
],
)
@pytest.mark.parametrize(
"other",
[
pd.Series([1.0]),
pd.Series([1.0, 2.0]),
pd.Series([1.0, 2.0, 3.0]),
pd.Series([1.0], index=["x"]),
pd.Series([1.0, 2.0, 3.0], index=["x", "y", "z"]),
],
)
def test_comparison_binops_df_reindexing(request, pdf, gdf, binop, other):
# Avoid 1**NA cases: https://github.com/pandas-dev/pandas/issues/29997
pdf[pdf == 1.0] = 2
gdf[gdf == 1.0] = 2
try:
with pytest.warns(FutureWarning):
d = binop(pdf, other)
except Exception:
if isinstance(other, (pd.Series, pd.DataFrame)):
cudf_other = cudf.from_pandas(other)
# pandas raised for these inputs; verify that cudf raises an equivalent exception.
assert_exceptions_equal(
lfunc=binop,
rfunc=binop,
lfunc_args_and_kwargs=([pdf, other], {}),
rfunc_args_and_kwargs=([gdf, cudf_other], {}),
)
else:
request.applymarker(
pytest.mark.xfail(
condition=pdf.columns.difference(other.index).size > 0,
reason="""
Currently we will not match pandas for equality/inequality
operators when there are columns that exist in a Series but not
the DataFrame because pandas returns True/False values whereas
we return NA. However, this reindexing is deprecated in pandas
so we opt not to add support. This test should start passing
once pandas removes the deprecated behavior in 2.0. When that
happens, this test can be merged with the two tests above into
a single test with common parameters.
""",
)
)
if isinstance(other, (pd.Series, pd.DataFrame)):
other = cudf.from_pandas(other)
g = binop(gdf, other)
assert_eq(d, g)
def test_binops_df_invalid(gdf):
with pytest.raises(TypeError):
gdf + np.array([1, 2])
@pytest.mark.parametrize("binop", [operator.and_, operator.or_, operator.xor])
def test_bitwise_binops_df(pdf, gdf, binop):
d = binop(pdf, pdf + 1)
g = binop(gdf, gdf + 1)
assert_eq(d, g)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"binop",
[
operator.add,
operator.mul,
operator.floordiv,
operator.truediv,
operator.mod,
operator.pow,
operator.eq,
operator.lt,
operator.le,
operator.gt,
operator.ge,
operator.ne,
],
)
def test_binops_series(pdf, gdf, binop):
pdf = pdf + 1.0
gdf = gdf + 1.0
d = binop(pdf.x, pdf.y)
g = binop(gdf.x, gdf.y)
assert_eq(d, g)
@pytest.mark.parametrize("binop", [operator.and_, operator.or_, operator.xor])
def test_bitwise_binops_series(pdf, gdf, binop):
d = binop(pdf.x, pdf.y + 1)
g = binop(gdf.x, gdf.y + 1)
assert_eq(d, g)
@pytest.mark.parametrize("unaryop", [operator.neg, operator.inv, operator.abs])
@pytest.mark.parametrize(
"col_name,assign_col_name", [(None, False), (None, True), ("abc", True)]
)
def test_unaryops_df(pdf, unaryop, col_name, assign_col_name):
pd_df = pdf.copy()
if assign_col_name:
pd_df.columns.name = col_name
gdf = cudf.from_pandas(pd_df)
d = unaryop(pd_df - 5)
g = unaryop(gdf - 5)
assert_eq(d, g)
def test_df_abs(pdf):
np.random.seed(0)
disturbance = pd.Series(np.random.rand(10))
pdf = pdf - 5 + disturbance
d = pdf.apply(np.abs)
g = cudf.from_pandas(pdf).abs()
assert_eq(d, g)
def test_scale_df(gdf):
got = (gdf - 5).scale()
expect = cudf.DataFrame(
{"x": np.linspace(0.0, 1.0, 10), "y": np.linspace(0.0, 1.0, 10)}
)
assert_eq(expect, got)
@pytest.mark.parametrize(
"func",
[
lambda df: df.empty,
lambda df: df.x.empty,
lambda df: df.x.fillna(123, limit=None, method=None, axis=None),
lambda df: df.drop("x", axis=1, errors="raise"),
],
)
def test_unary_operators(func, pdf, gdf):
p = func(pdf)
g = func(gdf)
assert_eq(p, g)
def test_is_monotonic(gdf):
pdf = pd.DataFrame({"x": [1, 2, 3]}, index=[3, 1, 2])
gdf = cudf.DataFrame.from_pandas(pdf)
with pytest.warns(FutureWarning):
assert not gdf.index.is_monotonic
assert not gdf.index.is_monotonic_increasing
assert not gdf.index.is_monotonic_decreasing
def test_iter(pdf, gdf):
assert list(pdf) == list(gdf)
def test_iteritems(gdf):
for k, v in gdf.items():
assert k in gdf.columns
assert isinstance(v, cudf.Series)
assert_eq(v, gdf[k])
@pytest.mark.parametrize("q", [0.5, 1, 0.001, [0.5], [], [0.005, 0.5, 1]])
@pytest.mark.parametrize("numeric_only", [True, False])
def test_quantile(q, numeric_only):
ts = pd.date_range("2018-08-24", periods=5, freq="D")
td = pd.to_timedelta(np.arange(5), unit="h")
pdf = pd.DataFrame(
{"date": ts, "delta": td, "val": np.random.randn(len(ts))}
)
gdf = cudf.DataFrame.from_pandas(pdf)
assert_eq(pdf["date"].quantile(q), gdf["date"].quantile(q))
assert_eq(pdf["delta"].quantile(q), gdf["delta"].quantile(q))
assert_eq(pdf["val"].quantile(q), gdf["val"].quantile(q))
q = q if isinstance(q, list) else [q]
assert_eq(
pdf.quantile(q, numeric_only=numeric_only),
gdf.quantile(q, numeric_only=numeric_only),
)
@pytest.mark.parametrize("q", [0.2, 1, 0.001, [0.5], [], [0.005, 0.8, 0.03]])
@pytest.mark.parametrize("interpolation", ["higher", "lower", "nearest"])
@pytest.mark.parametrize(
"decimal_type",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype, cudf.Decimal128Dtype],
)
def test_decimal_quantile(q, interpolation, decimal_type):
data = ["244.8", "32.24", "2.22", "98.14", "453.23", "5.45"]
gdf = cudf.DataFrame(
{"id": np.random.randint(0, 10, size=len(data)), "val": data}
)
gdf["id"] = gdf["id"].astype("float64")
gdf["val"] = gdf["val"].astype(decimal_type(7, 2))
pdf = gdf.to_pandas()
got = gdf.quantile(q, numeric_only=False, interpolation=interpolation)
expected = pdf.quantile(
q if isinstance(q, list) else [q],
numeric_only=False,
interpolation=interpolation,
)
assert_eq(got, expected)
def test_empty_quantile():
pdf = pd.DataFrame({"x": []})
df = cudf.DataFrame({"x": []})
actual = df.quantile()
expected = pdf.quantile()
assert_eq(actual, expected)
def test_from_pandas_function(pdf):
gdf = cudf.from_pandas(pdf)
assert isinstance(gdf, cudf.DataFrame)
assert_eq(pdf, gdf)
gdf = cudf.from_pandas(pdf.x)
assert isinstance(gdf, cudf.Series)
assert_eq(pdf.x, gdf)
with pytest.raises(TypeError):
cudf.from_pandas(123)
@pytest.mark.parametrize("preserve_index", [True, False])
def test_arrow_pandas_compat(pdf, gdf, preserve_index):
pdf["z"] = range(10)
pdf = pdf.set_index("z")
gdf["z"] = range(10)
gdf = gdf.set_index("z")
pdf_arrow_table = pa.Table.from_pandas(pdf, preserve_index=preserve_index)
gdf_arrow_table = gdf.to_arrow(preserve_index=preserve_index)
assert pa.Table.equals(pdf_arrow_table, gdf_arrow_table)
gdf2 = cudf.DataFrame.from_arrow(pdf_arrow_table)
pdf2 = pdf_arrow_table.to_pandas()
assert_eq(pdf2, gdf2)
pdf.columns.name = "abc"
pdf_arrow_table = pa.Table.from_pandas(pdf, preserve_index=preserve_index)
gdf2 = cudf.DataFrame.from_arrow(pdf_arrow_table)
pdf2 = pdf_arrow_table.to_pandas()
assert_eq(pdf2, gdf2)
@pytest.mark.parametrize("dtype", NUMERIC_TYPES + ["bool"])
def test_cuda_array_interface(dtype):
np_data = np.arange(10).astype(dtype)
cupy_data = cupy.array(np_data)
pd_data = pd.Series(np_data)
cudf_data = cudf.Series(cupy_data)
assert_eq(pd_data, cudf_data)
gdf = cudf.DataFrame()
gdf["test"] = cupy_data
pd_data.name = "test"
assert_eq(pd_data, gdf["test"])
@pytest.mark.parametrize("nelem", [0, 2, 3, 100])
@pytest.mark.parametrize("nchunks", [1, 2, 5, 10])
@pytest.mark.parametrize("data_type", dtypes)
def test_from_arrow_chunked_arrays(nelem, nchunks, data_type):
np_list_data = [
np.random.randint(0, 100, nelem).astype(data_type)
for i in range(nchunks)
]
pa_chunk_array = pa.chunked_array(np_list_data)
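# cudf.Series accepts a pyarrow ChunkedArray directly; the chunks are combined
# into a single column.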
expect = pd.Series(pa_chunk_array.to_pandas())
got = cudf.Series(pa_chunk_array)
assert_eq(expect, got)
np_list_data2 = [
np.random.randint(0, 100, nelem).astype(data_type)
for i in range(nchunks)
]
pa_chunk_array2 = pa.chunked_array(np_list_data2)
pa_table = pa.Table.from_arrays(
[pa_chunk_array, pa_chunk_array2], names=["a", "b"]
)
expect = pa_table.to_pandas()
got = cudf.DataFrame.from_arrow(pa_table)
assert_eq(expect, got)
@pytest.mark.skip(reason="Test was designed to be run in isolation")
def test_gpu_memory_usage_with_boolmask():
ctx = cuda.current_context()
def query_GPU_memory(note=""):
memInfo = ctx.get_memory_info()
usedMemoryGB = (memInfo.total - memInfo.free) / 1e9
return usedMemoryGB
cuda.current_context().deallocations.clear()
nRows = int(1e8)
nCols = 2
dataNumpy = np.asfortranarray(np.random.rand(nRows, nCols))
colNames = ["col" + str(iCol) for iCol in range(nCols)]
pandasDF = pd.DataFrame(data=dataNumpy, columns=colNames, dtype=np.float32)
cudaDF = cudf.core.DataFrame.from_pandas(pandasDF)
boolmask = cudf.Series(np.random.randint(1, 2, len(cudaDF)).astype("bool"))
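# np.random.randint(1, 2, ...) only produces ones, so the mask is all True and
# selects every row; the checks below verify that memory usage is unchanged.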
memory_used = query_GPU_memory()
cudaDF = cudaDF[boolmask]
assert (
cudaDF.index._values.data_array_view(mode="read").device_ctypes_pointer
== cudaDF["col0"].index._values.data_array_view(mode="read").device_ctypes_pointer
)
assert (
cudaDF.index._values.data_array_view(mode="read").device_ctypes_pointer
== cudaDF["col1"].index._values.data_array_view(mode="read").device_ctypes_pointer
)
assert memory_used == query_GPU_memory()
def test_boolmask(pdf, gdf):
boolmask = np.random.randint(0, 2, len(pdf)) > 0
gdf = gdf[boolmask]
pdf = pdf[boolmask]
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
"mask_shape",
[
(2, "ab"),
(2, "abc"),
(3, "ab"),
(3, "abc"),
(3, "abcd"),
(4, "abc"),
(4, "abcd"),
],
)
def test_dataframe_boolmask(mask_shape):
pdf = pd.DataFrame()
for col in "abc":
pdf[col] = np.random.randint(0, 10, 3)
pdf_mask = pd.DataFrame()
for col in mask_shape[1]:
pdf_mask[col] = np.random.randint(0, 2, mask_shape[0]) > 0
gdf = cudf.DataFrame.from_pandas(pdf)
gdf_mask = cudf.DataFrame.from_pandas(pdf_mask)
gdf = gdf[gdf_mask]
pdf = pdf[pdf_mask]
assert np.array_equal(gdf.columns, pdf.columns)
for col in gdf.columns:
assert np.array_equal(
gdf[col].fillna(-1).to_pandas().values, pdf[col].fillna(-1).values
)
@pytest.mark.parametrize(
"mask",
[
[True, False, True],
pytest.param(
cudf.Series([True, False, True]),
marks=pytest_xfail(
reason="Pandas can't index a multiindex with a Series"
),
),
],
)
def test_dataframe_multiindex_boolmask(mask):
gdf = cudf.DataFrame(
{"w": [3, 2, 1], "x": [1, 2, 3], "y": [0, 1, 0], "z": [1, 1, 1]}
)
gdg = gdf.groupby(["w", "x"]).count()
pdg = gdg.to_pandas()
assert_eq(gdg[mask], pdg[mask])
def test_dataframe_assignment():
pdf = pd.DataFrame()
for col in "abc":
pdf[col] = np.array([0, 1, 1, -2, 10])
gdf = cudf.DataFrame.from_pandas(pdf)
gdf[gdf < 0] = 999
pdf[pdf < 0] = 999
assert_eq(gdf, pdf)
def test_1row_arrow_table():
data = [pa.array([0]), pa.array([1])]
batch = pa.RecordBatch.from_arrays(data, ["f0", "f1"])
table = pa.Table.from_batches([batch])
expect = table.to_pandas()
got = cudf.DataFrame.from_arrow(table)
assert_eq(expect, got)
def test_arrow_handle_no_index_name(pdf, gdf):
gdf_arrow = gdf.to_arrow()
pdf_arrow = pa.Table.from_pandas(pdf)
assert pa.Table.equals(pdf_arrow, gdf_arrow)
got = cudf.DataFrame.from_arrow(gdf_arrow)
expect = pdf_arrow.to_pandas()
assert_eq(expect, got)
def test_pandas_non_contiguous():
arr1 = np.random.sample([5000, 10])
assert arr1.flags["C_CONTIGUOUS"] is True
df = pd.DataFrame(arr1)
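# Each column of the 2D block is a strided view, so its values are not
# C-contiguous even though the original array is.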
for col in df.columns:
assert df[col].values.flags["C_CONTIGUOUS"] is False
gdf = cudf.DataFrame.from_pandas(df)
assert_eq(gdf.to_pandas(), df)
@pytest.mark.parametrize("num_elements", [0, 2, 10, 100])
@pytest.mark.parametrize("null_type", [np.nan, None, "mixed"])
def test_series_all_null(num_elements, null_type):
if null_type == "mixed":
data = []
data1 = [np.nan] * int(num_elements / 2)
data2 = [None] * int(num_elements / 2)
for idx in range(len(data1)):
data.append(data1[idx])
data.append(data2[idx])
else:
data = [null_type] * num_elements
# Typecast Pandas because None will return `object` dtype
expect = pd.Series(data, dtype="float64")
got = cudf.Series(data, dtype="float64")
assert_eq(expect, got)
@pytest.mark.parametrize("num_elements", [0, 2, 10, 100])
def test_series_all_valid_nan(num_elements):
data = [np.nan] * num_elements
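# With nan_as_null=False, NaN stays a valid float value instead of being
# converted to null, so the null count should be zero.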
sr = _create_cudf_series_float64_default(data, nan_as_null=False)
np.testing.assert_equal(sr.null_count, 0)
def test_series_rename():
pds = pd.Series([1, 2, 3], name="asdf")
gds = cudf.Series([1, 2, 3], name="asdf")
expect = pds.rename("new_name")
got = gds.rename("new_name")
assert_eq(expect, got)
pds = pd.Series(expect)
gds = cudf.Series(got)
assert_eq(pds, gds)
pds = pd.Series(expect, name="name name")
gds = cudf.Series(got, name="name name")
assert_eq(pds, gds)
@pytest.mark.parametrize("data_type", dtypes)
@pytest.mark.parametrize("nelem", [0, 100])
def test_head_tail(nelem, data_type):
def check_index_equality(left, right):
assert left.index.equals(right.index)
def check_values_equality(left, right):
if len(left) == 0 and len(right) == 0:
return None
np.testing.assert_array_equal(left.to_pandas(), right.to_pandas())
def check_frame_series_equality(left, right):
check_index_equality(left, right)
check_values_equality(left, right)
gdf = cudf.DataFrame(
{
"a": np.random.randint(0, 1000, nelem).astype(data_type),
"b": np.random.randint(0, 1000, nelem).astype(data_type),
}
)
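# head()/tail() must return the same rows as the equivalent positional slices.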
check_frame_series_equality(gdf.head(), gdf[:5])
check_frame_series_equality(gdf.head(3), gdf[:3])
check_frame_series_equality(gdf.head(-2), gdf[:-2])
check_frame_series_equality(gdf.head(0), gdf[0:0])
check_frame_series_equality(gdf["a"].head(), gdf["a"][:5])
check_frame_series_equality(gdf["a"].head(3), gdf["a"][:3])
check_frame_series_equality(gdf["a"].head(-2), gdf["a"][:-2])
check_frame_series_equality(gdf.tail(), gdf[-5:])
check_frame_series_equality(gdf.tail(3), gdf[-3:])
check_frame_series_equality(gdf.tail(-2), gdf[2:])
check_frame_series_equality(gdf.tail(0), gdf[0:0])
check_frame_series_equality(gdf["a"].tail(), gdf["a"][-5:])
check_frame_series_equality(gdf["a"].tail(3), gdf["a"][-3:])
check_frame_series_equality(gdf["a"].tail(-2), gdf["a"][2:])
def test_tail_for_string():
gdf = cudf.DataFrame()
gdf["id"] = cudf.Series(["a", "b"], dtype=np.object_)
gdf["v"] = cudf.Series([1, 2])
assert_eq(gdf.tail(3), gdf.to_pandas().tail(3))
@pytest_unmark_spilling
@pytest.mark.parametrize("level", [None, 0, "l0", 1, ["l0", 1]])
@pytest.mark.parametrize("drop", [True, False])
@pytest.mark.parametrize(
"column_names",
[
["v0", "v1"],
["v0", "index"],
pd.MultiIndex.from_tuples([("x0", "x1"), ("y0", "y1")]),
pd.MultiIndex.from_tuples([(1, 2), (10, 11)], names=["ABC", "DEF"]),
],
)
@pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize("col_level", [0, 1])
@pytest.mark.parametrize("col_fill", ["", "some_lv"])
def test_reset_index(level, drop, column_names, inplace, col_level, col_fill):
midx = pd.MultiIndex.from_tuples(
[("a", 1), ("a", 2), ("b", 1), ("b", 2)], names=["l0", None]
)
pdf = pd.DataFrame(
[[1, 2], [3, 4], [5, 6], [7, 8]], index=midx, columns=column_names
)
gdf = cudf.from_pandas(pdf)
expect = pdf.reset_index(
level=level,
drop=drop,
inplace=inplace,
col_level=col_level,
col_fill=col_fill,
)
got = gdf.reset_index(
level=level,
drop=drop,
inplace=inplace,
col_level=col_level,
col_fill=col_fill,
)
if inplace:
expect = pdf
got = gdf
assert_eq(expect, got)
@pytest_unmark_spilling
@pytest.mark.parametrize("level", [None, 0, 1, [None]])
@pytest.mark.parametrize("drop", [False, True])
@pytest.mark.parametrize("inplace", [False, True])
@pytest.mark.parametrize("col_level", [0, 1])
@pytest.mark.parametrize("col_fill", ["", "some_lv"])
def test_reset_index_dup_level_name(level, drop, inplace, col_level, col_fill):
# midx levels are named [None, None]
midx = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1), ("b", 2)])
pdf = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], index=midx)
gdf = cudf.from_pandas(pdf)
if level == [None]:
assert_exceptions_equal(
lfunc=pdf.reset_index,
rfunc=gdf.reset_index,
lfunc_args_and_kwargs=(
[],
{"level": level, "drop": drop, "inplace": inplace},
),
rfunc_args_and_kwargs=(
[],
{"level": level, "drop": drop, "inplace": inplace},
),
)
return
expect = pdf.reset_index(
level=level,
drop=drop,
inplace=inplace,
col_level=col_level,
col_fill=col_fill,
)
got = gdf.reset_index(
level=level,
drop=drop,
inplace=inplace,
col_level=col_level,
col_fill=col_fill,
)
if inplace:
expect = pdf
got = gdf
assert_eq(expect, got)
@pytest.mark.parametrize("drop", [True, False])
@pytest.mark.parametrize("inplace", [False, True])
@pytest.mark.parametrize("col_level", [0, 1])
@pytest.mark.parametrize("col_fill", ["", "some_lv"])
def test_reset_index_named(pdf, gdf, drop, inplace, col_level, col_fill):
pdf.index.name = "cudf"
gdf.index.name = "cudf"
expect = pdf.reset_index(
drop=drop, inplace=inplace, col_level=col_level, col_fill=col_fill
)
got = gdf.reset_index(
drop=drop, inplace=inplace, col_level=col_level, col_fill=col_fill
)
if inplace:
expect = pdf
got = gdf
assert_eq(expect, got)
@pytest.mark.parametrize("drop", [True, False])
@pytest.mark.parametrize("inplace", [False, True])
@pytest.mark.parametrize("column_names", [["x", "y"], ["index", "y"]])
@pytest.mark.parametrize("col_level", [0, 1])
@pytest.mark.parametrize("col_fill", ["", "some_lv"])
def test_reset_index_unnamed(
pdf, gdf, drop, inplace, column_names, col_level, col_fill
):
pdf.columns = column_names
gdf.columns = column_names
expect = pdf.reset_index(
drop=drop, inplace=inplace, col_level=col_level, col_fill=col_fill
)
got = gdf.reset_index(
drop=drop, inplace=inplace, col_level=col_level, col_fill=col_fill
)
if inplace:
expect = pdf
got = gdf
assert_eq(expect, got)
@pytest.mark.parametrize(
"data",
[
{
"a": [1, 2, 3, 4, 5],
"b": ["a", "b", "c", "d", "e"],
"c": [1.0, 2.0, 3.0, 4.0, 5.0],
}
],
)
@pytest.mark.parametrize(
"index",
[
"a",
["a", "b"],
pd.CategoricalIndex(["I", "II", "III", "IV", "V"]),
pd.Series(["h", "i", "k", "l", "m"]),
["b", pd.Index(["I", "II", "III", "IV", "V"])],
["c", [11, 12, 13, 14, 15]],
pd.MultiIndex(
levels=[
["I", "II", "III", "IV", "V"],
["one", "two", "three", "four", "five"],
],
codes=[[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]],
names=["col1", "col2"],
),
pd.RangeIndex(0, 5), # corner case
[pd.Series(["h", "i", "k", "l", "m"]), pd.RangeIndex(0, 5)],
[
pd.MultiIndex(
levels=[
["I", "II", "III", "IV", "V"],
["one", "two", "three", "four", "five"],
],
codes=[[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]],
names=["col1", "col2"],
),
pd.RangeIndex(0, 5),
],
],
)
@pytest.mark.parametrize("drop", [True, False])
@pytest.mark.parametrize("append", [True, False])
@pytest.mark.parametrize("inplace", [True, False])
def test_set_index(data, index, drop, append, inplace):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = pdf.set_index(index, inplace=inplace, drop=drop, append=append)
actual = gdf.set_index(index, inplace=inplace, drop=drop, append=append)
if inplace:
expected = pdf
actual = gdf
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{
"a": [1, 1, 2, 2, 5],
"b": ["a", "b", "c", "d", "e"],
"c": [1.0, 2.0, 3.0, 4.0, 5.0],
}
],
)
@pytest.mark.parametrize("index", ["a", pd.Index([1, 1, 2, 2, 3])])
@pytest.mark.parametrize("verify_integrity", [True])
@pytest_xfail
def test_set_index_verify_integrity(data, index, verify_integrity):
gdf = cudf.DataFrame(data)
gdf.set_index(index, verify_integrity=verify_integrity)
@pytest.mark.parametrize("drop", [True, False])
@pytest.mark.parametrize("nelem", [10, 200, 1333])
def test_set_index_multi(drop, nelem):
np.random.seed(0)
a = np.arange(nelem)
np.random.shuffle(a)
df = pd.DataFrame(
{
"a": a,
"b": np.random.randint(0, 4, size=nelem),
"c": np.random.uniform(low=0, high=4, size=nelem),
"d": np.random.choice(["green", "black", "white"], nelem),
}
)
df["e"] = df["d"].astype("category")
gdf = cudf.DataFrame.from_pandas(df)
assert_eq(gdf.set_index("a", drop=drop), gdf.set_index(["a"], drop=drop))
assert_eq(
df.set_index(["b", "c"], drop=drop),
gdf.set_index(["b", "c"], drop=drop),
)
assert_eq(
df.set_index(["d", "b"], drop=drop),
gdf.set_index(["d", "b"], drop=drop),
)
assert_eq(
df.set_index(["b", "d", "e"], drop=drop),
gdf.set_index(["b", "d", "e"], drop=drop),
)
@pytest.fixture()
def reindex_data():
return cudf.datasets.randomdata(
nrows=6,
dtypes={
"a": "category",
"c": float,
"d": str,
},
)
@pytest.fixture()
def reindex_data_numeric():
return cudf.datasets.randomdata(
nrows=6,
dtypes={"a": float, "b": float, "c": float},
)
@pytest_unmark_spilling
@pytest.mark.parametrize("copy", [True, False])
@pytest.mark.parametrize(
"args,gd_kwargs",
[
([], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {"axis": 0}),
([["a", "b", "c", "d", "e"]], {"axis": 1}),
([], {"labels": [-3, 0, 3, 0, -2, 1, 3, 4, 6], "axis": 0}),
([], {"labels": ["a", "b", "c", "d", "e"], "axis": 1}),
([], {"labels": [-3, 0, 3, 0, -2, 1, 3, 4, 6], "axis": "index"}),
([], {"labels": ["a", "b", "c", "d", "e"], "axis": "columns"}),
([], {"index": [-3, 0, 3, 0, -2, 1, 3, 4, 6]}),
([], {"columns": ["a", "b", "c", "d", "e"]}),
(
[],
{
"index": [-3, 0, 3, 0, -2, 1, 3, 4, 6],
"columns": ["a", "b", "c", "d", "e"],
},
),
],
)
def test_dataframe_reindex(copy, reindex_data, args, gd_kwargs):
pdf, gdf = reindex_data.to_pandas(), reindex_data
gd_kwargs["copy"] = copy
pd_kwargs = gd_kwargs.copy()
pd_kwargs["copy"] = True
assert_eq(pdf.reindex(*args, **pd_kwargs), gdf.reindex(*args, **gd_kwargs))
@pytest.mark.parametrize("fill_value", [-1.0, 0.0, 1.5])
@pytest.mark.parametrize(
"args,kwargs",
[
([], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {}),
([[-3, 0, 3, 0, -2, 1, 3, 4, 6]], {"axis": 0}),
([["a", "b", "c", "d", "e"]], {"axis": 1}),
([], {"labels": [-3, 0, 3, 0, -2, 1, 3, 4, 6], "axis": 0}),
([], {"labels": ["a", "b", "c", "d", "e"], "axis": 1}),
([], {"labels": [-3, 0, 3, 0, -2, 1, 3, 4, 6], "axis": "index"}),
([], {"labels": ["a", "b", "c", "d", "e"], "axis": "columns"}),
([], {"index": [-3, 0, 3, 0, -2, 1, 3, 4, 6]}),
([], {"columns": ["a", "b", "c", "d", "e"]}),
(
[],
{
"index": [-3, 0, 3, 0, -2, 1, 3, 4, 6],
"columns": ["a", "b", "c", "d", "e"],
},
),
],
)
def test_dataframe_reindex_fill_value(
reindex_data_numeric, args, kwargs, fill_value
):
pdf, gdf = reindex_data_numeric.to_pandas(), reindex_data_numeric
kwargs["fill_value"] = fill_value
assert_eq(pdf.reindex(*args, **kwargs), gdf.reindex(*args, **kwargs))
@pytest.mark.parametrize("copy", [True, False])
def test_dataframe_reindex_change_dtype(copy):
index = pd.date_range("12/29/2009", periods=10, freq="D")
columns = ["a", "b", "c", "d", "e"]
gdf = cudf.datasets.randomdata(
nrows=6, dtypes={"a": "category", "c": float, "d": str}
)
pdf = gdf.to_pandas()
# Validate reindexes both labels and column names when
# index=index_labels and columns=column_labels
assert_eq(
pdf.reindex(index=index, columns=columns, copy=True),
gdf.reindex(index=index, columns=columns, copy=copy),
check_freq=False,
)
@pytest.mark.parametrize("copy", [True, False])
def test_series_categorical_reindex(copy):
index = [-3, 0, 3, 0, -2, 1, 3, 4, 6]
gdf = cudf.datasets.randomdata(nrows=6, dtypes={"a": "category"})
pdf = gdf.to_pandas()
assert_eq(pdf["a"].reindex(copy=True), gdf["a"].reindex(copy=copy))
assert_eq(
pdf["a"].reindex(index, copy=True), gdf["a"].reindex(index, copy=copy)
)
assert_eq(
pdf["a"].reindex(index=index, copy=True),
gdf["a"].reindex(index=index, copy=copy),
)
@pytest.mark.parametrize("copy", [True, False])
def test_series_float_reindex(copy):
index = [-3, 0, 3, 0, -2, 1, 3, 4, 6]
gdf = cudf.datasets.randomdata(nrows=6, dtypes={"c": float})
pdf = gdf.to_pandas()
assert_eq(pdf["c"].reindex(copy=True), gdf["c"].reindex(copy=copy))
assert_eq(
pdf["c"].reindex(index, copy=True), gdf["c"].reindex(index, copy=copy)
)
assert_eq(
pdf["c"].reindex(index=index, copy=True),
gdf["c"].reindex(index=index, copy=copy),
)
@pytest.mark.parametrize("copy", [True, False])
def test_series_string_reindex(copy):
index = [-3, 0, 3, 0, -2, 1, 3, 4, 6]
gdf = cudf.datasets.randomdata(nrows=6, dtypes={"d": str})
pdf = gdf.to_pandas()
assert_eq(pdf["d"].reindex(copy=True), gdf["d"].reindex(copy=copy))
assert_eq(
pdf["d"].reindex(index, copy=True), gdf["d"].reindex(index, copy=copy)
)
assert_eq(
pdf["d"].reindex(index=index, copy=True),
gdf["d"].reindex(index=index, copy=copy),
)
def test_to_frame(pdf, gdf):
assert_eq(pdf.x.to_frame(), gdf.x.to_frame())
name = "foo"
gdf_new_name = gdf.x.to_frame(name=name)
pdf_new_name = pdf.x.to_frame(name=name)
assert_eq(gdf_new_name, pdf_new_name)
name = False
gdf_new_name = gdf.x.to_frame(name=name)
pdf_new_name = pdf.x.to_frame(name=name)
assert_eq(gdf_new_name, pdf_new_name)
assert gdf_new_name.columns[0] == name
def test_dataframe_empty_sort_index():
pdf = pd.DataFrame({"x": []})
gdf = cudf.DataFrame.from_pandas(pdf)
expect = pdf.sort_index()
got = gdf.sort_index()
assert_eq(expect, got, check_index_type=True)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"index",
[
pd.RangeIndex(0, 3, 1),
[3.0, 1.0, np.nan],
# Test for single column MultiIndex
pd.MultiIndex.from_arrays(
[
[2, 0, 1],
]
),
pytest.param(
pd.RangeIndex(2, -1, -1),
marks=[
pytest_xfail(
condition=PANDAS_LT_140,
reason="https://github.com/pandas-dev/pandas/issues/43591",
)
],
),
],
)
@pytest.mark.parametrize("axis", [0, 1, "index", "columns"])
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("ignore_index", [True, False])
@pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize("na_position", ["first", "last"])
def test_dataframe_sort_index(
index, axis, ascending, inplace, ignore_index, na_position
):
pdf = pd.DataFrame(
{"b": [1, 3, 2], "a": [1, 4, 3], "c": [4, 1, 5]},
index=index,
)
gdf = cudf.DataFrame.from_pandas(pdf)
expected = pdf.sort_index(
axis=axis,
ascending=ascending,
ignore_index=ignore_index,
inplace=inplace,
na_position=na_position,
)
got = gdf.sort_index(
axis=axis,
ascending=ascending,
ignore_index=ignore_index,
inplace=inplace,
na_position=na_position,
)
if inplace is True:
assert_eq(pdf, gdf, check_index_type=True)
else:
assert_eq(expected, got, check_index_type=True)
@pytest_unmark_spilling
@pytest.mark.parametrize("axis", [0, 1, "index", "columns"])
@pytest.mark.parametrize(
"level",
[
0,
"b",
1,
["b"],
"a",
["a", "b"],
["b", "a"],
[0, 1],
[1, 0],
[0, 2],
None,
],
)
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("ignore_index", [True, False])
@pytest.mark.parametrize("inplace", [True, False])
@pytest.mark.parametrize("na_position", ["first", "last"])
def test_dataframe_multiindex_sort_index(
axis, level, ascending, inplace, ignore_index, na_position
):
pdf = pd.DataFrame(
{
"b": [1.0, 3.0, np.nan],
"a": [1, 4, 3],
1: ["a", "b", "c"],
"e": [3, 1, 4],
"d": [1, 2, 8],
}
).set_index(["b", "a", 1])
gdf = cudf.DataFrame.from_pandas(pdf)
# ignore_index is not passed to pandas sort_index here; its effect is emulated
# below via reset_index before the comparison.
expected = pdf.sort_index(
axis=axis,
level=level,
ascending=ascending,
inplace=inplace,
na_position=na_position,
)
got = gdf.sort_index(
axis=axis,
level=level,
ascending=ascending,
ignore_index=ignore_index,
inplace=inplace,
na_position=na_position,
)
if inplace is True:
if ignore_index is True:
pdf = pdf.reset_index(drop=True)
assert_eq(pdf, gdf)
else:
if ignore_index is True:
expected = expected.reset_index(drop=True)
assert_eq(expected, got)
@pytest.mark.parametrize("dtype", dtypes + ["category"])
def test_dataframe_0_row_dtype(dtype):
if dtype == "category":
data = pd.Series(["a", "b", "c", "d", "e"], dtype="category")
else:
data = np.array([1, 2, 3, 4, 5], dtype=dtype)
expect = cudf.DataFrame()
expect["x"] = data
expect["y"] = data
got = expect.head(0)
for col_name in got.columns:
assert expect[col_name].dtype == got[col_name].dtype
expect = cudf.Series(data)
got = expect.head(0)
assert expect.dtype == got.dtype
@pytest.mark.parametrize("nan_as_null", [True, False])
def test_series_list_nanasnull(nan_as_null):
data = [1.0, 2.0, 3.0, np.nan, None]
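# pa.array(..., from_pandas=True) converts NaN to null, mirroring cudf's
# nan_as_null behavior.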
expect = pa.array(data, from_pandas=nan_as_null)
got = cudf.Series(data, nan_as_null=nan_as_null).to_arrow()
# Bug in Arrow 0.14.1 where NaNs aren't handled
expect = expect.cast("int64", safe=False)
got = got.cast("int64", safe=False)
assert pa.Array.equals(expect, got)
def test_column_assignment():
gdf = cudf.datasets.randomdata(
nrows=20, dtypes={"a": "category", "b": int, "c": float}
)
new_cols = ["q", "r", "s"]
gdf.columns = new_cols
assert list(gdf.columns) == new_cols
def test_select_dtype():
gdf = cudf.datasets.randomdata(
nrows=20, dtypes={"a": "category", "b": int, "c": float, "d": str}
)
pdf = gdf.to_pandas()
assert_eq(pdf.select_dtypes("float64"), gdf.select_dtypes("float64"))
assert_eq(pdf.select_dtypes(np.float64), gdf.select_dtypes(np.float64))
assert_eq(
pdf.select_dtypes(include=["float64"]),
gdf.select_dtypes(include=["float64"]),
)
assert_eq(
pdf.select_dtypes(include=["object", "int", "category"]),
gdf.select_dtypes(include=["object", "int", "category"]),
)
assert_eq(
pdf.select_dtypes(include=["int64", "float64"]),
gdf.select_dtypes(include=["int64", "float64"]),
)
assert_eq(
pdf.select_dtypes(include=np.number),
gdf.select_dtypes(include=np.number),
)
assert_eq(
pdf.select_dtypes(include=[np.int64, np.float64]),
gdf.select_dtypes(include=[np.int64, np.float64]),
)
assert_eq(
pdf.select_dtypes(include=["category"]),
gdf.select_dtypes(include=["category"]),
)
assert_eq(
pdf.select_dtypes(exclude=np.number),
gdf.select_dtypes(exclude=np.number),
)
assert_exceptions_equal(
lfunc=pdf.select_dtypes,
rfunc=gdf.select_dtypes,
lfunc_args_and_kwargs=([], {"includes": ["Foo"]}),
rfunc_args_and_kwargs=([], {"includes": ["Foo"]}),
)
assert_exceptions_equal(
lfunc=pdf.select_dtypes,
rfunc=gdf.select_dtypes,
lfunc_args_and_kwargs=(
[],
{"exclude": np.number, "include": np.number},
),
rfunc_args_and_kwargs=(
[],
{"exclude": np.number, "include": np.number},
),
)
gdf = cudf.DataFrame(
{"A": [3, 4, 5], "C": [1, 2, 3], "D": ["a", "b", "c"]}
)
pdf = gdf.to_pandas()
assert_eq(
pdf.select_dtypes(include=["object", "int", "category"]),
gdf.select_dtypes(include=["object", "int", "category"]),
)
assert_eq(
pdf.select_dtypes(include=["object"], exclude=["category"]),
gdf.select_dtypes(include=["object"], exclude=["category"]),
)
gdf = cudf.DataFrame({"a": range(10), "b": range(10, 20)})
pdf = gdf.to_pandas()
assert_eq(
pdf.select_dtypes(include=["category"]),
gdf.select_dtypes(include=["category"]),
)
assert_eq(
pdf.select_dtypes(include=["float"]),
gdf.select_dtypes(include=["float"]),
)
assert_eq(
pdf.select_dtypes(include=["object"]),
gdf.select_dtypes(include=["object"]),
)
assert_eq(
pdf.select_dtypes(include=["int"]), gdf.select_dtypes(include=["int"])
)
assert_eq(
pdf.select_dtypes(exclude=["float"]),
gdf.select_dtypes(exclude=["float"]),
)
assert_eq(
pdf.select_dtypes(exclude=["object"]),
gdf.select_dtypes(exclude=["object"]),
)
assert_eq(
pdf.select_dtypes(include=["int"], exclude=["object"]),
gdf.select_dtypes(include=["int"], exclude=["object"]),
)
assert_exceptions_equal(
lfunc=pdf.select_dtypes,
rfunc=gdf.select_dtypes,
)
gdf = cudf.DataFrame(
{"a": cudf.Series([], dtype="int"), "b": cudf.Series([], dtype="str")}
)
pdf = gdf.to_pandas()
assert_eq(
pdf.select_dtypes(exclude=["object"]),
gdf.select_dtypes(exclude=["object"]),
)
assert_eq(
pdf.select_dtypes(include=["int"], exclude=["object"]),
gdf.select_dtypes(include=["int"], exclude=["object"]),
)
gdf = cudf.DataFrame(
{"int_col": [0, 1, 2], "list_col": [[1, 2], [3, 4], [5, 6]]}
)
pdf = gdf.to_pandas()
assert_eq(
pdf.select_dtypes("int64"),
gdf.select_dtypes("int64"),
)
def test_select_dtype_datetime():
gdf = cudf.datasets.timeseries(
start="2000-01-01", end="2000-01-02", freq="3600s", dtypes={"x": int}
)
gdf = gdf.reset_index()
pdf = gdf.to_pandas()
assert_eq(pdf.select_dtypes("datetime64"), gdf.select_dtypes("datetime64"))
assert_eq(
pdf.select_dtypes(np.dtype("datetime64")),
gdf.select_dtypes(np.dtype("datetime64")),
)
assert_eq(
pdf.select_dtypes(include="datetime64"),
gdf.select_dtypes(include="datetime64"),
)
def test_select_dtype_datetime_with_frequency():
gdf = cudf.datasets.timeseries(
start="2000-01-01", end="2000-01-02", freq="3600s", dtypes={"x": int}
)
gdf = gdf.reset_index()
pdf = gdf.to_pandas()
assert_exceptions_equal(
pdf.select_dtypes,
gdf.select_dtypes,
(["datetime64[ms]"],),
(["datetime64[ms]"],),
)
def test_dataframe_describe_exclude():
np.random.seed(12)
data_length = 10000
df = cudf.DataFrame()
df["x"] = np.random.normal(10, 1, data_length)
df["x"] = df.x.astype("int64")
df["y"] = np.random.normal(10, 1, data_length)
pdf = df.to_pandas()
with pytest.warns(FutureWarning):
gdf_results = df.describe(exclude=["float"])
pdf_results = pdf.describe(exclude=["float"])
assert_eq(gdf_results, pdf_results)
def test_dataframe_describe_include():
np.random.seed(12)
data_length = 10000
df = cudf.DataFrame()
df["x"] = np.random.normal(10, 1, data_length)
df["x"] = df.x.astype("int64")
df["y"] = np.random.normal(10, 1, data_length)
pdf = df.to_pandas()
with pytest.warns(FutureWarning):
gdf_results = df.describe(include=["int"])
pdf_results = pdf.describe(include=["int"])
assert_eq(gdf_results, pdf_results)
def test_dataframe_describe_default():
np.random.seed(12)
data_length = 10000
df = cudf.DataFrame()
df["x"] = np.random.normal(10, 1, data_length)
df["y"] = np.random.normal(10, 1, data_length)
pdf = df.to_pandas()
with pytest.warns(FutureWarning):
gdf_results = df.describe()
pdf_results = pdf.describe()
assert_eq(pdf_results, gdf_results)
def test_series_describe_include_all():
np.random.seed(12)
data_length = 10000
df = cudf.DataFrame()
df["x"] = np.random.normal(10, 1, data_length)
df["x"] = df.x.astype("int64")
df["y"] = np.random.normal(10, 1, data_length)
df["animal"] = np.random.choice(["dog", "cat", "bird"], data_length)
pdf = df.to_pandas()
with pytest.warns(FutureWarning):
gdf_results = df.describe(include="all")
pdf_results = pdf.describe(include="all")
assert_eq(gdf_results[["x", "y"]], pdf_results[["x", "y"]])
assert_eq(gdf_results.index, pdf_results.index)
assert_eq(gdf_results.columns, pdf_results.columns)
assert_eq(
gdf_results[["animal"]].fillna(-1).astype("str"),
pdf_results[["animal"]].fillna(-1).astype("str"),
)
def test_dataframe_describe_percentiles():
np.random.seed(12)
data_length = 10000
sample_percentiles = [0.0, 0.1, 0.33, 0.84, 0.4, 0.99]
df = cudf.DataFrame()
df["x"] = np.random.normal(10, 1, data_length)
df["y"] = np.random.normal(10, 1, data_length)
pdf = df.to_pandas()
with pytest.warns(FutureWarning):
gdf_results = df.describe(percentiles=sample_percentiles)
pdf_results = pdf.describe(percentiles=sample_percentiles)
assert_eq(pdf_results, gdf_results)
def test_get_numeric_data():
pdf = pd.DataFrame(
{"x": [1, 2, 3], "y": [1.0, 2.0, 3.0], "z": ["a", "b", "c"]}
)
gdf = cudf.from_pandas(pdf)
assert_eq(pdf._get_numeric_data(), gdf._get_numeric_data())
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("period", [-15, -1, 0, 1, 15])
@pytest.mark.parametrize("data_empty", [False, True])
def test_shift(dtype, period, data_empty):
    # TODO: this function currently tests Series.shift()
    # but should instead test DataFrame.shift()
if data_empty:
data = None
else:
if dtype == np.int8:
# to keep data in range
data = gen_rand(dtype, 10, low=-2, high=2)
else:
data = gen_rand(dtype, 10)
gs = cudf.DataFrame({"a": cudf.Series(data, dtype=dtype)})
ps = pd.DataFrame({"a": pd.Series(data, dtype=dtype)})
shifted_outcome = gs.a.shift(period)
expected_outcome = ps.a.shift(period)
    # pandas uses NaN to signal missing values and force-converts the
    # result columns to float types
if data_empty:
assert_eq(
shifted_outcome,
expected_outcome,
check_index_type=False,
check_dtype=False,
)
else:
assert_eq(shifted_outcome, expected_outcome, check_dtype=False)
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("period", [-1, -5, -10, -20, 0, 1, 5, 10, 20])
@pytest.mark.parametrize("data_empty", [False, True])
def test_diff(dtype, period, data_empty):
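    # Series.diff should match pandas across periods and dtypes; the cudf
    # result is cast to pandas' output dtype before comparison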
if data_empty:
data = None
else:
if dtype == np.int8:
# to keep data in range
data = gen_rand(dtype, 100000, low=-2, high=2)
else:
data = gen_rand(dtype, 100000)
gdf = cudf.DataFrame({"a": cudf.Series(data, dtype=dtype)})
pdf = pd.DataFrame({"a": pd.Series(data, dtype=dtype)})
expected_outcome = pdf.a.diff(period)
diffed_outcome = gdf.a.diff(period).astype(expected_outcome.dtype)
if data_empty:
assert_eq(diffed_outcome, expected_outcome, check_index_type=False)
else:
assert_eq(diffed_outcome, expected_outcome)
@pytest.mark.parametrize("df", _dataframe_na_data())
@pytest.mark.parametrize("nan_as_null", [True, False, None])
def test_dataframe_isnull_isna(df, nan_as_null):
gdf = cudf.DataFrame.from_pandas(df, nan_as_null=nan_as_null)
assert_eq(df.isnull(), gdf.isnull())
assert_eq(df.isna(), gdf.isna())
# Test individual columns
for col in df:
assert_eq(df[col].isnull(), gdf[col].isnull())
assert_eq(df[col].isna(), gdf[col].isna())
@pytest.mark.parametrize("df", _dataframe_na_data())
@pytest.mark.parametrize("nan_as_null", [True, False, None])
def test_dataframe_notna_notnull(df, nan_as_null):
gdf = cudf.DataFrame.from_pandas(df, nan_as_null=nan_as_null)
assert_eq(df.notnull(), gdf.notnull())
assert_eq(df.notna(), gdf.notna())
# Test individual columns
for col in df:
assert_eq(df[col].notnull(), gdf[col].notnull())
assert_eq(df[col].notna(), gdf[col].notna())
def test_ndim():
pdf = pd.DataFrame({"x": range(5), "y": range(5, 10)})
gdf = cudf.DataFrame.from_pandas(pdf)
assert pdf.ndim == gdf.ndim
assert pdf.x.ndim == gdf.x.ndim
s = pd.Series(dtype="float64")
gs = cudf.Series()
assert s.ndim == gs.ndim
@pytest.mark.parametrize(
"decimals",
[
-3,
0,
5,
pd.Series(
[1, 4, 3, -6],
index=["floats", "ints", "floats_with_nan", "floats_same"],
),
cudf.Series(
[-4, -2, 12], index=["ints", "floats_with_nan", "floats_same"]
),
{"floats": -1, "ints": 15, "floats_will_nan": 2},
],
)
def test_dataframe_round(decimals):
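    # round() accepts an int, a Series, or a dict of per-column decimals and
    # should match pandas, leaving non-numeric columns untouched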
gdf = cudf.DataFrame(
{
"floats": np.arange(0.5, 10.5, 1),
"ints": np.random.normal(-100, 100, 10),
"floats_with_na": np.array(
[
14.123,
2.343,
np.nan,
0.0,
-8.302,
np.nan,
94.313,
None,
-8.029,
np.nan,
]
),
"floats_same": np.repeat([-0.6459412758761901], 10),
"bools": np.random.choice([True, None, False], 10),
"strings": np.random.choice(["abc", "xyz", None], 10),
"struct": np.random.choice([{"abc": 1}, {"xyz": 2}, None], 10),
"list": [[1], [2], None, [4], [3]] * 2,
}
)
pdf = gdf.to_pandas()
if isinstance(decimals, cudf.Series):
pdecimals = decimals.to_pandas()
else:
pdecimals = decimals
result = gdf.round(decimals)
expected = pdf.round(pdecimals)
assert_eq(result, expected)
def test_dataframe_round_dict_decimal_validation():
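    # a non-integer decimals value in the dict should raise TypeError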
df = cudf.DataFrame({"A": [0.12], "B": [0.13]})
with pytest.raises(TypeError):
df.round({"A": 1, "B": 0.5})
@pytest.mark.parametrize(
"data",
[
[0, 1, 2, 3],
[-2, -1, 2, 3, 5],
[-2, -1, 0, 3, 5],
[True, False, False],
[True],
[False],
[],
[True, None, False],
[True, True, None],
[None, None],
[[0, 5], [1, 6], [2, 7], [3, 8], [4, 9]],
[[1, True], [2, False], [3, False]],
pytest.param(
[["a", True], ["b", False], ["c", False]],
marks=[
pytest_xfail(
reason="NotImplementedError: all does not "
"support columns of object dtype."
)
],
),
],
)
def test_all(data):
# Provide a dtype when data is empty to avoid future pandas changes.
dtype = None if data else float
    # Pandas treats `None` in object-dtype columns as True for some reason,
    # so replace it with `False` before comparing
if np.array(data).ndim <= 1:
pdata = pd.Series(data=data, dtype=dtype).replace([None], False)
gdata = cudf.Series.from_pandas(pdata)
else:
pdata = pd.DataFrame(data, columns=["a", "b"], dtype=dtype).replace(
[None], False
)
gdata = cudf.DataFrame.from_pandas(pdata)
# test bool_only
if pdata["b"].dtype == "bool":
got = gdata.all(bool_only=True)
expected = pdata.all(bool_only=True)
assert_eq(got, expected)
else:
with pytest.raises(NotImplementedError):
gdata.all(level="a")
got = gdata.all()
expected = pdata.all()
assert_eq(got, expected)
@pytest.mark.parametrize(
"data",
[
[0, 1, 2, 3],
[-2, -1, 2, 3, 5],
[-2, -1, 0, 3, 5],
[0, 0, 0, 0, 0],
[0, 0, None, 0],
[True, False, False],
[True],
[False],
[],
[True, None, False],
[True, True, None],
[None, None],
[[0, 5], [1, 6], [2, 7], [3, 8], [4, 9]],
[[1, True], [2, False], [3, False]],
pytest.param(
[["a", True], ["b", False], ["c", False]],
marks=[
pytest_xfail(
reason="NotImplementedError: any does not "
"support columns of object dtype."
)
],
),
],
)
@pytest.mark.parametrize("axis", [0, 1])
def test_any(data, axis):
# Provide a dtype when data is empty to avoid future pandas changes.
dtype = None if data else float
if np.array(data).ndim <= 1:
pdata = pd.Series(data=data, dtype=dtype)
gdata = cudf.Series(data=data, dtype=dtype)
if axis == 1:
with pytest.raises(NotImplementedError):
gdata.any(axis=axis)
else:
got = gdata.any(axis=axis)
expected = pdata.any(axis=axis)
assert_eq(got, expected)
else:
pdata = pd.DataFrame(data, columns=["a", "b"])
gdata = cudf.DataFrame.from_pandas(pdata)
# test bool_only
if pdata["b"].dtype == "bool":
got = gdata.any(bool_only=True)
expected = pdata.any(bool_only=True)
assert_eq(got, expected)
else:
with pytest.raises(NotImplementedError):
gdata.any(level="a")
got = gdata.any(axis=axis)
expected = pdata.any(axis=axis)
assert_eq(got, expected)
@pytest.mark.parametrize("axis", [0, 1])
def test_empty_dataframe_any(axis):
pdf = pd.DataFrame({}, columns=["a", "b"], dtype=float)
gdf = cudf.DataFrame.from_pandas(pdf)
got = gdf.any(axis=axis)
expected = pdf.any(axis=axis)
assert_eq(got, expected, check_index_type=False)
@pytest_unmark_spilling
@pytest.mark.parametrize("a", [[], ["123"]])
@pytest.mark.parametrize("b", ["123", ["123"]])
@pytest.mark.parametrize(
"misc_data",
["123", ["123"] * 20, 123, [1, 2, 0.8, 0.9] * 50, 0.9, 0.00001],
)
@pytest.mark.parametrize("non_list_data", [123, "abc", "zyx", "rapids", 0.8])
def test_create_dataframe_cols_empty_data(a, b, misc_data, non_list_data):
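    # adding a new column of scalar or list data to a (possibly empty)
    # DataFrame should broadcast the same way as pandas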
expected = pd.DataFrame({"a": a})
actual = cudf.DataFrame.from_pandas(expected)
expected["b"] = b
actual["b"] = b
assert_eq(actual, expected)
expected = pd.DataFrame({"a": []})
actual = cudf.DataFrame.from_pandas(expected)
expected["b"] = misc_data
actual["b"] = misc_data
assert_eq(actual, expected)
expected = pd.DataFrame({"a": a})
actual = cudf.DataFrame.from_pandas(expected)
expected["b"] = non_list_data
actual["b"] = non_list_data
assert_eq(actual, expected)
def test_empty_dataframe_describe():
pdf = pd.DataFrame({"a": [], "b": []})
gdf = cudf.from_pandas(pdf)
expected = pdf.describe()
with pytest.warns(FutureWarning):
actual = gdf.describe()
assert_eq(expected, actual)
def test_as_column_types():
col = column.as_column(cudf.Series([], dtype="float64"))
assert_eq(col.dtype, np.dtype("float64"))
gds = cudf.Series(col)
pds = pd.Series(pd.Series([], dtype="float64"))
assert_eq(pds, gds)
col = column.as_column(cudf.Series([], dtype="float64"), dtype="float32")
assert_eq(col.dtype, np.dtype("float32"))
gds = cudf.Series(col)
pds = pd.Series(pd.Series([], dtype="float32"))
assert_eq(pds, gds)
col = column.as_column(cudf.Series([], dtype="float64"), dtype="str")
assert_eq(col.dtype, np.dtype("object"))
gds = cudf.Series(col)
pds = pd.Series(pd.Series([], dtype="str"))
assert_eq(pds, gds)
col = column.as_column(cudf.Series([], dtype="float64"), dtype="object")
assert_eq(col.dtype, np.dtype("object"))
gds = cudf.Series(col)
pds = pd.Series(pd.Series([], dtype="object"))
assert_eq(pds, gds)
pds = pd.Series(np.array([1, 2, 3]), dtype="float32")
gds = cudf.Series(column.as_column(np.array([1, 2, 3]), dtype="float32"))
assert_eq(pds, gds)
pds = pd.Series([1, 2, 3], dtype="float32")
gds = cudf.Series([1, 2, 3], dtype="float32")
assert_eq(pds, gds)
pds = pd.Series([], dtype="float64")
gds = cudf.Series(column.as_column(pds))
assert_eq(pds, gds)
pds = pd.Series([1, 2, 4], dtype="int64")
gds = cudf.Series(column.as_column(cudf.Series([1, 2, 4]), dtype="int64"))
assert_eq(pds, gds)
pds = pd.Series([1.2, 18.0, 9.0], dtype="float32")
gds = cudf.Series(
column.as_column(cudf.Series([1.2, 18.0, 9.0]), dtype="float32")
)
assert_eq(pds, gds)
pds = pd.Series([1.2, 18.0, 9.0], dtype="str")
gds = cudf.Series(
column.as_column(cudf.Series([1.2, 18.0, 9.0]), dtype="str")
)
assert_eq(pds, gds)
pds = pd.Series(pd.Index(["1", "18", "9"]), dtype="int")
gds = cudf.Series(cudf.Index(["1", "18", "9"]), dtype="int")
assert_eq(pds, gds)
def test_one_row_head():
gdf = cudf.DataFrame({"name": ["carl"], "score": [100]}, index=[123])
pdf = gdf.to_pandas()
head_gdf = gdf.head()
head_pdf = pdf.head()
assert_eq(head_pdf, head_gdf)
@pytest.mark.parametrize("dtype", ALL_TYPES)
@pytest.mark.parametrize(
"np_dtype,pd_dtype",
[
tuple(item)
for item in cudf.utils.dtypes.np_dtypes_to_pandas_dtypes.items()
],
)
def test_series_astype_pandas_nullable(dtype, np_dtype, pd_dtype):
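    # casting to a pandas nullable extension dtype should give the same
    # result as casting to the corresponding numpy dtype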
source = cudf.Series([0, 1, None], dtype=dtype)
expect = source.astype(np_dtype)
got = source.astype(pd_dtype)
assert_eq(expect, got)
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("as_dtype", NUMERIC_TYPES)
def test_series_astype_numeric_to_numeric(dtype, as_dtype):
psr = pd.Series([1, 2, 4, 3], dtype=dtype)
gsr = cudf.from_pandas(psr)
assert_eq(psr.astype(as_dtype), gsr.astype(as_dtype))
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("as_dtype", NUMERIC_TYPES)
def test_series_astype_numeric_to_numeric_nulls(dtype, as_dtype):
data = [1, 2, None, 3]
sr = cudf.Series(data, dtype=dtype)
got = sr.astype(as_dtype)
expect = cudf.Series([1, 2, None, 3], dtype=as_dtype)
assert_eq(expect, got)
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize(
"as_dtype",
[
"str",
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
],
)
def test_series_astype_numeric_to_other(dtype, as_dtype):
psr = pd.Series([1, 2, 3], dtype=dtype)
gsr = cudf.from_pandas(psr)
assert_eq(psr.astype(as_dtype), gsr.astype(as_dtype))
@pytest.mark.parametrize(
"as_dtype",
[
"str",
"int32",
"uint32",
"float32",
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
],
)
def test_series_astype_string_to_other(as_dtype):
if "datetime64" in as_dtype:
data = ["2001-01-01", "2002-02-02", "2000-01-05"]
else:
data = ["1", "2", "3"]
psr = pd.Series(data)
gsr = cudf.from_pandas(psr)
assert_eq(psr.astype(as_dtype), gsr.astype(as_dtype))
@pytest.mark.parametrize(
"as_dtype",
[
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
],
)
def test_series_astype_datetime_to_other(as_dtype):
data = ["2001-01-01", "2002-02-02", "2001-01-05"]
psr = pd.Series(data)
gsr = cudf.from_pandas(psr)
assert_eq(psr.astype(as_dtype), gsr.astype(as_dtype))
@pytest.mark.parametrize(
"inp",
[
("datetime64[ns]", "2011-01-01 00:00:00.000000000"),
("datetime64[us]", "2011-01-01 00:00:00.000000"),
("datetime64[ms]", "2011-01-01 00:00:00.000"),
("datetime64[s]", "2011-01-01 00:00:00"),
],
)
def test_series_astype_datetime_to_string(inp):
dtype, expect = inp
base_date = "2011-01-01"
sr = cudf.Series([base_date], dtype=dtype)
got = sr.astype(str)[0]
assert expect == got
@pytest.mark.parametrize(
"as_dtype",
[
"int32",
"uint32",
"float32",
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
"str",
],
)
def test_series_astype_categorical_to_other(as_dtype):
if "datetime64" in as_dtype:
data = ["2001-01-01", "2002-02-02", "2000-01-05", "2001-01-01"]
else:
data = [1, 2, 3, 1]
psr = pd.Series(data, dtype="category")
gsr = cudf.from_pandas(psr)
assert_eq(psr.astype(as_dtype), gsr.astype(as_dtype))
@pytest.mark.parametrize("ordered", [True, False])
def test_series_astype_to_categorical_ordered(ordered):
psr = pd.Series([1, 2, 3, 1], dtype="category")
gsr = cudf.from_pandas(psr)
ordered_dtype_pd = pd.CategoricalDtype(
categories=[1, 2, 3], ordered=ordered
)
ordered_dtype_gd = cudf.CategoricalDtype.from_pandas(ordered_dtype_pd)
assert_eq(
psr.astype("int32").astype(ordered_dtype_pd).astype("int32"),
gsr.astype("int32").astype(ordered_dtype_gd).astype("int32"),
)
@pytest.mark.parametrize("ordered", [True, False])
def test_series_astype_cat_ordered_to_unordered(ordered):
pd_dtype = pd.CategoricalDtype(categories=[1, 2, 3], ordered=ordered)
pd_to_dtype = pd.CategoricalDtype(
categories=[1, 2, 3], ordered=not ordered
)
gd_dtype = cudf.CategoricalDtype.from_pandas(pd_dtype)
gd_to_dtype = cudf.CategoricalDtype.from_pandas(pd_to_dtype)
psr = pd.Series([1, 2, 3], dtype=pd_dtype)
gsr = cudf.Series([1, 2, 3], dtype=gd_dtype)
expect = psr.astype(pd_to_dtype)
got = gsr.astype(gd_to_dtype)
assert_eq(expect, got)
def test_series_astype_null_cases():
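    # nulls should survive casts between numeric, categorical, string and
    # datetime dtypes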
data = [1, 2, None, 3]
# numerical to other
assert_eq(cudf.Series(data, dtype="str"), cudf.Series(data).astype("str"))
assert_eq(
cudf.Series(data, dtype="category"),
cudf.Series(data).astype("category"),
)
assert_eq(
cudf.Series(data, dtype="float32"),
cudf.Series(data, dtype="int32").astype("float32"),
)
assert_eq(
cudf.Series(data, dtype="float32"),
cudf.Series(data, dtype="uint32").astype("float32"),
)
assert_eq(
cudf.Series(data, dtype="datetime64[ms]"),
cudf.Series(data).astype("datetime64[ms]"),
)
# categorical to other
assert_eq(
cudf.Series(data, dtype="str"),
cudf.Series(data, dtype="category").astype("str"),
)
assert_eq(
cudf.Series(data, dtype="float32"),
cudf.Series(data, dtype="category").astype("float32"),
)
assert_eq(
cudf.Series(data, dtype="datetime64[ms]"),
cudf.Series(data, dtype="category").astype("datetime64[ms]"),
)
# string to other
assert_eq(
cudf.Series([1, 2, None, 3], dtype="int32"),
cudf.Series(["1", "2", None, "3"]).astype("int32"),
)
assert_eq(
cudf.Series(
["2001-01-01", "2001-02-01", None, "2001-03-01"],
dtype="datetime64[ms]",
),
cudf.Series(["2001-01-01", "2001-02-01", None, "2001-03-01"]).astype(
"datetime64[ms]"
),
)
assert_eq(
cudf.Series(["a", "b", "c", None], dtype="category").to_pandas(),
cudf.Series(["a", "b", "c", None]).astype("category").to_pandas(),
)
# datetime to other
data = [
"2001-01-01 00:00:00.000000",
"2001-02-01 00:00:00.000000",
None,
"2001-03-01 00:00:00.000000",
]
assert_eq(
cudf.Series(data),
cudf.Series(data, dtype="datetime64[us]").astype("str"),
)
assert_eq(
pd.Series(data, dtype="datetime64[ns]").astype("category"),
cudf.from_pandas(pd.Series(data, dtype="datetime64[ns]")).astype(
"category"
),
)
def test_series_astype_null_categorical():
sr = cudf.Series([None, None, None], dtype="category")
expect = cudf.Series([None, None, None], dtype="int32")
got = sr.astype("int32")
assert_eq(expect, got)
@pytest.mark.parametrize(
"data",
[
(
pd.Series([3, 3.0]),
pd.Series([2.3, 3.9]),
pd.Series([1.5, 3.9]),
pd.Series([1.0, 2]),
),
[
pd.Series([3, 3.0]),
pd.Series([2.3, 3.9]),
pd.Series([1.5, 3.9]),
pd.Series([1.0, 2]),
],
],
)
def test_create_dataframe_from_list_like(data):
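    # construction from a tuple or list of Series should match pandas,
    # with and without an explicit index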
pdf = pd.DataFrame(data, index=["count", "mean", "std", "min"])
gdf = cudf.DataFrame(data, index=["count", "mean", "std", "min"])
assert_eq(pdf, gdf)
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)
assert_eq(pdf, gdf)
def test_create_dataframe_column():
pdf = pd.DataFrame(columns=["a", "b", "c"], index=["A", "Z", "X"])
gdf = cudf.DataFrame(columns=["a", "b", "c"], index=["A", "Z", "X"])
assert_eq(pdf, gdf)
pdf = pd.DataFrame(
{"a": [1, 2, 3], "b": [2, 3, 5]},
columns=["a", "b", "c"],
index=["A", "Z", "X"],
)
gdf = cudf.DataFrame(
{"a": [1, 2, 3], "b": [2, 3, 5]},
columns=["a", "b", "c"],
index=["A", "Z", "X"],
)
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
"data",
[
pd.DataFrame(np.eye(2)),
cudf.DataFrame(np.eye(2)),
np.eye(2),
cupy.eye(2),
None,
[[1, 0], [0, 1]],
[cudf.Series([0, 1]), cudf.Series([1, 0])],
],
)
@pytest.mark.parametrize(
"columns",
[None, range(2), pd.RangeIndex(2), cudf.RangeIndex(2)],
)
def test_dataframe_columns_returns_rangeindex(data, columns):
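    # .columns should come back as a RangeIndex for these inputs,
    # matching pandas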
if data is None and columns is None:
pytest.skip(f"{data=} and {columns=} not relevant.")
result = cudf.DataFrame(data=data, columns=columns).columns
expected = pd.RangeIndex(range(2))
assert_eq(result, expected)
def test_dataframe_columns_returns_rangeindex_single_col():
result = cudf.DataFrame([1, 2, 3]).columns
expected = pd.RangeIndex(range(1))
assert_eq(result, expected)
@pytest.mark.parametrize("dtype", ["int64", "datetime64[ns]", "int8"])
@pytest.mark.parametrize("idx_data", [[], [1, 2]])
@pytest.mark.parametrize("data", [None, [], {}])
def test_dataframe_columns_empty_data_preserves_dtype(dtype, idx_data, data):
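    # an explicit columns Index should keep its dtype even when the frame
    # holds no data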
result = cudf.DataFrame(
data, columns=cudf.Index(idx_data, dtype=dtype)
).columns
expected = pd.Index(idx_data, dtype=dtype)
assert_eq(result, expected)
@pytest.mark.parametrize(
"data",
[
[1, 2, 4],
[],
[5.0, 7.0, 8.0],
pd.Categorical(["a", "b", "c"]),
["m", "a", "d", "v"],
],
)
def test_series_values_host_property(data):
pds = pd.Series(data=data, dtype=None if data else float)
gds = _create_cudf_series_float64_default(data)
np.testing.assert_array_equal(pds.values, gds.values_host)
@pytest.mark.parametrize(
"data",
[
[1, 2, 4],
[],
[5.0, 7.0, 8.0],
pytest.param(
pd.Categorical(["a", "b", "c"]),
marks=pytest_xfail(raises=NotImplementedError),
),
pytest.param(
["m", "a", "d", "v"],
marks=pytest_xfail(raises=TypeError),
),
],
)
def test_series_values_property(data):
pds = pd.Series(data=data, dtype=None if data else float)
gds = _create_cudf_series_float64_default(data)
gds_vals = gds.values
assert isinstance(gds_vals, cupy.ndarray)
np.testing.assert_array_equal(gds_vals.get(), pds.values)
@pytest.mark.parametrize(
"data",
[
{"A": [1, 2, 3], "B": [4, 5, 6]},
{"A": [1.0, 2.0, 3.0], "B": [4.0, 5.0, 6.0]},
{"A": [1, 2, 3], "B": [1.0, 2.0, 3.0]},
{"A": np.float32(np.arange(3)), "B": np.float64(np.arange(3))},
pytest.param(
{"A": [1, None, 3], "B": [1, 2, None]},
marks=pytest_xfail(
reason="Nulls not supported by values accessor"
),
),
pytest.param(
{"A": [None, None, None], "B": [None, None, None]},
marks=pytest_xfail(
reason="Nulls not supported by values accessor"
),
),
{"A": [], "B": []},
pytest.param(
{"A": [1, 2, 3], "B": ["a", "b", "c"]},
marks=pytest_xfail(
reason="str or categorical not supported by values accessor"
),
),
pytest.param(
{"A": pd.Categorical(["a", "b", "c"]), "B": ["d", "e", "f"]},
marks=pytest_xfail(
reason="str or categorical not supported by values accessor"
),
),
],
)
def test_df_values_property(data):
pdf = pd.DataFrame.from_dict(data)
gdf = cudf.DataFrame.from_pandas(pdf)
pmtr = pdf.values
gmtr = gdf.values.get()
np.testing.assert_array_equal(pmtr, gmtr)
def test_numeric_alpha_value_counts():
pdf = pd.DataFrame(
{
"numeric": [1, 2, 3, 4, 5, 6, 1, 2, 4] * 10,
"alpha": ["u", "h", "d", "a", "m", "u", "h", "d", "a"] * 10,
}
)
gdf = cudf.DataFrame(
{
"numeric": [1, 2, 3, 4, 5, 6, 1, 2, 4] * 10,
"alpha": ["u", "h", "d", "a", "m", "u", "h", "d", "a"] * 10,
}
)
assert_eq(
pdf.numeric.value_counts().sort_index(),
gdf.numeric.value_counts().sort_index(),
check_dtype=False,
)
assert_eq(
pdf.alpha.value_counts().sort_index(),
gdf.alpha.value_counts().sort_index(),
check_dtype=False,
)
@pytest.mark.parametrize(
"data",
[
pd.DataFrame(
{
"num_legs": [2, 4],
"num_wings": [2, 0],
"bird_cats": pd.Series(
["sparrow", "pigeon"],
dtype="category",
index=["falcon", "dog"],
),
},
index=["falcon", "dog"],
),
pd.DataFrame(
{"num_legs": [8, 2], "num_wings": [0, 2]},
index=["spider", "falcon"],
),
pd.DataFrame(
{
"num_legs": [8, 2, 1, 0, 2, 4, 5],
"num_wings": [2, 0, 2, 1, 2, 4, -1],
}
),
pd.DataFrame({"a": ["a", "b", "c"]}, dtype="category"),
pd.DataFrame({"a": ["a", "b", "c"]}),
],
)
@pytest.mark.parametrize(
"values",
[
[0, 2],
{"num_wings": [0, 3]},
pd.DataFrame(
{"num_legs": [8, 2], "num_wings": [0, 2]},
index=["spider", "falcon"],
),
pd.DataFrame(
{
"num_legs": [2, 4],
"num_wings": [2, 0],
"bird_cats": pd.Series(
["sparrow", "pigeon"],
dtype="category",
index=["falcon", "dog"],
),
},
index=["falcon", "dog"],
),
["sparrow", "pigeon"],
pd.Series(["sparrow", "pigeon"], dtype="category"),
pd.Series([1, 2, 3, 4, 5]),
"abc",
123,
pd.Series(["a", "b", "c"]),
pd.Series(["a", "b", "c"], dtype="category"),
pd.DataFrame({"a": ["a", "b", "c"]}, dtype="category"),
],
)
def test_isin_dataframe(data, values):
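    # isin should match pandas for list, dict, Series and DataFrame values;
    # scalar values raise in both libraries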
pdf = data
gdf = cudf.from_pandas(pdf)
if cudf.api.types.is_scalar(values):
assert_exceptions_equal(
lfunc=pdf.isin,
rfunc=gdf.isin,
lfunc_args_and_kwargs=([values],),
rfunc_args_and_kwargs=([values],),
)
else:
try:
expected = pdf.isin(values)
except TypeError as e:
# Can't do isin with different categories
if str(e) == (
"Categoricals can only be compared if 'categories' "
"are the same."
):
return
if isinstance(values, (pd.DataFrame, pd.Series)):
values = cudf.from_pandas(values)
got = gdf.isin(values)
assert_eq(got, expected)
def test_isin_axis_duplicated_error():
df = cudf.DataFrame(range(2))
with pytest.raises(ValueError):
df.isin(cudf.Series(range(2), index=[1, 1]))
with pytest.raises(ValueError):
df.isin(cudf.DataFrame(range(2), index=[1, 1]))
with pytest.raises(ValueError):
df.isin(cudf.DataFrame([[1, 2]], columns=[1, 1]))
def test_constructor_properties():
df = cudf.DataFrame()
key1 = "a"
key2 = "b"
val1 = np.array([123], dtype=np.float64)
val2 = np.array([321], dtype=np.float64)
df[key1] = val1
df[key2] = val2
# Correct use of _constructor_sliced (for DataFrame)
assert_eq(df[key1], df._constructor_sliced(val1, name=key1))
# Correct use of _constructor_expanddim (for cudf.Series)
assert_eq(df, df[key2]._constructor_expanddim({key1: val1, key2: val2}))
# Incorrect use of _constructor_sliced (Raises for cudf.Series)
with pytest.raises(NotImplementedError):
df[key1]._constructor_sliced
# Incorrect use of _constructor_expanddim (Raises for DataFrame)
with pytest.raises(NotImplementedError):
df._constructor_expanddim
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("as_dtype", ALL_TYPES)
def test_df_astype_numeric_to_all(dtype, as_dtype):
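    # casting a whole DataFrame should behave like casting each column's
    # Series individually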
if "uint" in dtype:
data = [1, 2, None, 4, 7]
elif "int" in dtype or "longlong" in dtype:
data = [1, 2, None, 4, -7]
elif "float" in dtype:
data = [1.0, 2.0, None, 4.0, np.nan, -7.0]
gdf = cudf.DataFrame()
gdf["foo"] = cudf.Series(data, dtype=dtype)
gdf["bar"] = cudf.Series(data, dtype=dtype)
insert_data = cudf.Series(data, dtype=dtype)
expect = cudf.DataFrame()
expect["foo"] = insert_data.astype(as_dtype)
expect["bar"] = insert_data.astype(as_dtype)
got = gdf.astype(as_dtype)
assert_eq(expect, got)
@pytest.mark.parametrize(
"as_dtype",
[
"int32",
"float32",
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
],
)
def test_df_astype_string_to_other(as_dtype):
if "datetime64" in as_dtype:
# change None to "NaT" after this issue is fixed:
# https://github.com/rapidsai/cudf/issues/5117
data = ["2001-01-01", "2002-02-02", "2000-01-05", None]
elif as_dtype == "int32":
data = [1, 2, 3]
elif as_dtype == "category":
data = ["1", "2", "3", None]
elif "float" in as_dtype:
data = [1.0, 2.0, 3.0, np.nan]
insert_data = cudf.Series.from_pandas(pd.Series(data, dtype="str"))
expect_data = cudf.Series(data, dtype=as_dtype)
gdf = cudf.DataFrame()
expect = cudf.DataFrame()
gdf["foo"] = insert_data
gdf["bar"] = insert_data
expect["foo"] = expect_data
expect["bar"] = expect_data
got = gdf.astype(as_dtype)
assert_eq(expect, got)
@pytest.mark.parametrize(
"as_dtype",
[
"int64",
"datetime64[s]",
"datetime64[us]",
"datetime64[ns]",
"str",
"category",
],
)
def test_df_astype_datetime_to_other(as_dtype):
data = [
"1991-11-20 00:00:00.000",
"2004-12-04 00:00:00.000",
"2016-09-13 00:00:00.000",
None,
]
gdf = cudf.DataFrame()
expect = cudf.DataFrame()
gdf["foo"] = cudf.Series(data, dtype="datetime64[ms]")
gdf["bar"] = cudf.Series(data, dtype="datetime64[ms]")
if as_dtype == "int64":
expect["foo"] = cudf.Series(
[690595200000, 1102118400000, 1473724800000, None], dtype="int64"
)
expect["bar"] = cudf.Series(
[690595200000, 1102118400000, 1473724800000, None], dtype="int64"
)
elif as_dtype == "str":
expect["foo"] = cudf.Series(data, dtype="str")
expect["bar"] = cudf.Series(data, dtype="str")
elif as_dtype == "category":
expect["foo"] = cudf.Series(gdf["foo"], dtype="category")
expect["bar"] = cudf.Series(gdf["bar"], dtype="category")
else:
expect["foo"] = cudf.Series(data, dtype=as_dtype)
expect["bar"] = cudf.Series(data, dtype=as_dtype)
got = gdf.astype(as_dtype)
assert_eq(expect, got)
@pytest.mark.parametrize(
"as_dtype",
[
"int32",
"float32",
"category",
"datetime64[s]",
"datetime64[ms]",
"datetime64[us]",
"datetime64[ns]",
"str",
],
)
def test_df_astype_categorical_to_other(as_dtype):
if "datetime64" in as_dtype:
data = ["2001-01-01", "2002-02-02", "2000-01-05", "2001-01-01"]
else:
data = [1, 2, 3, 1]
psr = pd.Series(data, dtype="category")
pdf = pd.DataFrame()
pdf["foo"] = psr
pdf["bar"] = psr
gdf = cudf.DataFrame.from_pandas(pdf)
assert_eq(pdf.astype(as_dtype), gdf.astype(as_dtype))
@pytest.mark.parametrize("ordered", [True, False])
def test_df_astype_to_categorical_ordered(ordered):
psr = pd.Series([1, 2, 3, 1], dtype="category")
pdf = pd.DataFrame()
pdf["foo"] = psr
pdf["bar"] = psr
gdf = cudf.DataFrame.from_pandas(pdf)
ordered_dtype_pd = pd.CategoricalDtype(
categories=[1, 2, 3], ordered=ordered
)
ordered_dtype_gd = cudf.CategoricalDtype.from_pandas(ordered_dtype_pd)
assert_eq(
pdf.astype(ordered_dtype_pd).astype("int32"),
gdf.astype(ordered_dtype_gd).astype("int32"),
)
@pytest.mark.parametrize(
"dtype,args",
[(dtype, {}) for dtype in ALL_TYPES]
+ [("category", {"ordered": True}), ("category", {"ordered": False})],
)
def test_empty_df_astype(dtype, args):
df = cudf.DataFrame()
kwargs = {}
kwargs.update(args)
assert_eq(df, df.astype(dtype=dtype, **kwargs))
@pytest.mark.parametrize(
"errors",
[
pytest.param(
"raise", marks=pytest_xfail(reason="should raise error here")
),
pytest.param("other", marks=pytest_xfail(raises=ValueError)),
"ignore",
],
)
def test_series_astype_error_handling(errors):
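    # with errors="ignore" an invalid cast returns the original data
    # unchanged; the other error modes are expected to fail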
sr = cudf.Series(["random", "words"])
got = sr.astype("datetime64", errors=errors)
assert_eq(sr, got)
@pytest.mark.parametrize("dtype", ALL_TYPES)
def test_df_constructor_dtype(dtype):
if "datetime" in dtype:
data = ["1991-11-20", "2004-12-04", "2016-09-13", None]
elif dtype == "str":
data = ["a", "b", "c", None]
elif "float" in dtype:
data = [1.0, 0.5, -1.1, np.nan, None]
elif "bool" in dtype:
data = [True, False, None]
else:
data = [1, 2, 3, None]
sr = cudf.Series(data, dtype=dtype)
expect = cudf.DataFrame()
expect["foo"] = sr
expect["bar"] = sr
got = cudf.DataFrame({"foo": data, "bar": data}, dtype=dtype)
assert_eq(expect, got)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data",
[
cudf.datasets.randomdata(
nrows=10, dtypes={"a": "category", "b": int, "c": float, "d": int}
),
cudf.datasets.randomdata(
nrows=10, dtypes={"a": "category", "b": int, "c": float, "d": str}
),
cudf.datasets.randomdata(
nrows=10, dtypes={"a": bool, "b": int, "c": float, "d": str}
),
cudf.DataFrame(),
cudf.DataFrame({"a": [0, 1, 2], "b": [1, None, 3]}),
cudf.DataFrame(
{
"a": [1, 2, 3, 4],
"b": [7, np.NaN, 9, 10],
"c": [np.NaN, np.NaN, np.NaN, np.NaN],
"d": cudf.Series([None, None, None, None], dtype="int64"),
"e": [100, None, 200, None],
"f": cudf.Series([10, None, np.NaN, 11], nan_as_null=False),
}
),
cudf.DataFrame(
{
"a": [10, 11, 12, 13, 14, 15],
"b": cudf.Series(
[10, None, np.NaN, 2234, None, np.NaN], nan_as_null=False
),
}
),
],
)
@pytest.mark.parametrize(
"op", ["max", "min", "sum", "product", "mean", "var", "std"]
)
@pytest.mark.parametrize("skipna", [True, False])
def test_rowwise_ops(data, op, skipna):
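    # row-wise reductions should match pandas; both libraries are expected
    # to warn when non-numeric columns are present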
gdf = data
pdf = gdf.to_pandas()
kwargs = {"axis": 1, "skipna": skipna}
if op in ("var", "std"):
kwargs["ddof"] = 0
with expect_warning_if(
not all(
(
(pdf[column].count() == 0)
if skipna
else (pdf[column].notna().count() == 0)
)
or cudf.api.types.is_numeric_dtype(pdf[column].dtype)
or cudf.api.types.is_bool_dtype(pdf[column].dtype)
for column in pdf
)
):
expected = getattr(pdf, op)(**kwargs)
with expect_warning_if(
not all(
cudf.api.types.is_numeric_dtype(gdf[column].dtype)
or cudf.api.types.is_bool_dtype(gdf[column].dtype)
for column in gdf
),
UserWarning,
):
got = getattr(gdf, op)(**kwargs)
assert_eq(expected, got, check_exact=False)
@pytest.mark.parametrize(
"op", ["max", "min", "sum", "product", "mean", "var", "std"]
)
def test_rowwise_ops_nullable_dtypes_all_null(op):
gdf = cudf.DataFrame(
{
"a": [1, 2, 3, 4],
"b": [7, np.NaN, 9, 10],
"c": cudf.Series([np.NaN, np.NaN, np.NaN, np.NaN], dtype=float),
"d": cudf.Series([None, None, None, None], dtype="int64"),
"e": [100, None, 200, None],
"f": cudf.Series([10, None, np.NaN, 11], nan_as_null=False),
}
)
expected = cudf.Series([None, None, None, None], dtype="float64")
if op in ("var", "std"):
got = getattr(gdf, op)(axis=1, ddof=0, skipna=False)
else:
got = getattr(gdf, op)(axis=1, skipna=False)
assert_eq(got.null_count, expected.null_count)
assert_eq(got, expected)
@pytest.mark.parametrize(
"op,expected",
[
(
"max",
cudf.Series(
[10.0, None, np.NaN, 2234.0, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"min",
cudf.Series(
[10.0, None, np.NaN, 13.0, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"sum",
cudf.Series(
[20.0, None, np.NaN, 2247.0, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"product",
cudf.Series(
[100.0, None, np.NaN, 29042.0, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"mean",
cudf.Series(
[10.0, None, np.NaN, 1123.5, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"var",
cudf.Series(
[0.0, None, np.NaN, 1233210.25, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
(
"std",
cudf.Series(
[0.0, None, np.NaN, 1110.5, None, np.NaN],
dtype="float64",
nan_as_null=False,
),
),
],
)
def test_rowwise_ops_nullable_dtypes_partial_null(op, expected):
gdf = cudf.DataFrame(
{
"a": [10, 11, 12, 13, 14, 15],
"b": cudf.Series(
[10, None, np.NaN, 2234, None, np.NaN],
nan_as_null=False,
),
}
)
if op in ("var", "std"):
got = getattr(gdf, op)(axis=1, ddof=0, skipna=False)
else:
got = getattr(gdf, op)(axis=1, skipna=False)
assert_eq(got.null_count, expected.null_count)
assert_eq(got, expected)
@pytest.mark.parametrize(
"op,expected",
[
(
"max",
cudf.Series(
[10, None, None, 2234, None, 453],
dtype="int64",
),
),
(
"min",
cudf.Series(
[10, None, None, 13, None, 15],
dtype="int64",
),
),
(
"sum",
cudf.Series(
[20, None, None, 2247, None, 468],
dtype="int64",
),
),
(
"product",
cudf.Series(
[100, None, None, 29042, None, 6795],
dtype="int64",
),
),
(
"mean",
cudf.Series(
[10.0, None, None, 1123.5, None, 234.0],
dtype="float32",
),
),
(
"var",
cudf.Series(
[0.0, None, None, 1233210.25, None, 47961.0],
dtype="float32",
),
),
(
"std",
cudf.Series(
[0.0, None, None, 1110.5, None, 219.0],
dtype="float32",
),
),
],
)
def test_rowwise_ops_nullable_int_dtypes(op, expected):
gdf = cudf.DataFrame(
{
"a": [10, 11, None, 13, None, 15],
"b": cudf.Series(
[10, None, 323, 2234, None, 453],
nan_as_null=False,
),
}
)
if op in ("var", "std"):
got = getattr(gdf, op)(axis=1, ddof=0, skipna=False)
else:
got = getattr(gdf, op)(axis=1, skipna=False)
assert_eq(got.null_count, expected.null_count)
assert_eq(got, expected)
@pytest.mark.parametrize(
"data",
[
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"t2": cudf.Series(
["1940-08-31 06:00:00", "2020-08-02 10:00:00"], dtype="<M8[ms]"
),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"t2": cudf.Series(
["1940-08-31 06:00:00", "2020-08-02 10:00:00"], dtype="<M8[ns]"
),
"t3": cudf.Series(
["1960-08-31 06:00:00", "2030-08-02 10:00:00"], dtype="<M8[s]"
),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"t2": cudf.Series(
["1940-08-31 06:00:00", "2020-08-02 10:00:00"], dtype="<M8[us]"
),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"t2": cudf.Series(
["1940-08-31 06:00:00", "2020-08-02 10:00:00"], dtype="<M8[ms]"
),
"i1": cudf.Series([1001, 2002], dtype="int64"),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"t2": cudf.Series(["1940-08-31 06:00:00", None], dtype="<M8[ms]"),
"i1": cudf.Series([1001, 2002], dtype="int64"),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"i1": cudf.Series([1001, 2002], dtype="int64"),
"f1": cudf.Series([-100.001, 123.456], dtype="float64"),
},
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"], dtype="<M8[ms]"
),
"i1": cudf.Series([1001, 2002], dtype="int64"),
"f1": cudf.Series([-100.001, 123.456], dtype="float64"),
"b1": cudf.Series([True, False], dtype="bool"),
},
],
)
@pytest.mark.parametrize("op", ["max", "min"])
@pytest.mark.parametrize("skipna", [True, False])
def test_rowwise_ops_datetime_dtypes(data, op, skipna):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
with expect_warning_if(
not all(cudf.api.types.is_datetime64_dtype(dt) for dt in gdf.dtypes),
UserWarning,
):
got = getattr(gdf, op)(axis=1, skipna=skipna)
with expect_warning_if(
not all(pd.api.types.is_datetime64_dtype(dt) for dt in gdf.dtypes),
FutureWarning,
):
expected = getattr(pdf, op)(axis=1, skipna=skipna)
assert_eq(got, expected)
@pytest.mark.parametrize(
"data,op,skipna",
[
(
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"],
dtype="<M8[ms]",
),
"t2": cudf.Series(
["1940-08-31 06:00:00", None], dtype="<M8[ms]"
),
},
"max",
True,
),
(
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"],
dtype="<M8[ms]",
),
"t2": cudf.Series(
["1940-08-31 06:00:00", None], dtype="<M8[ms]"
),
},
"min",
False,
),
(
{
"t1": cudf.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"],
dtype="<M8[ms]",
),
"t2": cudf.Series(
["1940-08-31 06:00:00", None], dtype="<M8[ms]"
),
},
"min",
True,
),
],
)
def test_rowwise_ops_datetime_dtypes_2(data, op, skipna):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
got = getattr(gdf, op)(axis=1, skipna=skipna)
expected = getattr(pdf, op)(axis=1, skipna=skipna)
assert_eq(got, expected)
@pytest.mark.parametrize(
"data",
[
(
{
"t1": pd.Series(
["2020-08-01 09:00:00", "1920-05-01 10:30:00"],
dtype="<M8[ns]",
),
"t2": pd.Series(
["1940-08-31 06:00:00", pd.NaT], dtype="<M8[ns]"
),
}
)
],
)
def test_rowwise_ops_datetime_dtypes_pdbug(data):
pdf = pd.DataFrame(data)
gdf = cudf.from_pandas(pdf)
expected = pdf.max(axis=1, skipna=False)
got = gdf.max(axis=1, skipna=False)
assert_eq(got, expected)
@pytest.mark.parametrize(
"data",
[
[5.0, 6.0, 7.0],
"single value",
np.array(1, dtype="int64"),
np.array(0.6273643, dtype="float64"),
],
)
def test_insert(data):
pdf = pd.DataFrame.from_dict({"A": [1, 2, 3], "B": ["a", "b", "c"]})
gdf = cudf.DataFrame.from_pandas(pdf)
# insertion by index
pdf.insert(0, "foo", data)
gdf.insert(0, "foo", data)
assert_eq(pdf, gdf)
pdf.insert(3, "bar", data)
gdf.insert(3, "bar", data)
assert_eq(pdf, gdf)
pdf.insert(1, "baz", data)
gdf.insert(1, "baz", data)
assert_eq(pdf, gdf)
# pandas insert doesn't support negative indexing
pdf.insert(len(pdf.columns), "qux", data)
gdf.insert(-1, "qux", data)
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
"data",
[{"A": [1, 2, 3], "B": ["a", "b", "c"]}],
)
def test_insert_NA(data):
pdf = pd.DataFrame.from_dict(data)
gdf = cudf.DataFrame.from_pandas(pdf)
pdf["C"] = pd.NA
gdf["C"] = cudf.NA
assert_eq(pdf, gdf)
def test_cov():
gdf = cudf.datasets.randomdata(10)
pdf = gdf.to_pandas()
assert_eq(pdf.cov(), gdf.cov())
@pytest_xfail(reason="cupy-based cov does not support nulls")
def test_cov_nans():
pdf = pd.DataFrame()
pdf["a"] = [None, None, None, 2.00758632, None]
pdf["b"] = [0.36403686, None, None, None, None]
pdf["c"] = [None, None, None, 0.64882227, None]
pdf["d"] = [None, -1.46863125, None, 1.22477948, -0.06031689]
gdf = cudf.from_pandas(pdf)
assert_eq(pdf.cov(), gdf.cov())
@pytest_unmark_spilling
@pytest.mark.parametrize(
"gsr",
[
cudf.Series([4, 2, 3]),
cudf.Series([4, 2, 3], index=["a", "b", "c"]),
cudf.Series([4, 2, 3], index=["a", "b", "d"]),
cudf.Series([4, 2], index=["a", "b"]),
cudf.Series([4, 2, 3], index=cudf.core.index.RangeIndex(0, 3)),
cudf.Series([4, 2, 3, 4, 5], index=["a", "b", "d", "0", "12"]),
],
)
@pytest.mark.parametrize("colnames", [["a", "b", "c"], [0, 1, 2]])
@pytest.mark.parametrize(
"op",
[
operator.add,
operator.mul,
operator.floordiv,
operator.truediv,
operator.mod,
operator.pow,
operator.eq,
operator.lt,
operator.le,
operator.gt,
operator.ge,
operator.ne,
],
)
def test_df_sr_binop(gsr, colnames, op):
# Anywhere that the column names of the DataFrame don't match the index
# names of the Series will trigger a deprecated reindexing. Since this
# behavior is deprecated in pandas, this test is temporarily silencing
# those warnings until cudf updates to pandas 2.0 as its compatibility
# target, at which point a large number of the parametrizations can be
# removed altogether (along with this warnings filter).
with warnings.catch_warnings():
assert version.parse(pd.__version__) < version.parse("2.0.0")
warnings.filterwarnings(
action="ignore",
category=FutureWarning,
message=(
"Automatic reindexing on DataFrame vs Series comparisons is "
"deprecated"
),
)
data = [[3.0, 2.0, 5.0], [3.0, None, 5.0], [6.0, 7.0, np.nan]]
data = dict(zip(colnames, data))
gsr = gsr.astype("float64")
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas(nullable=True)
psr = gsr.to_pandas(nullable=True)
expect = op(pdf, psr)
got = op(gdf, gsr).to_pandas(nullable=True)
assert_eq(expect, got, check_dtype=False)
expect = op(psr, pdf)
got = op(gsr, gdf).to_pandas(nullable=True)
assert_eq(expect, got, check_dtype=False)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"op",
[
operator.add,
operator.mul,
operator.floordiv,
operator.truediv,
operator.mod,
operator.pow,
# comparison ops will temporarily XFAIL
# see PR https://github.com/rapidsai/cudf/pull/7491
pytest.param(operator.eq, marks=pytest_xfail()),
pytest.param(operator.lt, marks=pytest_xfail()),
pytest.param(operator.le, marks=pytest_xfail()),
pytest.param(operator.gt, marks=pytest_xfail()),
pytest.param(operator.ge, marks=pytest_xfail()),
pytest.param(operator.ne, marks=pytest_xfail()),
],
)
@pytest.mark.parametrize(
"gsr", [cudf.Series([1, 2, 3, 4, 5], index=["a", "b", "d", "0", "12"])]
)
def test_df_sr_binop_col_order(gsr, op):
colnames = [0, 1, 2]
data = [[0, 2, 5], [3, None, 5], [6, 7, np.nan]]
data = dict(zip(colnames, data))
gdf = cudf.DataFrame(data)
pdf = pd.DataFrame.from_dict(data)
psr = gsr.to_pandas()
with expect_warning_if(
op
in {
operator.eq,
operator.lt,
operator.le,
operator.gt,
operator.ge,
operator.ne,
},
FutureWarning,
):
expect = op(pdf, psr).astype("float")
out = op(gdf, gsr).astype("float")
got = out[expect.columns]
assert_eq(expect, got)
@pytest.mark.parametrize("set_index", [None, "A", "C", "D"])
@pytest.mark.parametrize("index", [True, False])
@pytest.mark.parametrize("deep", [True, False])
def test_memory_usage(deep, index, set_index):
# Testing numerical/datetime by comparing with pandas
# (string and categorical columns will be different)
rows = int(100)
df = pd.DataFrame(
{
"A": np.arange(rows, dtype="int64"),
"B": np.arange(rows, dtype="int32"),
"C": np.arange(rows, dtype="float64"),
}
)
df["D"] = pd.to_datetime(df.A)
if set_index:
df = df.set_index(set_index)
gdf = cudf.from_pandas(df)
if index and set_index is None:
# Special Case: Assume RangeIndex size == 0
with expect_warning_if(deep, UserWarning):
assert gdf.index.memory_usage(deep=deep) == 0
else:
# Check for Series only
assert df["B"].memory_usage(index=index, deep=deep) == gdf[
"B"
].memory_usage(index=index, deep=deep)
# Check for entire DataFrame
assert_eq(
df.memory_usage(index=index, deep=deep).sort_index(),
gdf.memory_usage(index=index, deep=deep).sort_index(),
)
@pytest_xfail
def test_memory_usage_string():
rows = int(100)
df = pd.DataFrame(
{
"A": np.arange(rows, dtype="int32"),
"B": np.random.choice(["apple", "banana", "orange"], rows),
}
)
gdf = cudf.from_pandas(df)
# Check deep=False (should match pandas)
assert gdf.B.memory_usage(deep=False, index=False) == df.B.memory_usage(
deep=False, index=False
)
# Check string column
assert gdf.B.memory_usage(deep=True, index=False) == df.B.memory_usage(
deep=True, index=False
)
# Check string index
assert gdf.set_index("B").index.memory_usage(
deep=True
) == df.B.memory_usage(deep=True, index=False)
def test_memory_usage_cat():
rows = int(100)
df = pd.DataFrame(
{
"A": np.arange(rows, dtype="int32"),
"B": np.random.choice(["apple", "banana", "orange"], rows),
}
)
df["B"] = df.B.astype("category")
gdf = cudf.from_pandas(df)
expected = (
gdf.B._column.categories.memory_usage
+ gdf.B._column.codes.memory_usage
)
# Check cat column
assert gdf.B.memory_usage(deep=True, index=False) == expected
# Check cat index
assert gdf.set_index("B").index.memory_usage(deep=True) == expected
def test_memory_usage_list():
df = cudf.DataFrame({"A": [[0, 1, 2, 3], [4, 5, 6], [7, 8], [9]]})
expected = (
df.A._column.offsets.memory_usage + df.A._column.elements.memory_usage
)
assert expected == df.A.memory_usage()
@pytest.mark.parametrize("rows", [10, 100])
def test_memory_usage_multi(rows):
# We need to sample without replacement to guarantee that the size of the
# levels are always the same.
df = pd.DataFrame(
{
"A": np.arange(rows, dtype="int32"),
"B": np.random.choice(
np.arange(rows, dtype="int64"), rows, replace=False
),
"C": np.random.choice(
np.arange(rows, dtype="float64"), rows, replace=False
),
}
).set_index(["B", "C"])
gdf = cudf.from_pandas(df)
# Assume MultiIndex memory footprint is just that
# of the underlying columns, levels, and codes
expect = rows * 16 # Source Columns
expect += rows * 16 # Codes
expect += rows * 8 # Level 0
expect += rows * 8 # Level 1
assert expect == gdf.index.memory_usage(deep=True)
@pytest.mark.parametrize(
"list_input",
[
pytest.param([1, 2, 3, 4], id="smaller"),
pytest.param([1, 2, 3, 4, 5, 6], id="larger"),
],
)
@pytest.mark.parametrize(
"key",
[
pytest.param("list_test", id="new_column"),
pytest.param("id", id="existing_column"),
],
)
def test_setitem_diff_size_list(list_input, key):
gdf = cudf.datasets.randomdata(5)
with pytest.raises(
ValueError, match=("All columns must be of equal length")
):
gdf[key] = list_input
@pytest.mark.parametrize(
"series_input",
[
pytest.param(cudf.Series([1, 2, 3, 4]), id="smaller_cudf"),
pytest.param(cudf.Series([1, 2, 3, 4, 5, 6]), id="larger_cudf"),
pytest.param(cudf.Series([1, 2, 3], index=[4, 5, 6]), id="index_cudf"),
pytest.param(pd.Series([1, 2, 3, 4]), id="smaller_pandas"),
pytest.param(pd.Series([1, 2, 3, 4, 5, 6]), id="larger_pandas"),
pytest.param(pd.Series([1, 2, 3], index=[4, 5, 6]), id="index_pandas"),
],
)
@pytest.mark.parametrize(
"key",
[
pytest.param("list_test", id="new_column"),
pytest.param("id", id="existing_column"),
],
)
def test_setitem_diff_size_series(series_input, key):
gdf = cudf.datasets.randomdata(5)
pdf = gdf.to_pandas()
pandas_input = series_input
if isinstance(pandas_input, cudf.Series):
pandas_input = pandas_input.to_pandas()
expect = pdf
expect[key] = pandas_input
got = gdf
got[key] = series_input
    # Pandas uses NaN and typecasts to float64 if there are missing values
    # after alignment, so typecast both to float64 for the equality comparison
expect = expect.astype("float64")
got = got.astype("float64")
assert_eq(expect, got)
def test_tupleize_cols_False_set():
pdf = pd.DataFrame()
gdf = cudf.DataFrame()
pdf[("a", "b")] = [1]
gdf[("a", "b")] = [1]
assert_eq(pdf, gdf)
assert_eq(pdf.columns, gdf.columns)
def test_init_multiindex_from_dict():
pdf = pd.DataFrame({("a", "b"): [1]})
gdf = cudf.DataFrame({("a", "b"): [1]})
assert_eq(pdf, gdf)
assert_eq(pdf.columns, gdf.columns)
def test_change_column_dtype_in_empty():
pdf = pd.DataFrame({"a": [], "b": []})
gdf = cudf.from_pandas(pdf)
assert_eq(pdf, gdf)
pdf["b"] = pdf["b"].astype("int64")
gdf["b"] = gdf["b"].astype("int64")
assert_eq(pdf, gdf)
@pytest.mark.parametrize("dtype", ["int64", "str"])
def test_dataframe_from_dictionary_series_same_name_index(dtype):
pd_idx1 = pd.Index([1, 2, 0], name="test_index").astype(dtype)
pd_idx2 = pd.Index([2, 0, 1], name="test_index").astype(dtype)
pd_series1 = pd.Series([1, 2, 3], index=pd_idx1)
pd_series2 = pd.Series([1, 2, 3], index=pd_idx2)
gd_idx1 = cudf.from_pandas(pd_idx1)
gd_idx2 = cudf.from_pandas(pd_idx2)
gd_series1 = cudf.Series([1, 2, 3], index=gd_idx1)
gd_series2 = cudf.Series([1, 2, 3], index=gd_idx2)
expect = pd.DataFrame({"a": pd_series1, "b": pd_series2})
got = cudf.DataFrame({"a": gd_series1, "b": gd_series2})
if dtype == "str":
# Pandas actually loses its index name erroneously here...
expect.index.name = "test_index"
assert_eq(expect, got)
assert expect.index.names == got.index.names
@pytest.mark.parametrize(
"arg", [slice(2, 8, 3), slice(1, 20, 4), slice(-2, -6, -2)]
)
def test_dataframe_strided_slice(arg):
mul = pd.DataFrame(
{
"Index": [1, 2, 3, 4, 5, 6, 7, 8, 9],
"AlphaIndex": ["a", "b", "c", "d", "e", "f", "g", "h", "i"],
}
)
pdf = pd.DataFrame(
{"Val": [10, 9, 8, 7, 6, 5, 4, 3, 2]},
index=pd.MultiIndex.from_frame(mul),
)
gdf = cudf.DataFrame.from_pandas(pdf)
expect = pdf[arg]
got = gdf[arg]
assert_eq(expect, got)
@pytest.mark.parametrize(
"data,condition,other,error",
[
(pd.Series(range(5)), pd.Series(range(5)) > 0, None, None),
(pd.Series(range(5)), pd.Series(range(5)) > 1, None, None),
(pd.Series(range(5)), pd.Series(range(5)) > 1, 10, None),
(
pd.Series(range(5)),
pd.Series(range(5)) > 1,
pd.Series(range(5, 10)),
None,
),
(
pd.DataFrame(np.arange(10).reshape(-1, 2), columns=["A", "B"]),
(
pd.DataFrame(np.arange(10).reshape(-1, 2), columns=["A", "B"])
% 3
)
== 0,
-pd.DataFrame(np.arange(10).reshape(-1, 2), columns=["A", "B"]),
None,
),
(
pd.DataFrame({"a": [1, 2, np.nan], "b": [4, np.nan, 6]}),
pd.DataFrame({"a": [1, 2, np.nan], "b": [4, np.nan, 6]}) == 4,
None,
None,
),
(
pd.DataFrame({"a": [1, 2, np.nan], "b": [4, np.nan, 6]}),
pd.DataFrame({"a": [1, 2, np.nan], "b": [4, np.nan, 6]}) != 4,
None,
None,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[True, True, True],
None,
ValueError,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[True, True, True, False],
None,
ValueError,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[[True, True, True, False], [True, True, True, False]],
None,
ValueError,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[[True, True], [False, True], [True, False], [False, True]],
None,
None,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
cuda.to_device(
np.array(
[[True, True], [False, True], [True, False], [False, True]]
)
),
None,
None,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
cupy.array(
[[True, True], [False, True], [True, False], [False, True]]
),
17,
None,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[[True, True], [False, True], [True, False], [False, True]],
17,
None,
),
(
pd.DataFrame({"p": [-2, 3, -4, -79], "k": [9, 10, 11, 12]}),
[
[True, True, False, True],
[True, True, False, True],
[True, True, False, True],
[True, True, False, True],
],
None,
ValueError,
),
(
pd.Series([1, 2, np.nan]),
pd.Series([1, 2, np.nan]) == 4,
None,
None,
),
(
pd.Series([1, 2, np.nan]),
pd.Series([1, 2, np.nan]) != 4,
None,
None,
),
(
pd.Series([4, np.nan, 6]),
pd.Series([4, np.nan, 6]) == 4,
None,
None,
),
(
pd.Series([4, np.nan, 6]),
pd.Series([4, np.nan, 6]) != 4,
None,
None,
),
(
pd.Series([4, np.nan, 6], dtype="category"),
pd.Series([4, np.nan, 6], dtype="category") != 4,
None,
None,
),
(
pd.Series(["a", "b", "b", "d", "c", "s"], dtype="category"),
pd.Series(["a", "b", "b", "d", "c", "s"], dtype="category") == "b",
None,
None,
),
(
pd.Series(["a", "b", "b", "d", "c", "s"], dtype="category"),
pd.Series(["a", "b", "b", "d", "c", "s"], dtype="category") == "b",
"s",
None,
),
(
pd.Series([1, 2, 3, 2, 5]),
pd.Series([1, 2, 3, 2, 5]) == 2,
pd.DataFrame(
{
"a": pd.Series([1, 2, 3, 2, 5]),
"b": pd.Series([1, 2, 3, 2, 5]),
}
),
NotImplementedError,
),
],
)
@pytest.mark.parametrize("inplace", [True, False])
def test_df_sr_mask_where(data, condition, other, error, inplace):
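    # where/mask should match pandas for Series and DataFrame inputs,
    # including cupy and numba device-array conditions; invalid condition
    # shapes raise the same errors in both libraries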
ps_where = data
gs_where = cudf.from_pandas(data)
ps_mask = ps_where.copy(deep=True)
gs_mask = gs_where.copy(deep=True)
if hasattr(condition, "__cuda_array_interface__"):
if type(condition).__module__.split(".")[0] == "cupy":
ps_condition = cupy.asnumpy(condition)
else:
ps_condition = np.array(condition).astype("bool")
else:
ps_condition = condition
if type(condition).__module__.split(".")[0] == "pandas":
gs_condition = cudf.from_pandas(condition)
else:
gs_condition = condition
ps_other = other
if type(other).__module__.split(".")[0] == "pandas":
gs_other = cudf.from_pandas(other)
else:
gs_other = other
if error is None:
expect_where = ps_where.where(
ps_condition, other=ps_other, inplace=inplace
)
got_where = gs_where.where(
gs_condition, other=gs_other, inplace=inplace
)
expect_mask = ps_mask.mask(
ps_condition, other=ps_other, inplace=inplace
)
got_mask = gs_mask.mask(gs_condition, other=gs_other, inplace=inplace)
if inplace:
expect_where = ps_where
got_where = gs_where
expect_mask = ps_mask
got_mask = gs_mask
if pd.api.types.is_categorical_dtype(expect_where):
np.testing.assert_array_equal(
expect_where.cat.codes,
got_where.cat.codes.astype(expect_where.cat.codes.dtype)
.fillna(-1)
.to_numpy(),
)
assert_eq(expect_where.cat.categories, got_where.cat.categories)
np.testing.assert_array_equal(
expect_mask.cat.codes,
got_mask.cat.codes.astype(expect_mask.cat.codes.dtype)
.fillna(-1)
.to_numpy(),
)
assert_eq(expect_mask.cat.categories, got_mask.cat.categories)
else:
assert_eq(
expect_where.fillna(-1),
got_where.fillna(-1),
check_dtype=False,
)
assert_eq(
expect_mask.fillna(-1), got_mask.fillna(-1), check_dtype=False
)
else:
assert_exceptions_equal(
lfunc=ps_where.where,
rfunc=gs_where.where,
lfunc_args_and_kwargs=(
[ps_condition],
{"other": ps_other, "inplace": inplace},
),
rfunc_args_and_kwargs=(
[gs_condition],
{"other": gs_other, "inplace": inplace},
),
)
assert_exceptions_equal(
lfunc=ps_mask.mask,
rfunc=gs_mask.mask,
lfunc_args_and_kwargs=(
[ps_condition],
{"other": ps_other, "inplace": inplace},
),
rfunc_args_and_kwargs=(
[gs_condition],
{"other": gs_other, "inplace": inplace},
),
)
@pytest.mark.parametrize(
"data,condition,other,has_cat",
[
(
pd.DataFrame(
{
"a": pd.Series(["a", "a", "b", "c", "a", "d", "d", "a"]),
"b": pd.Series(["o", "p", "q", "e", "p", "p", "a", "a"]),
}
),
pd.DataFrame(
{
"a": pd.Series(["a", "a", "b", "c", "a", "d", "d", "a"]),
"b": pd.Series(["o", "p", "q", "e", "p", "p", "a", "a"]),
}
)
!= "a",
None,
None,
),
(
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
)
!= "a",
None,
True,
),
(
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
)
== "a",
None,
True,
),
(
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
)
!= "a",
"a",
True,
),
(
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
["a", "a", "b", "c", "a", "d", "d", "a"],
dtype="category",
),
"b": pd.Series(
["o", "p", "q", "e", "p", "p", "a", "a"],
dtype="category",
),
}
)
== "a",
"a",
True,
),
],
)
def test_df_string_cat_types_mask_where(data, condition, other, has_cat):
ps = data
gs = cudf.from_pandas(data)
ps_condition = condition
if type(condition).__module__.split(".")[0] == "pandas":
gs_condition = cudf.from_pandas(condition)
else:
gs_condition = condition
ps_other = other
if type(other).__module__.split(".")[0] == "pandas":
gs_other = cudf.from_pandas(other)
else:
gs_other = other
expect_where = ps.where(ps_condition, other=ps_other)
got_where = gs.where(gs_condition, other=gs_other)
expect_mask = ps.mask(ps_condition, other=ps_other)
got_mask = gs.mask(gs_condition, other=gs_other)
if has_cat is None:
assert_eq(
expect_where.fillna(-1).astype("str"),
got_where.fillna(-1),
check_dtype=False,
)
assert_eq(
expect_mask.fillna(-1).astype("str"),
got_mask.fillna(-1),
check_dtype=False,
)
else:
assert_eq(expect_where, got_where, check_dtype=False)
assert_eq(expect_mask, got_mask, check_dtype=False)
@pytest.mark.parametrize(
"data,expected_upcast_type,error",
[
(
pd.Series([random.random() for _ in range(10)], dtype="float32"),
np.dtype("float32"),
None,
),
(
pd.Series([random.random() for _ in range(10)], dtype="float16"),
np.dtype("float32"),
None,
),
(
pd.Series([random.random() for _ in range(10)], dtype="float64"),
np.dtype("float64"),
None,
),
(
pd.Series([random.random() for _ in range(10)], dtype="float128"),
None,
TypeError,
),
],
)
def test_from_pandas_unsupported_types(data, expected_upcast_type, error):
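    # float16 data is upcast to float32 on import; float128 is unsupported
    # and raises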
pdf = pd.DataFrame({"one_col": data})
if error is not None:
with pytest.raises(ValueError):
cudf.from_pandas(data)
with pytest.raises(ValueError):
cudf.Series(data)
with pytest.raises(error):
cudf.from_pandas(pdf)
with pytest.raises(error):
cudf.DataFrame(pdf)
else:
df = cudf.from_pandas(data)
assert_eq(data, df, check_dtype=False)
assert df.dtype == expected_upcast_type
df = cudf.Series(data)
assert_eq(data, df, check_dtype=False)
assert df.dtype == expected_upcast_type
df = cudf.from_pandas(pdf)
assert_eq(pdf, df, check_dtype=False)
assert df["one_col"].dtype == expected_upcast_type
df = cudf.DataFrame(pdf)
assert_eq(pdf, df, check_dtype=False)
assert df["one_col"].dtype == expected_upcast_type
@pytest.mark.parametrize("nan_as_null", [True, False])
@pytest.mark.parametrize("index", [None, "a", ["a", "b"]])
def test_from_pandas_nan_as_null(nan_as_null, index):
data = [np.nan, 2.0, 3.0]
if index is None:
pdf = pd.DataFrame({"a": data, "b": data})
expected = cudf.DataFrame(
{
"a": column.as_column(data, nan_as_null=nan_as_null),
"b": column.as_column(data, nan_as_null=nan_as_null),
}
)
else:
pdf = pd.DataFrame({"a": data, "b": data}).set_index(index)
expected = cudf.DataFrame(
{
"a": column.as_column(data, nan_as_null=nan_as_null),
"b": column.as_column(data, nan_as_null=nan_as_null),
}
)
expected = expected.set_index(index)
got = cudf.from_pandas(pdf, nan_as_null=nan_as_null)
assert_eq(expected, got)
@pytest.mark.parametrize("nan_as_null", [True, False])
def test_from_pandas_for_series_nan_as_null(nan_as_null):
data = [np.nan, 2.0, 3.0]
psr = pd.Series(data)
expected = cudf.Series(column.as_column(data, nan_as_null=nan_as_null))
got = cudf.from_pandas(psr, nan_as_null=nan_as_null)
assert_eq(expected, got)
@pytest.mark.parametrize("copy", [True, False])
def test_df_series_dataframe_astype_copy(copy):
gdf = cudf.DataFrame({"col1": [1, 2], "col2": [3, 4]})
pdf = gdf.to_pandas()
assert_eq(
gdf.astype(dtype="float", copy=copy),
pdf.astype(dtype="float", copy=copy),
)
assert_eq(gdf, pdf)
gsr = cudf.Series([1, 2])
psr = gsr.to_pandas()
assert_eq(
gsr.astype(dtype="float", copy=copy),
psr.astype(dtype="float", copy=copy),
)
assert_eq(gsr, psr)
gsr = cudf.Series([1, 2])
psr = gsr.to_pandas()
actual = gsr.astype(dtype="int64", copy=copy)
expected = psr.astype(dtype="int64", copy=copy)
assert_eq(expected, actual)
assert_eq(gsr, psr)
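    # Mutating the cast result should propagate back to the original only
    # when copy=False; cudf must match pandas either way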
actual[0] = 3
expected[0] = 3
assert_eq(gsr, psr)
@pytest.mark.parametrize("copy", [True, False])
def test_df_series_dataframe_astype_dtype_dict(copy):
gdf = cudf.DataFrame({"col1": [1, 2], "col2": [3, 4]})
pdf = gdf.to_pandas()
assert_eq(
gdf.astype(dtype={"col1": "float"}, copy=copy),
pdf.astype(dtype={"col1": "float"}, copy=copy),
)
assert_eq(gdf, pdf)
gsr = cudf.Series([1, 2])
psr = gsr.to_pandas()
assert_eq(
gsr.astype(dtype={None: "float"}, copy=copy),
psr.astype(dtype={None: "float"}, copy=copy),
)
assert_eq(gsr, psr)
assert_exceptions_equal(
lfunc=psr.astype,
rfunc=gsr.astype,
lfunc_args_and_kwargs=([], {"dtype": {"a": "float"}, "copy": copy}),
rfunc_args_and_kwargs=([], {"dtype": {"a": "float"}, "copy": copy}),
)
gsr = cudf.Series([1, 2])
psr = gsr.to_pandas()
actual = gsr.astype({None: "int64"}, copy=copy)
expected = psr.astype({None: "int64"}, copy=copy)
assert_eq(expected, actual)
assert_eq(gsr, psr)
actual[0] = 3
expected[0] = 3
assert_eq(gsr, psr)
@pytest.mark.parametrize(
"data,columns",
[
([1, 2, 3, 100, 112, 35464], ["a"]),
(range(100), None),
pytest.param(
[],
None,
marks=pytest.mark.xfail(
not PANDAS_GE_200, reason=".column returns Index[object]"
),
),
((-10, 21, 32, 32, 1, 2, 3), ["p"]),
pytest.param(
(),
None,
marks=pytest.mark.xfail(
not PANDAS_GE_200, reason=".column returns Index[object]"
),
),
([[1, 2, 3], [1, 2, 3]], ["col1", "col2", "col3"]),
([range(100), range(100)], ["range" + str(i) for i in range(100)]),
(((1, 2, 3), (1, 2, 3)), ["tuple0", "tuple1", "tuple2"]),
([[1, 2, 3]], ["list col1", "list col2", "list col3"]),
([[1, 2, 3]], pd.Index(["col1", "col2", "col3"], name="rapids")),
([range(100)], ["range" + str(i) for i in range(100)]),
(((1, 2, 3),), ["k1", "k2", "k3"]),
],
)
def test_dataframe_init_1d_list(data, columns):
expect = pd.DataFrame(data, columns=columns)
actual = cudf.DataFrame(data, columns=columns)
assert_eq(expect, actual, check_index_type=len(data) != 0)
expect = pd.DataFrame(data, columns=None)
actual = cudf.DataFrame(data, columns=None)
assert_eq(expect, actual, check_index_type=len(data) != 0)
@pytest.mark.parametrize(
"data,cols,index",
[
(
np.ndarray(shape=(4, 2), dtype=float, order="F"),
["a", "b"],
["a", "b", "c", "d"],
),
(
np.ndarray(shape=(4, 2), dtype=float, order="F"),
["a", "b"],
[0, 20, 30, 10],
),
(
np.ndarray(shape=(4, 2), dtype=float, order="F"),
["a", "b"],
[0, 1, 2, 3],
),
(np.array([11, 123, -2342, 232]), ["a"], [1, 2, 11, 12]),
(np.array([11, 123, -2342, 232]), ["a"], ["khsdjk", "a", "z", "kk"]),
(
cupy.ndarray(shape=(4, 2), dtype=float, order="F"),
["a", "z"],
["a", "z", "a", "z"],
),
(cupy.array([11, 123, -2342, 232]), ["z"], [0, 1, 1, 0]),
(cupy.array([11, 123, -2342, 232]), ["z"], [1, 2, 3, 4]),
(cupy.array([11, 123, -2342, 232]), ["z"], ["a", "z", "d", "e"]),
(np.random.randn(2, 4), ["a", "b", "c", "d"], ["a", "b"]),
(np.random.randn(2, 4), ["a", "b", "c", "d"], [1, 0]),
(cupy.random.randn(2, 4), ["a", "b", "c", "d"], ["a", "b"]),
(cupy.random.randn(2, 4), ["a", "b", "c", "d"], [1, 0]),
],
)
def test_dataframe_init_from_arrays_cols(data, cols, index):
gd_data = data
if isinstance(data, cupy.ndarray):
# pandas can't handle cupy arrays in general
pd_data = data.get()
# additional test for building DataFrame with gpu array whose
# cuda array interface has no `descr` attribute
numba_data = cuda.as_cuda_array(data)
else:
pd_data = data
numba_data = None
# verify with columns & index
pdf = pd.DataFrame(pd_data, columns=cols, index=index)
gdf = cudf.DataFrame(gd_data, columns=cols, index=index)
assert_eq(pdf, gdf, check_dtype=False)
# verify with columns
pdf = pd.DataFrame(pd_data, columns=cols)
gdf = cudf.DataFrame(gd_data, columns=cols)
assert_eq(pdf, gdf, check_dtype=False)
pdf = pd.DataFrame(pd_data)
gdf = cudf.DataFrame(gd_data)
assert_eq(pdf, gdf, check_dtype=False)
if numba_data is not None:
gdf = cudf.DataFrame(numba_data)
assert_eq(pdf, gdf, check_dtype=False)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"col_data",
[
range(5),
["a", "b", "x", "y", "z"],
[1.0, 0.213, 0.34332],
["a"],
[1],
[0.2323],
[],
],
)
@pytest.mark.parametrize(
"assign_val",
[
1,
2,
np.array(2),
cupy.array(2),
0.32324,
np.array(0.34248),
cupy.array(0.34248),
"abc",
np.array("abc", dtype="object"),
np.array("abc", dtype="str"),
np.array("abc"),
None,
],
)
def test_dataframe_assign_scalar(col_data, assign_val):
pdf = pd.DataFrame({"a": col_data})
gdf = cudf.DataFrame({"a": col_data})
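    # pandas cannot consume cupy scalars, so move them to host before
    # assignment; cudf accepts the device value directly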
pdf["b"] = (
cupy.asnumpy(assign_val)
if isinstance(assign_val, cupy.ndarray)
else assign_val
)
gdf["b"] = assign_val
assert_eq(pdf, gdf)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"col_data",
[
1,
2,
np.array(2),
cupy.array(2),
0.32324,
np.array(0.34248),
cupy.array(0.34248),
"abc",
np.array("abc", dtype="object"),
np.array("abc", dtype="str"),
np.array("abc"),
None,
],
)
@pytest.mark.parametrize(
"assign_val",
[
1,
2,
np.array(2),
cupy.array(2),
0.32324,
np.array(0.34248),
cupy.array(0.34248),
"abc",
np.array("abc", dtype="object"),
np.array("abc", dtype="str"),
np.array("abc"),
None,
],
)
def test_dataframe_assign_scalar_with_scalar_cols(col_data, assign_val):
pdf = pd.DataFrame(
{
"a": cupy.asnumpy(col_data)
if isinstance(col_data, cupy.ndarray)
else col_data
},
index=["dummy_mandatory_index"],
)
gdf = cudf.DataFrame({"a": col_data}, index=["dummy_mandatory_index"])
pdf["b"] = (
cupy.asnumpy(assign_val)
if isinstance(assign_val, cupy.ndarray)
else assign_val
)
gdf["b"] = assign_val
assert_eq(pdf, gdf)
def test_dataframe_info_basic():
buffer = io.StringIO()
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
StringIndex: 10 entries, a to 1111
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 10 non-null float64
1 1 10 non-null float64
2 2 10 non-null float64
3 3 10 non-null float64
4 4 10 non-null float64
5 5 10 non-null float64
6 6 10 non-null float64
7 7 10 non-null float64
8 8 10 non-null float64
9 9 10 non-null float64
dtypes: float64(10)
memory usage: 859.0+ bytes
"""
)
df = pd.DataFrame(
np.random.randn(10, 10),
index=["a", "2", "3", "4", "5", "6", "7", "8", "100", "1111"],
)
cudf.from_pandas(df).info(buf=buffer, verbose=True)
s = buffer.getvalue()
assert str_cmp == s
def test_dataframe_info_verbose_mem_usage():
buffer = io.StringIO()
df = pd.DataFrame({"a": [1, 2, 3], "b": ["safdas", "assa", "asdasd"]})
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 3 non-null int64
1 b 3 non-null object
dtypes: int64(1), object(1)
memory usage: 56.0+ bytes
"""
)
cudf.from_pandas(df).info(buf=buffer, verbose=True)
s = buffer.getvalue()
assert str_cmp == s
buffer.truncate(0)
buffer.seek(0)
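    # verbose=False collapses the per-column listing into a short column summary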
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Columns: 2 entries, a to b
dtypes: int64(1), object(1)
memory usage: 56.0+ bytes
"""
)
cudf.from_pandas(df).info(buf=buffer, verbose=False)
s = buffer.getvalue()
assert str_cmp == s
buffer.truncate(0)
buffer.seek(0)
df = pd.DataFrame(
{"a": [1, 2, 3], "b": ["safdas", "assa", "asdasd"]},
index=["sdfdsf", "sdfsdfds", "dsfdf"],
)
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
StringIndex: 3 entries, sdfdsf to dsfdf
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 3 non-null int64
1 b 3 non-null object
dtypes: int64(1), object(1)
memory usage: 91.0 bytes
"""
)
cudf.from_pandas(df).info(buf=buffer, verbose=True, memory_usage="deep")
s = buffer.getvalue()
assert str_cmp == s
buffer.truncate(0)
buffer.seek(0)
int_values = [1, 2, 3, 4, 5]
text_values = ["alpha", "beta", "gamma", "delta", "epsilon"]
float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
df = cudf.DataFrame(
{
"int_col": int_values,
"text_col": text_values,
"float_col": float_values,
}
)
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 int_col 5 non-null int64
1 text_col 5 non-null object
2 float_col 5 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 130.0 bytes
"""
)
df.info(buf=buffer, verbose=True, memory_usage="deep")
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
def test_dataframe_info_null_counts():
int_values = [1, 2, 3, 4, 5]
text_values = ["alpha", "beta", "gamma", "delta", "epsilon"]
float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
df = cudf.DataFrame(
{
"int_col": int_values,
"text_col": text_values,
"float_col": float_values,
}
)
buffer = io.StringIO()
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
# Column Dtype
--- ------ -----
0 int_col int64
1 text_col object
2 float_col float64
dtypes: float64(1), int64(1), object(1)
memory usage: 130.0+ bytes
"""
)
df.info(buf=buffer, verbose=True, null_counts=False)
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
df.info(buf=buffer, verbose=True, max_cols=0)
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
df = cudf.DataFrame()
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 0 entries
Empty DataFrame"""
)
df.info(buf=buffer, verbose=True)
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
df = cudf.DataFrame(
{
"a": [1, 2, 3, None, 10, 11, 12, None],
"b": ["a", "b", "c", "sd", "sdf", "sd", None, None],
}
)
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 8 entries, 0 to 7
Data columns (total 2 columns):
# Column Dtype
--- ------ -----
0 a int64
1 b object
dtypes: int64(1), object(1)
memory usage: 238.0+ bytes
"""
)
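    # With display.max_info_rows smaller than the row count, the null counts
    # are omitted from the summary even when null_counts is None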
pd.options.display.max_info_rows = 2
df.info(buf=buffer, max_cols=2, null_counts=None)
pd.reset_option("display.max_info_rows")
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
str_cmp = textwrap.dedent(
"""\
<class 'cudf.core.dataframe.DataFrame'>
RangeIndex: 8 entries, 0 to 7
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 6 non-null int64
1 b 6 non-null object
dtypes: int64(1), object(1)
memory usage: 238.0+ bytes
"""
)
df.info(buf=buffer, max_cols=2, null_counts=None)
actual_string = buffer.getvalue()
assert str_cmp == actual_string
buffer.truncate(0)
buffer.seek(0)
df.info(buf=buffer, null_counts=True)
actual_string = buffer.getvalue()
assert str_cmp == actual_string
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data1",
[
[1, 2, 3, 4, 5, 6, 7],
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
[
1.9876543,
2.9876654,
3.9876543,
4.1234587,
5.23,
6.88918237,
7.00001,
],
[
-1.9876543,
-2.9876654,
-3.9876543,
-4.1234587,
-5.23,
-6.88918237,
-7.00001,
],
[
1.987654321,
2.987654321,
3.987654321,
0.1221,
2.1221,
0.112121,
-21.1212,
],
[
-1.987654321,
-2.987654321,
-3.987654321,
-0.1221,
-2.1221,
-0.112121,
21.1212,
],
],
)
@pytest.mark.parametrize(
"data2",
[
[1, 2, 3, 4, 5, 6, 7],
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
[
1.9876543,
2.9876654,
3.9876543,
4.1234587,
5.23,
6.88918237,
7.00001,
],
[
-1.9876543,
-2.9876654,
-3.9876543,
-4.1234587,
-5.23,
-6.88918237,
-7.00001,
],
[
1.987654321,
2.987654321,
3.987654321,
0.1221,
2.1221,
0.112121,
-21.1212,
],
[
-1.987654321,
-2.987654321,
-3.987654321,
-0.1221,
-2.1221,
-0.112121,
21.1212,
],
],
)
@pytest.mark.parametrize("rtol", [0, 0.01, 1e-05, 1e-08, 5e-1, 50.12])
@pytest.mark.parametrize("atol", [0, 0.01, 1e-05, 1e-08, 50.12])
def test_cudf_isclose(data1, data2, rtol, atol):
array1 = cupy.array(data1)
array2 = cupy.array(data2)
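    # cupy.isclose on device arrays is the reference result; cudf.isclose
    # must match it for Series, list, cupy, numpy and pandas inputs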
expected = cudf.Series(cupy.isclose(array1, array2, rtol=rtol, atol=atol))
actual = cudf.isclose(
cudf.Series(data1), cudf.Series(data2), rtol=rtol, atol=atol
)
assert_eq(expected, actual)
actual = cudf.isclose(data1, data2, rtol=rtol, atol=atol)
assert_eq(expected, actual)
actual = cudf.isclose(
cupy.array(data1), cupy.array(data2), rtol=rtol, atol=atol
)
assert_eq(expected, actual)
actual = cudf.isclose(
np.array(data1), np.array(data2), rtol=rtol, atol=atol
)
assert_eq(expected, actual)
actual = cudf.isclose(
pd.Series(data1), pd.Series(data2), rtol=rtol, atol=atol
)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data1",
[
[
-1.9876543,
-2.9876654,
np.nan,
-4.1234587,
-5.23,
-6.88918237,
-7.00001,
],
[
1.987654321,
2.987654321,
3.987654321,
0.1221,
2.1221,
np.nan,
-21.1212,
],
],
)
@pytest.mark.parametrize(
"data2",
[
[
-1.9876543,
-2.9876654,
-3.9876543,
-4.1234587,
-5.23,
-6.88918237,
-7.00001,
],
[
1.987654321,
2.987654321,
3.987654321,
0.1221,
2.1221,
0.112121,
-21.1212,
],
[
-1.987654321,
-2.987654321,
-3.987654321,
np.nan,
np.nan,
np.nan,
21.1212,
],
],
)
@pytest.mark.parametrize("equal_nan", [True, False])
def test_cudf_isclose_nulls(data1, data2, equal_nan):
array1 = cupy.array(data1)
array2 = cupy.array(data2)
expected = cudf.Series(cupy.isclose(array1, array2, equal_nan=equal_nan))
actual = cudf.isclose(
cudf.Series(data1), cudf.Series(data2), equal_nan=equal_nan
)
assert_eq(expected, actual, check_dtype=False)
actual = cudf.isclose(data1, data2, equal_nan=equal_nan)
assert_eq(expected, actual, check_dtype=False)
def test_cudf_isclose_different_index():
s1 = cudf.Series(
[-1.9876543, -2.9876654, -3.9876543, -4.1234587, -5.23, -7.00001],
index=[0, 1, 2, 3, 4, 5],
)
s2 = cudf.Series(
[-1.9876543, -2.9876654, -7.00001, -4.1234587, -5.23, -3.9876543],
index=[0, 1, 5, 3, 4, 2],
)
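    # isclose aligns on index labels, so a reordered but matching index
    # should compare equal element-wise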
expected = cudf.Series([True] * 6, index=s1.index)
assert_eq(expected, cudf.isclose(s1, s2))
s1 = cudf.Series(
[-1.9876543, -2.9876654, -3.9876543, -4.1234587, -5.23, -7.00001],
index=[0, 1, 2, 3, 4, 5],
)
s2 = cudf.Series(
[-1.9876543, -2.9876654, -7.00001, -4.1234587, -5.23, -3.9876543],
index=[0, 1, 5, 10, 4, 2],
)
expected = cudf.Series(
[True, True, True, False, True, True], index=s1.index
)
assert_eq(expected, cudf.isclose(s1, s2))
s1 = cudf.Series(
[-1.9876543, -2.9876654, -3.9876543, -4.1234587, -5.23, -7.00001],
index=[100, 1, 2, 3, 4, 5],
)
s2 = cudf.Series(
[-1.9876543, -2.9876654, -7.00001, -4.1234587, -5.23, -3.9876543],
index=[0, 1, 100, 10, 4, 2],
)
expected = cudf.Series(
[False, True, True, False, True, False], index=s1.index
)
assert_eq(expected, cudf.isclose(s1, s2))
@pytest.mark.parametrize(
"orient", ["dict", "list", "split", "tight", "records", "index", "series"]
)
@pytest.mark.parametrize("into", [dict, OrderedDict, defaultdict(list)])
def test_dataframe_to_dict(orient, into):
df = cudf.DataFrame({"a": [1, 2, 3], "b": [9, 5, 3]}, index=[10, 11, 12])
pdf = df.to_pandas()
actual = df.to_dict(orient=orient, into=into)
expected = pdf.to_dict(orient=orient, into=into)
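    # orient="series" produces Series values, which need element-wise comparison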
if orient == "series":
assert actual.keys() == expected.keys()
for key in actual.keys():
assert_eq(expected[key], actual[key])
else:
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data, orient, dtype, columns",
[
(
{"col_1": [3, 2, 1, 0], "col_2": [3, 2, 1, 0]},
"columns",
None,
None,
),
({"col_1": [3, 2, 1, 0], "col_2": [3, 2, 1, 0]}, "index", None, None),
(
{"col_1": [None, 2, 1, 0], "col_2": [3, None, 1, 0]},
"index",
None,
["A", "B", "C", "D"],
),
(
{
"col_1": ["ab", "cd", "ef", "gh"],
"col_2": ["zx", "one", "two", "three"],
},
"index",
None,
["A", "B", "C", "D"],
),
(
{
"index": [("a", "b"), ("a", "c")],
"columns": [("x", 1), ("y", 2)],
"data": [[1, 3], [2, 4]],
"index_names": ["n1", "n2"],
"column_names": ["z1", "z2"],
},
"tight",
"float64",
None,
),
],
)
def test_dataframe_from_dict(data, orient, dtype, columns):
expected = pd.DataFrame.from_dict(
data=data, orient=orient, dtype=dtype, columns=columns
)
actual = cudf.DataFrame.from_dict(
data=data, orient=orient, dtype=dtype, columns=columns
)
assert_eq(expected, actual)
@pytest.mark.parametrize("dtype", ["int64", "str", None])
def test_dataframe_from_dict_transposed(dtype):
pd_data = {"a": [3, 2, 1, 0], "col_2": [3, 2, 1, 0]}
gd_data = {key: cudf.Series(val) for key, val in pd_data.items()}
expected = pd.DataFrame.from_dict(pd_data, orient="index", dtype=dtype)
    actual = cudf.DataFrame.from_dict(gd_data, orient="index", dtype=dtype)
    assert_eq(expected, actual)
gd_data = {key: cupy.asarray(val) for key, val in pd_data.items()}
actual = cudf.DataFrame.from_dict(gd_data, orient="index", dtype=dtype)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"pd_data, gd_data, orient, dtype, columns",
[
(
{"col_1": np.array([3, 2, 1, 0]), "col_2": np.array([3, 2, 1, 0])},
{
"col_1": cupy.array([3, 2, 1, 0]),
"col_2": cupy.array([3, 2, 1, 0]),
},
"columns",
None,
None,
),
(
{"col_1": np.array([3, 2, 1, 0]), "col_2": np.array([3, 2, 1, 0])},
{
"col_1": cupy.array([3, 2, 1, 0]),
"col_2": cupy.array([3, 2, 1, 0]),
},
"index",
None,
None,
),
(
{
"col_1": np.array([None, 2, 1, 0]),
"col_2": np.array([3, None, 1, 0]),
},
{
"col_1": cupy.array([np.nan, 2, 1, 0]),
"col_2": cupy.array([3, np.nan, 1, 0]),
},
"index",
None,
["A", "B", "C", "D"],
),
(
{
"col_1": np.array(["ab", "cd", "ef", "gh"]),
"col_2": np.array(["zx", "one", "two", "three"]),
},
{
"col_1": np.array(["ab", "cd", "ef", "gh"]),
"col_2": np.array(["zx", "one", "two", "three"]),
},
"index",
None,
["A", "B", "C", "D"],
),
(
{
"index": [("a", "b"), ("a", "c")],
"columns": [("x", 1), ("y", 2)],
"data": [np.array([1, 3]), np.array([2, 4])],
"index_names": ["n1", "n2"],
"column_names": ["z1", "z2"],
},
{
"index": [("a", "b"), ("a", "c")],
"columns": [("x", 1), ("y", 2)],
"data": [cupy.array([1, 3]), cupy.array([2, 4])],
"index_names": ["n1", "n2"],
"column_names": ["z1", "z2"],
},
"tight",
"float64",
None,
),
],
)
def test_dataframe_from_dict_cp_np_arrays(
pd_data, gd_data, orient, dtype, columns
):
expected = pd.DataFrame.from_dict(
data=pd_data, orient=orient, dtype=dtype, columns=columns
)
actual = cudf.DataFrame.from_dict(
data=gd_data, orient=orient, dtype=dtype, columns=columns
)
assert_eq(expected, actual, check_dtype=dtype is not None)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame({"a": [1, 2, 3, 4, 5, 10, 11, 12, 33, 55, 19]}),
pd.DataFrame(
{
"one": [1, 2, 3, 4, 5, 10],
"two": ["abc", "def", "ghi", "xyz", "pqr", "abc"],
}
),
pd.DataFrame(
{
"one": [1, 2, 3, 4, 5, 10],
"two": ["abc", "def", "ghi", "xyz", "pqr", "abc"],
},
index=[10, 20, 30, 40, 50, 60],
),
pd.DataFrame(
{
"one": [1, 2, 3, 4, 5, 10],
"two": ["abc", "def", "ghi", "xyz", "pqr", "abc"],
},
index=["a", "b", "c", "d", "e", "f"],
),
pd.DataFrame(index=["a", "b", "c", "d", "e", "f"]),
pd.DataFrame(columns=["a", "b", "c", "d", "e", "f"]),
pd.DataFrame(index=[10, 11, 12]),
pd.DataFrame(columns=[10, 11, 12]),
pd.DataFrame(),
pd.DataFrame({"one": [], "two": []}),
pd.DataFrame({2: [], 1: []}),
pd.DataFrame(
{
0: [1, 2, 3, 4, 5, 10],
1: ["abc", "def", "ghi", "xyz", "pqr", "abc"],
100: ["a", "b", "b", "x", "z", "a"],
},
index=[10, 20, 30, 40, 50, 60],
),
],
)
def test_dataframe_keys(df):
gdf = cudf.from_pandas(df)
assert_eq(df.keys(), gdf.keys())
@pytest.mark.parametrize(
"ps",
[
pd.Series([1, 2, 3, 4, 5, 10, 11, 12, 33, 55, 19]),
pd.Series(["abc", "def", "ghi", "xyz", "pqr", "abc"]),
pd.Series(
[1, 2, 3, 4, 5, 10],
index=["abc", "def", "ghi", "xyz", "pqr", "abc"],
),
pd.Series(
["abc", "def", "ghi", "xyz", "pqr", "abc"],
index=[1, 2, 3, 4, 5, 10],
),
pd.Series(index=["a", "b", "c", "d", "e", "f"], dtype="float64"),
pd.Series(index=[10, 11, 12], dtype="float64"),
pd.Series(dtype="float64"),
pd.Series([], dtype="float64"),
],
)
def test_series_keys(ps):
gds = cudf.from_pandas(ps)
assert_eq(ps.keys(), gds.keys())
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame(index=[10, 20, 30]),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB")),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[10, 20]),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[7, 8]),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[7, 20, 11, 9],
),
pd.DataFrame({"l": [10]}),
pd.DataFrame({"l": [10]}, index=[100]),
pd.DataFrame({"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]}),
pd.DataFrame(
{"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]},
index=[100, 200, 300, 400, 500, 0],
),
],
)
@pytest.mark.parametrize(
"other",
[
pd.DataFrame([[5, 6], [7, 8]], columns=list("AB")),
pd.DataFrame([[5, 6], [7, 8]], columns=list("BD")),
pd.DataFrame([[5, 6], [7, 8]], columns=list("DE")),
pd.DataFrame(),
pd.DataFrame(
{"c": [10, 11, 22, 33, 44, 100]}, index=[7, 8, 9, 10, 11, 20]
),
pd.DataFrame({"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]}),
pd.DataFrame({"l": [10]}),
pd.DataFrame({"l": [10]}, index=[200]),
pd.DataFrame([]),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
pd.DataFrame([], index=[100]),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
],
)
@pytest.mark.parametrize("sort", [False, True])
@pytest.mark.parametrize("ignore_index", [True, False])
def test_dataframe_append_dataframe(df, other, sort, ignore_index):
pdf = df
other_pd = other
gdf = cudf.from_pandas(df)
other_gd = cudf.from_pandas(other)
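    # DataFrame.append is deprecated in both pandas and cudf; both should warn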
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = pdf.append(other_pd, sort=sort, ignore_index=ignore_index)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.append(other_gd, sort=sort, ignore_index=ignore_index)
if expected.shape != df.shape:
assert_eq(expected.fillna(-1), actual.fillna(-1), check_dtype=False)
else:
assert_eq(expected, actual, check_index_type=not gdf.empty)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame(index=[10, 20, 30]),
pd.DataFrame({12: [], 22: []}),
pd.DataFrame([[1, 2], [3, 4]], columns=[10, 20]),
pd.DataFrame([[1, 2], [3, 4]], columns=[0, 1], index=[10, 20]),
pd.DataFrame([[1, 2], [3, 4]], columns=[1, 0], index=[7, 8]),
pd.DataFrame(
{
23: [315.3324, 3243.32432, 3232.332, -100.32],
33: [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
0: [315.3324, 3243.32432, 3232.332, -100.32],
1: [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[7, 20, 11, 9],
),
],
)
@pytest.mark.parametrize(
"other",
[
pd.Series([10, 11, 23, 234, 13]),
pytest.param(
pd.Series([10, 11, 23, 234, 13], index=[11, 12, 13, 44, 33]),
marks=pytest.mark.xfail(
condition=not PANDAS_GE_150,
reason="pandas bug: "
"https://github.com/pandas-dev/pandas/issues/35092",
),
),
{1: 1},
{0: 10, 1: 100, 2: 102},
],
)
@pytest.mark.parametrize("sort", [False, True])
def test_dataframe_append_series_dict(df, other, sort):
pdf = df
other_pd = other
gdf = cudf.from_pandas(df)
if isinstance(other, pd.Series):
other_gd = cudf.from_pandas(other)
else:
other_gd = other
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = pdf.append(other_pd, ignore_index=True, sort=sort)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.append(other_gd, ignore_index=True, sort=sort)
if expected.shape != df.shape:
# Ignore the column type comparison because pandas incorrectly
# returns pd.Index([1, 2, 3], dtype="object") instead
# of pd.Index([1, 2, 3], dtype="int64")
assert_eq(
expected.fillna(-1),
actual.fillna(-1),
check_dtype=False,
check_column_type=False,
check_index_type=True,
)
else:
assert_eq(expected, actual, check_index_type=not gdf.empty)
def test_dataframe_append_series_mixed_index():
df = cudf.DataFrame({"first": [], "d": []})
sr = cudf.Series([1, 2, 3, 4])
with pytest.raises(
TypeError,
match=re.escape(
"cudf does not support mixed types, please type-cast "
"the column index of dataframe and index of series "
"to same dtypes."
),
):
with pytest.warns(FutureWarning, match="append method is deprecated"):
df.append(sr, ignore_index=True)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame(index=[10, 20, 30]),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB")),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[10, 20]),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[7, 8]),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[7, 20, 11, 9],
),
pd.DataFrame({"l": [10]}),
pd.DataFrame({"l": [10]}, index=[100]),
pd.DataFrame({"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]}),
pd.DataFrame(
{"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]},
index=[100, 200, 300, 400, 500, 0],
),
],
)
@pytest.mark.parametrize(
"other",
[
[pd.DataFrame([[5, 6], [7, 8]], columns=list("AB"))],
[
pd.DataFrame([[5, 6], [7, 8]], columns=list("AB")),
pd.DataFrame([[5, 6], [7, 8]], columns=list("BD")),
pd.DataFrame([[5, 6], [7, 8]], columns=list("DE")),
],
[pd.DataFrame(), pd.DataFrame(), pd.DataFrame(), pd.DataFrame()],
[
pd.DataFrame(
{"c": [10, 11, 22, 33, 44, 100]}, index=[7, 8, 9, 10, 11, 20]
),
pd.DataFrame(),
pd.DataFrame(),
pd.DataFrame([[5, 6], [7, 8]], columns=list("AB")),
],
[
pd.DataFrame({"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]}),
pd.DataFrame({"l": [10]}),
pd.DataFrame({"l": [10]}, index=[200]),
],
[pd.DataFrame([]), pd.DataFrame([], index=[100])],
[
pd.DataFrame([]),
pd.DataFrame([], index=[100]),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
],
[
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
],
[
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
],
[
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
],
[
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[0, 100, 200, 300],
),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
],
],
)
@pytest.mark.parametrize("sort", [False, True])
@pytest.mark.parametrize("ignore_index", [True, False])
def test_dataframe_append_dataframe_lists(df, other, sort, ignore_index):
pdf = df
other_pd = other
gdf = cudf.from_pandas(df)
other_gd = [
cudf.from_pandas(o) if isinstance(o, pd.DataFrame) else o
for o in other
]
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = pdf.append(other_pd, sort=sort, ignore_index=ignore_index)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.append(other_gd, sort=sort, ignore_index=ignore_index)
if expected.shape != df.shape:
assert_eq(expected.fillna(-1), actual.fillna(-1), check_dtype=False)
else:
assert_eq(expected, actual, check_index_type=not gdf.empty)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame({"A": [1, 2, 3, np.nan, None, 6]}),
pd.Series([1, 2, 3, None, np.nan, 5, 6, np.nan]),
],
)
@pytest.mark.parametrize("alias", ["bfill", "backfill"])
def test_dataframe_bfill(df, alias):
gdf = cudf.from_pandas(df)
    expected = getattr(df, alias)()
    with expect_warning_if(alias == "backfill"):
        actual = getattr(gdf, alias)()
assert_eq(expected, actual)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame({"A": [1, 2, 3, np.nan, None, 6]}),
pd.Series([1, 2, 3, None, np.nan, 5, 6, np.nan]),
],
)
@pytest.mark.parametrize("alias", ["ffill", "pad"])
def test_dataframe_ffill(df, alias):
gdf = cudf.from_pandas(df)
    expected = getattr(df, alias)()
    with expect_warning_if(alias == "pad"):
        actual = getattr(gdf, alias)()
assert_eq(expected, actual)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB")),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[10, 20]),
pd.DataFrame([[1, 2], [3, 4]], columns=list("AB"), index=[7, 8]),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
}
),
pd.DataFrame(
{
"a": [315.3324, 3243.32432, 3232.332, -100.32],
"z": [0.3223, 0.32, 0.0000232, 0.32224],
},
index=[7, 20, 11, 9],
),
pd.DataFrame({"l": [10]}),
pd.DataFrame({"l": [10]}, index=[100]),
pd.DataFrame({"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]}),
pd.DataFrame(
{"f": [10.2, 11.2332, 0.22, 3.3, 44.23, 10.0]},
index=[100, 200, 300, 400, 500, 0],
),
pd.DataFrame({"first_col": [], "second_col": [], "third_col": []}),
],
)
@pytest.mark.parametrize(
"other",
[
[[1, 2], [10, 100]],
[[1, 2, 10, 100, 0.1, 0.2, 0.0021]],
[[]],
[[], [], [], []],
[[0.23, 0.00023, -10.00, 100, 200, 1000232, 1232.32323]],
],
)
@pytest.mark.parametrize("sort", [False, True])
@pytest.mark.parametrize("ignore_index", [True, False])
def test_dataframe_append_lists(df, other, sort, ignore_index):
pdf = df
other_pd = other
gdf = cudf.from_pandas(df)
other_gd = [
cudf.from_pandas(o) if isinstance(o, pd.DataFrame) else o
for o in other
]
with pytest.warns(FutureWarning, match="append method is deprecated"):
expected = pdf.append(other_pd, sort=sort, ignore_index=ignore_index)
with pytest.warns(FutureWarning, match="append method is deprecated"):
actual = gdf.append(other_gd, sort=sort, ignore_index=ignore_index)
if expected.shape != df.shape:
assert_eq(
expected.fillna(-1),
actual.fillna(-1),
check_dtype=False,
check_column_type=not gdf.empty,
)
else:
assert_eq(expected, actual, check_index_type=not gdf.empty)
def test_dataframe_append_error():
df = cudf.DataFrame({"a": [1, 2, 3]})
ps = cudf.Series([1, 2, 3])
with pytest.raises(
TypeError,
match="Can only append a Series if ignore_index=True "
"or if the Series has a name",
):
with pytest.warns(FutureWarning, match="append method is deprecated"):
df.append(ps)
def test_cudf_arrow_array_error():
df = cudf.DataFrame({"a": [1, 2, 3]})
with pytest.raises(
TypeError,
match="Implicit conversion to a host PyArrow object via "
"__arrow_array__ is not allowed. Consider using .to_arrow()",
):
df.__arrow_array__()
sr = cudf.Series([1, 2, 3])
with pytest.raises(
TypeError,
match="Implicit conversion to a host PyArrow object via "
"__arrow_array__ is not allowed. Consider using .to_arrow()",
):
sr.__arrow_array__()
sr = cudf.Series(["a", "b", "c"])
with pytest.raises(
TypeError,
match="Implicit conversion to a host PyArrow object via "
"__arrow_array__ is not allowed. Consider using .to_arrow()",
):
sr.__arrow_array__()
@pytest.mark.parametrize(
"make_weights_axis_1",
[lambda _: None, lambda s: [1] * s, lambda s: np.ones(s)],
)
def test_sample_axis_1(
sample_n_frac, random_state_tuple_axis_1, make_weights_axis_1
):
n, frac = sample_n_frac
pd_random_state, gd_random_state, checker = random_state_tuple_axis_1
pdf = pd.DataFrame(
{
"a": [1, 2, 3, 4, 5],
"float": [0.05, 0.2, 0.3, 0.2, 0.25],
"int": [1, 3, 5, 4, 2],
},
)
df = cudf.DataFrame.from_pandas(pdf)
weights = make_weights_axis_1(len(pdf.columns))
expected = pdf.sample(
n=n,
frac=frac,
replace=False,
random_state=pd_random_state,
weights=weights,
axis=1,
)
got = df.sample(
n=n,
frac=frac,
replace=False,
random_state=gd_random_state,
weights=weights,
axis=1,
)
checker(expected, got)
@pytest.mark.parametrize(
"pdf",
[
pd.DataFrame(
{
"a": [1, 2, 3, 4, 5],
"float": [0.05, 0.2, 0.3, 0.2, 0.25],
"int": [1, 3, 5, 4, 2],
},
),
pd.Series([1, 2, 3, 4, 5]),
],
)
@pytest.mark.parametrize("replace", [True, False])
def test_sample_axis_0(
pdf, sample_n_frac, replace, random_state_tuple_axis_0, make_weights_axis_0
):
n, frac = sample_n_frac
pd_random_state, gd_random_state, checker = random_state_tuple_axis_0
df = cudf.from_pandas(pdf)
pd_weights, gd_weights = make_weights_axis_0(
len(pdf), isinstance(gd_random_state, np.random.RandomState)
)
if (
not replace
and not isinstance(gd_random_state, np.random.RandomState)
and gd_weights is not None
):
pytest.skip(
"`cupy.random.RandomState` doesn't support weighted sampling "
"without replacement."
)
expected = pdf.sample(
n=n,
frac=frac,
replace=replace,
random_state=pd_random_state,
weights=pd_weights,
axis=0,
)
got = df.sample(
n=n,
frac=frac,
replace=replace,
random_state=gd_random_state,
weights=gd_weights,
axis=0,
)
checker(expected, got)
@pytest.mark.parametrize("replace", [True, False])
@pytest.mark.parametrize(
"random_state_lib", [cupy.random.RandomState, np.random.RandomState]
)
def test_sample_reproducibility(replace, random_state_lib):
df = cudf.DataFrame({"a": cupy.arange(0, 1024)})
n = 1024
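    # The same seed must produce the same sample, with or without replacement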
expected = df.sample(n, replace=replace, random_state=random_state_lib(10))
out = df.sample(n, replace=replace, random_state=random_state_lib(10))
assert_eq(expected, out)
@pytest.mark.parametrize("axis", [0, 1])
def test_sample_invalid_n_frac_combo(axis):
n, frac = 2, 0.5
pdf = pd.DataFrame(
{
"a": [1, 2, 3, 4, 5],
"float": [0.05, 0.2, 0.3, 0.2, 0.25],
"int": [1, 3, 5, 4, 2],
},
)
df = cudf.DataFrame.from_pandas(pdf)
assert_exceptions_equal(
lfunc=pdf.sample,
rfunc=df.sample,
lfunc_args_and_kwargs=([], {"n": n, "frac": frac, "axis": axis}),
rfunc_args_and_kwargs=([], {"n": n, "frac": frac, "axis": axis}),
)
@pytest.mark.parametrize("n, frac", [(100, None), (None, 3)])
@pytest.mark.parametrize("axis", [0, 1])
def test_oversample_without_replace(n, frac, axis):
pdf = pd.DataFrame({"a": [1, 2, 3, 4, 5]})
df = cudf.DataFrame.from_pandas(pdf)
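    # Requesting more rows than available without replacement should raise
    # the same error in pandas and cudf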
assert_exceptions_equal(
lfunc=pdf.sample,
rfunc=df.sample,
lfunc_args_and_kwargs=(
[],
{"n": n, "frac": frac, "axis": axis, "replace": False},
),
rfunc_args_and_kwargs=(
[],
{"n": n, "frac": frac, "axis": axis, "replace": False},
),
)
@pytest.mark.parametrize("random_state", [None, cupy.random.RandomState(42)])
def test_sample_unsupported_arguments(random_state):
df = cudf.DataFrame({"float": [0.05, 0.2, 0.3, 0.2, 0.25]})
with pytest.raises(
NotImplementedError,
match="Random sampling with cupy does not support these inputs.",
):
df.sample(
n=2, replace=False, random_state=random_state, weights=[1] * 5
)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame(index=[100, 10, 1, 0]),
pd.DataFrame(columns=["a", "b", "c", "d"]),
pd.DataFrame(columns=["a", "b", "c", "d"], index=[100]),
pd.DataFrame(
columns=["a", "b", "c", "d"], index=[100, 10000, 2131, 133]
),
pd.DataFrame({"a": [1, 2, 3], "b": ["abc", "xyz", "klm"]}),
],
)
def test_dataframe_empty(df):
pdf = df
gdf = cudf.from_pandas(pdf)
assert_eq(pdf.empty, gdf.empty)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(),
pd.DataFrame(index=[100, 10, 1, 0]),
pd.DataFrame(columns=["a", "b", "c", "d"]),
pd.DataFrame(columns=["a", "b", "c", "d"], index=[100]),
pd.DataFrame(
columns=["a", "b", "c", "d"], index=[100, 10000, 2131, 133]
),
pd.DataFrame({"a": [1, 2, 3], "b": ["abc", "xyz", "klm"]}),
],
)
def test_dataframe_size(df):
pdf = df
gdf = cudf.from_pandas(pdf)
assert_eq(pdf.size, gdf.size)
@pytest.mark.parametrize(
"ps",
[
pd.Series(dtype="float64"),
pd.Series(index=[100, 10, 1, 0], dtype="float64"),
pd.Series([], dtype="float64"),
pd.Series(["a", "b", "c", "d"]),
pd.Series(["a", "b", "c", "d"], index=[0, 1, 10, 11]),
],
)
def test_series_empty(ps):
gs = cudf.from_pandas(ps)
assert_eq(ps.empty, gs.empty)
@pytest.mark.parametrize(
"data",
[
None,
[],
[1],
{"a": [10, 11, 12]},
{
"a": [10, 11, 12],
"another column name": [12, 22, 34],
"xyz": [0, 10, 11],
},
],
)
@pytest.mark.parametrize(
"columns",
[["a"], ["another column name"], None, pd.Index(["a"], name="index name")],
)
def test_dataframe_init_with_columns(data, columns, request):
if data == [] and columns is None and not PANDAS_GE_200:
request.node.add_marker(
pytest.mark.xfail(reason=".column returns Index[object]")
)
pdf = pd.DataFrame(data, columns=columns)
gdf = cudf.DataFrame(data, columns=columns)
assert_eq(
pdf,
gdf,
check_index_type=len(pdf.index) != 0,
check_dtype=not (pdf.empty and len(pdf.columns)),
)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data, ignore_dtype",
[
([pd.Series([1, 2, 3])], False),
([pd.Series(index=[1, 2, 3], dtype="float64")], False),
([pd.Series(name="empty series name", dtype="float64")], False),
(
[pd.Series([1]), pd.Series([], dtype="float64"), pd.Series([3])],
False,
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], dtype="float64"),
pd.Series([3], name="series that is named"),
],
False,
),
([pd.Series([1, 2, 3], name="hi")] * 10, False),
([pd.Series([1, 2, 3], name=None, index=[10, 11, 12])] * 10, False),
(
[
pd.Series([1, 2, 3], name=None, index=[10, 11, 12]),
pd.Series([1, 2, 30], name=None, index=[13, 144, 15]),
],
True,
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], dtype="float64"),
pd.Series(index=[10, 11, 12], dtype="float64"),
],
False,
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], name="abc", dtype="float64"),
pd.Series(index=[10, 11, 12], dtype="float64"),
],
False,
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([1, -100, 200, -399, 400], name="abc"),
pd.Series([111, 222, 333], index=[10, 11, 12]),
],
False,
),
],
)
@pytest.mark.parametrize(
"columns",
[
None,
["0"],
[0],
["abc"],
[144, 13],
[2, 1, 0],
pd.Index(["abc"], name="custom_name"),
],
)
def test_dataframe_init_from_series_list(data, ignore_dtype, columns, request):
if columns is None and data[0].empty and not PANDAS_GE_200:
request.applymarker(
pytest.mark.xfail(reason=".column returns Index[object]")
)
gd_data = [cudf.from_pandas(obj) for obj in data]
expected = pd.DataFrame(data, columns=columns)
actual = cudf.DataFrame(gd_data, columns=columns)
if ignore_dtype:
# When a union is performed to generate columns,
# the order is never guaranteed. Hence sort by
# columns before comparison.
if not expected.columns.equals(actual.columns):
expected = expected.sort_index(axis=1)
actual = actual.sort_index(axis=1)
assert_eq(
expected.fillna(-1),
actual.fillna(-1),
check_dtype=False,
check_index_type=True,
)
else:
assert_eq(expected, actual, check_index_type=True)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data, ignore_dtype, index",
[
([pd.Series([1, 2, 3])], False, ["a", "b", "c"]),
([pd.Series(index=[1, 2, 3], dtype="float64")], False, ["a", "b"]),
(
[pd.Series(name="empty series name", dtype="float64")],
False,
["index1"],
),
(
[pd.Series([1]), pd.Series([], dtype="float64"), pd.Series([3])],
False,
["0", "2", "1"],
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], dtype="float64"),
pd.Series([3], name="series that is named"),
],
False,
["_", "+", "*"],
),
([pd.Series([1, 2, 3], name="hi")] * 10, False, ["mean"] * 10),
(
[pd.Series([1, 2, 3], name=None, index=[10, 11, 12])] * 10,
False,
["abc"] * 10,
),
(
[
pd.Series([1, 2, 3], name=None, index=[10, 11, 12]),
pd.Series([1, 2, 30], name=None, index=[13, 144, 15]),
],
True,
["set_index_a", "set_index_b"],
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], dtype="float64"),
pd.Series(index=[10, 11, 12], dtype="float64"),
],
False,
["a", "b", "c"],
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], name="abc", dtype="float64"),
pd.Series(index=[10, 11, 12], dtype="float64"),
],
False,
["a", "v", "z"],
),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([1, -100, 200, -399, 400], name="abc"),
pd.Series([111, 222, 333], index=[10, 11, 12]),
],
False,
["a", "v", "z"],
),
],
)
@pytest.mark.parametrize(
"columns", [None, ["0"], [0], ["abc"], [144, 13], [2, 1, 0]]
)
def test_dataframe_init_from_series_list_with_index(
data,
ignore_dtype,
index,
columns,
request,
):
if columns is None and data[0].empty and not PANDAS_GE_200:
request.applymarker(
pytest.mark.xfail(reason=".column returns Index[object]")
)
gd_data = [cudf.from_pandas(obj) for obj in data]
expected = pd.DataFrame(data, columns=columns, index=index)
actual = cudf.DataFrame(gd_data, columns=columns, index=index)
if ignore_dtype:
# When a union is performed to generate columns,
# the order is never guaranteed. Hence sort by
# columns before comparison.
if not expected.columns.equals(actual.columns):
expected = expected.sort_index(axis=1)
actual = actual.sort_index(axis=1)
assert_eq(expected.fillna(-1), actual.fillna(-1), check_dtype=False)
else:
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data, index",
[
([pd.Series([1, 2]), pd.Series([1, 2])], ["a", "b", "c"]),
(
[
pd.Series([1, 0.324234, 32424.323, -1233, 34242]),
pd.Series([], dtype="float64"),
pd.Series([3], name="series that is named"),
],
["_", "+"],
),
([pd.Series([1, 2, 3], name="hi")] * 10, ["mean"] * 9),
],
)
def test_dataframe_init_from_series_list_with_index_error(data, index):
gd_data = [cudf.from_pandas(obj) for obj in data]
assert_exceptions_equal(
pd.DataFrame,
cudf.DataFrame,
([data], {"index": index}),
([gd_data], {"index": index}),
)
@pytest.mark.parametrize(
"data",
[
[pd.Series([1, 2, 3], index=["a", "a", "a"])],
[pd.Series([1, 2, 3], index=["a", "a", "a"])] * 4,
[
pd.Series([1, 2, 3], index=["a", "b", "a"]),
pd.Series([1, 2, 3], index=["b", "b", "a"]),
],
[
pd.Series([1, 2, 3], index=["a", "b", "z"]),
pd.Series([1, 2, 3], index=["u", "b", "a"]),
pd.Series([1, 2, 3], index=["u", "b", "u"]),
],
],
)
def test_dataframe_init_from_series_list_duplicate_index_error(data):
gd_data = [cudf.from_pandas(obj) for obj in data]
assert_exceptions_equal(
lfunc=pd.DataFrame,
rfunc=cudf.DataFrame,
lfunc_args_and_kwargs=([], {"data": data}),
rfunc_args_and_kwargs=([], {"data": gd_data}),
check_exception_type=False,
)
def test_dataframe_iterrows_itertuples():
df = cudf.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
with pytest.raises(
TypeError,
match=re.escape(
"cuDF does not support iteration of DataFrame "
"via itertuples. Consider using "
"`.to_pandas().itertuples()` "
"if you wish to iterate over namedtuples."
),
):
df.itertuples()
with pytest.raises(
TypeError,
match=re.escape(
"cuDF does not support iteration of DataFrame "
"via iterrows. Consider using "
"`.to_pandas().iterrows()` "
"if you wish to iterate over each row."
),
):
df.iterrows()
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
cudf.DataFrame(
{
"a": [1, 2, 3],
"b": [10, 22, 33],
"c": [0.3234, 0.23432, 0.0],
"d": ["hello", "world", "hello"],
}
),
cudf.DataFrame(
{
"a": [1, 2, 3],
"b": ["hello", "world", "hello"],
"c": [0.3234, 0.23432, 0.0],
}
),
cudf.DataFrame(
{
"int_data": [1, 2, 3],
"str_data": ["hello", "world", "hello"],
"float_data": [0.3234, 0.23432, 0.0],
"timedelta_data": cudf.Series(
[1, 2, 1], dtype="timedelta64[ns]"
),
"datetime_data": cudf.Series(
[1, 2, 1], dtype="datetime64[ns]"
),
}
),
cudf.DataFrame(
{
"int_data": [1, 2, 3],
"str_data": ["hello", "world", "hello"],
"float_data": [0.3234, 0.23432, 0.0],
"timedelta_data": cudf.Series(
[1, 2, 1], dtype="timedelta64[ns]"
),
"datetime_data": cudf.Series(
[1, 2, 1], dtype="datetime64[ns]"
),
"category_data": cudf.Series(
["a", "a", "b"], dtype="category"
),
}
),
],
)
@pytest.mark.parametrize(
"include",
[None, "all", ["object"], ["int"], ["object", "int", "category"]],
)
def test_describe_misc_include(df, include):
pdf = df.to_pandas()
expected = pdf.describe(include=include, datetime_is_numeric=True)
actual = df.describe(include=include, datetime_is_numeric=True)
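    # Nulls in object columns break direct comparison, so fill and cast to str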
for col in expected.columns:
if expected[col].dtype == np.dtype("object"):
expected[col] = expected[col].fillna(-1).astype("str")
actual[col] = actual[col].fillna(-1).astype("str")
assert_eq(expected, actual)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"df",
[
cudf.DataFrame(
{
"a": [1, 2, 3],
"b": [10, 22, 33],
"c": [0.3234, 0.23432, 0.0],
"d": ["hello", "world", "hello"],
}
),
cudf.DataFrame(
{
"a": [1, 2, 3],
"b": ["hello", "world", "hello"],
"c": [0.3234, 0.23432, 0.0],
}
),
cudf.DataFrame(
{
"int_data": [1, 2, 3],
"str_data": ["hello", "world", "hello"],
"float_data": [0.3234, 0.23432, 0.0],
"timedelta_data": cudf.Series(
[1, 2, 1], dtype="timedelta64[ns]"
),
"datetime_data": cudf.Series(
[1, 2, 1], dtype="datetime64[ns]"
),
}
),
cudf.DataFrame(
{
"int_data": [1, 2, 3],
"str_data": ["hello", "world", "hello"],
"float_data": [0.3234, 0.23432, 0.0],
"timedelta_data": cudf.Series(
[1, 2, 1], dtype="timedelta64[ns]"
),
"datetime_data": cudf.Series(
[1, 2, 1], dtype="datetime64[ns]"
),
"category_data": cudf.Series(
["a", "a", "b"], dtype="category"
),
}
),
],
)
@pytest.mark.parametrize(
"exclude", [None, ["object"], ["int"], ["object", "int", "category"]]
)
def test_describe_misc_exclude(df, exclude):
pdf = df.to_pandas()
expected = pdf.describe(exclude=exclude, datetime_is_numeric=True)
actual = df.describe(exclude=exclude, datetime_is_numeric=True)
for col in expected.columns:
if expected[col].dtype == np.dtype("object"):
expected[col] = expected[col].fillna(-1).astype("str")
actual[col] = actual[col].fillna(-1).astype("str")
assert_eq(expected, actual)
@pytest.mark.parametrize(
"df",
[
cudf.DataFrame({"a": [1, 2, 3]}),
cudf.DataFrame(
{"a": [1, 2, 3], "b": ["a", "z", "c"]}, index=["a", "z", "x"]
),
cudf.DataFrame(
{
"a": [1, 2, 3, None, 2, 1, None],
"b": ["a", "z", "c", "a", "v", "z", "z"],
}
),
cudf.DataFrame({"a": [], "b": []}),
cudf.DataFrame({"a": [None, None], "b": [None, None]}),
cudf.DataFrame(
{
"a": ["hello", "world", "rapids", "ai", "nvidia"],
"b": cudf.Series(
[1, 21, 21, 11, 11],
dtype="timedelta64[s]",
index=["a", "b", "c", "d", " e"],
),
},
index=["a", "b", "c", "d", " e"],
),
cudf.DataFrame(
{
"a": ["hello", None, "world", "rapids", None, "ai", "nvidia"],
"b": cudf.Series(
[1, 21, None, 11, None, 11, None], dtype="datetime64[s]"
),
}
),
],
)
@pytest.mark.parametrize("numeric_only", [True, False])
@pytest.mark.parametrize("dropna", [True, False])
def test_dataframe_mode(df, numeric_only, dropna):
pdf = df.to_pandas()
expected = pdf.mode(numeric_only=numeric_only, dropna=dropna)
actual = df.mode(numeric_only=numeric_only, dropna=dropna)
assert_eq(expected, actual, check_dtype=False)
@pytest.mark.parametrize(
"lhs, rhs", [("a", "a"), ("a", "b"), (1, 1.0), (None, None), (None, "a")]
)
def test_equals_names(lhs, rhs):
lhs = cudf.DataFrame({lhs: [1, 2]})
rhs = cudf.DataFrame({rhs: [1, 2]})
got = lhs.equals(rhs)
expect = lhs.to_pandas().equals(rhs.to_pandas())
assert_eq(expect, got)
def test_equals_dtypes():
lhs = cudf.DataFrame({"a": [1, 2.0]})
rhs = cudf.DataFrame({"a": [1, 2]})
got = lhs.equals(rhs)
expect = lhs.to_pandas().equals(rhs.to_pandas())
assert_eq(expect, got)
@pytest.mark.parametrize(
"df1",
[
pd.DataFrame({"a": [10, 11, 12]}, index=["a", "b", "z"]),
pd.DataFrame({"z": ["a"]}),
pd.DataFrame({"a": [], "b": []}),
],
)
@pytest.mark.parametrize(
"df2",
[
pd.DataFrame(),
pd.DataFrame({"a": ["a", "a", "c", "z", "A"], "z": [1, 2, 3, 4, 5]}),
],
)
@pytest.mark.parametrize(
"op",
[
operator.eq,
operator.ne,
operator.lt,
operator.gt,
operator.le,
operator.ge,
],
)
def test_dataframe_error_equality(df1, df2, op):
gdf1 = cudf.from_pandas(df1)
gdf2 = cudf.from_pandas(df2)
assert_exceptions_equal(op, op, ([df1, df2],), ([gdf1, gdf2],))
@pytest.mark.parametrize(
"df,expected_pdf",
[
(
cudf.DataFrame(
{
"a": cudf.Series([1, 2, None, 3], dtype="uint8"),
"b": cudf.Series([23, None, None, 32], dtype="uint16"),
}
),
pd.DataFrame(
{
"a": pd.Series([1, 2, None, 3], dtype=pd.UInt8Dtype()),
"b": pd.Series(
[23, None, None, 32], dtype=pd.UInt16Dtype()
),
}
),
),
(
cudf.DataFrame(
{
"a": cudf.Series([None, 123, None, 1], dtype="uint32"),
"b": cudf.Series(
[234, 2323, 23432, None, None, 224], dtype="uint64"
),
}
),
pd.DataFrame(
{
"a": pd.Series(
[None, 123, None, 1], dtype=pd.UInt32Dtype()
),
"b": pd.Series(
[234, 2323, 23432, None, None, 224],
dtype=pd.UInt64Dtype(),
),
}
),
),
(
cudf.DataFrame(
{
"a": cudf.Series(
[-10, 1, None, -1, None, 3], dtype="int8"
),
"b": cudf.Series(
[111, None, 222, None, 13], dtype="int16"
),
}
),
pd.DataFrame(
{
"a": pd.Series(
[-10, 1, None, -1, None, 3], dtype=pd.Int8Dtype()
),
"b": pd.Series(
[111, None, 222, None, 13], dtype=pd.Int16Dtype()
),
}
),
),
(
cudf.DataFrame(
{
"a": cudf.Series(
[11, None, 22, 33, None, 2, None, 3], dtype="int32"
),
"b": cudf.Series(
[32431, None, None, 32322, 0, 10, -32324, None],
dtype="int64",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
[11, None, 22, 33, None, 2, None, 3],
dtype=pd.Int32Dtype(),
),
"b": pd.Series(
[32431, None, None, 32322, 0, 10, -32324, None],
dtype=pd.Int64Dtype(),
),
}
),
),
(
cudf.DataFrame(
{
"a": cudf.Series(
[True, None, False, None, False, True, True, False],
dtype="bool_",
),
"b": cudf.Series(
[
"abc",
"a",
None,
"hello world",
"foo buzz",
"",
None,
"rapids ai",
],
dtype="object",
),
"c": cudf.Series(
[0.1, None, 0.2, None, 3, 4, 1000, None],
dtype="float64",
),
}
),
pd.DataFrame(
{
"a": pd.Series(
[True, None, False, None, False, True, True, False],
dtype=pd.BooleanDtype(),
),
"b": pd.Series(
[
"abc",
"a",
None,
"hello world",
"foo buzz",
"",
None,
"rapids ai",
],
dtype=pd.StringDtype(),
),
"c": pd.Series(
[0.1, None, 0.2, None, 3, 4, 1000, None],
dtype=pd.Float64Dtype(),
),
}
),
),
],
)
def test_dataframe_to_pandas_nullable_dtypes(df, expected_pdf):
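    # nullable=True should map cudf nulls onto the matching pandas
    # extension (nullable) dtypes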
actual_pdf = df.to_pandas(nullable=True)
assert_eq(actual_pdf, expected_pdf)
@pytest.mark.parametrize(
"data",
[
[{"a": 1, "b": 2, "c": 3}, {"a": 4, "b": 5, "c": 6}],
[{"a": 1, "b": 2, "c": None}, {"a": None, "b": 5, "c": 6}],
[{"a": 1, "b": 2}, {"a": 1, "b": 5, "c": 6}],
[{"a": 1, "b": 2}, {"b": 5, "c": 6}],
[{}, {"a": 1, "b": 5, "c": 6}],
[{"a": 1, "b": 2, "c": 3}, {"a": 4.5, "b": 5.5, "c": 6.5}],
],
)
def test_dataframe_init_from_list_of_dicts(data):
expect = pd.DataFrame(data)
got = cudf.DataFrame(data)
assert_eq(expect, got)
def test_dataframe_pipe():
pdf = pd.DataFrame()
gdf = cudf.DataFrame()
def add_int_col(df, column):
df[column] = df._constructor_sliced([10, 20, 30, 40])
return df
def add_str_col(df, column):
df[column] = df._constructor_sliced(["a", "b", "xyz", "ai"])
return df
expected = (
pdf.pipe(add_int_col, "one")
.pipe(add_int_col, column="two")
.pipe(add_str_col, "three")
)
actual = (
gdf.pipe(add_int_col, "one")
.pipe(add_int_col, column="two")
.pipe(add_str_col, "three")
)
assert_eq(expected, actual)
expected = (
pdf.pipe((add_str_col, "df"), column="one")
.pipe(add_str_col, column="two")
.pipe(add_int_col, "three")
)
actual = (
gdf.pipe((add_str_col, "df"), column="one")
.pipe(add_str_col, column="two")
.pipe(add_int_col, "three")
)
assert_eq(expected, actual)
def test_dataframe_pipe_error():
pdf = pd.DataFrame()
gdf = cudf.DataFrame()
def custom_func(df, column):
df[column] = df._constructor_sliced([10, 20, 30, 40])
return df
assert_exceptions_equal(
lfunc=pdf.pipe,
rfunc=gdf.pipe,
lfunc_args_and_kwargs=([(custom_func, "columns")], {"columns": "d"}),
rfunc_args_and_kwargs=([(custom_func, "columns")], {"columns": "d"}),
)
@pytest.mark.parametrize(
"op",
["count", "kurt", "kurtosis", "skew"],
)
def test_dataframe_axis1_unsupported_ops(op):
df = cudf.DataFrame({"a": [1, 2, 3], "b": [8, 9, 10]})
with pytest.raises(
NotImplementedError, match="Only axis=0 is currently supported."
):
getattr(df, op)(axis=1)
def test_dataframe_from_pandas_duplicate_columns():
pdf = pd.DataFrame(columns=["a", "b", "c", "a"])
pdf["a"] = [1, 2, 3]
with pytest.raises(
ValueError, match="Duplicate column names are not allowed"
):
cudf.from_pandas(pdf)
@pytest.mark.parametrize(
"df",
[
pd.DataFrame(
{"a": [1, 2, 3], "b": [10, 11, 20], "c": ["a", "bcd", "xyz"]}
),
pd.DataFrame(),
],
)
@pytest.mark.parametrize(
"columns",
[
None,
["a"],
["c", "a"],
["b", "a", "c"],
[],
pd.Index(["c", "a"]),
cudf.Index(["c", "a"]),
["abc", "a"],
["column_not_exists1", "column_not_exists2"],
],
)
@pytest.mark.parametrize("index", [["abc", "def", "ghi"]])
def test_dataframe_constructor_columns(df, columns, index, request):
def assert_local_eq(actual, df, expected, host_columns):
check_index_type = not expected.empty
if host_columns is not None and any(
col not in df.columns for col in host_columns
):
assert_eq(
expected,
actual,
check_dtype=False,
check_index_type=check_index_type,
)
else:
assert_eq(expected, actual, check_index_type=check_index_type)
if df.empty and columns is None and not PANDAS_GE_200:
request.node.add_marker(
pytest.mark.xfail(
reason="pandas returns Index[object] instead of RangeIndex"
)
)
gdf = cudf.from_pandas(df)
host_columns = (
columns.to_pandas() if isinstance(columns, cudf.BaseIndex) else columns
)
expected = pd.DataFrame(df, columns=host_columns, index=index)
actual = cudf.DataFrame(gdf, columns=columns, index=index)
assert_local_eq(actual, df, expected, host_columns)
def test_dataframe_constructor_column_index_only():
columns = ["a", "b", "c"]
index = ["r1", "r2", "r3"]
gdf = cudf.DataFrame(index=index, columns=columns)
    assert id(gdf["a"]._column) != id(gdf["b"]._column)
    assert id(gdf["b"]._column) != id(gdf["c"]._column)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data",
[
{"a": [1, 2, 3], "b": [3.0, 4.0, 5.0], "c": [True, True, False]},
{"a": [1.0, 2.0, 3.0], "b": [3.0, 4.0, 5.0], "c": [True, True, False]},
{"a": [1, 2, 3], "b": [3, 4, 5], "c": [True, True, False]},
{"a": [1, 2, 3], "b": [True, True, False], "c": [False, True, False]},
{
"a": [1.0, 2.0, 3.0],
"b": [True, True, False],
"c": [False, True, False],
},
{"a": [1, 2, 3], "b": [3, 4, 5], "c": [2.0, 3.0, 4.0]},
{"a": [1, 2, 3], "b": [2.0, 3.0, 4.0], "c": [5.0, 6.0, 4.0]},
],
)
@pytest.mark.parametrize(
"aggs",
[
["min", "sum", "max"],
("min", "sum", "max"),
{"min", "sum", "max"},
"sum",
{"a": "sum", "b": "min", "c": "max"},
{"a": ["sum"], "b": ["min"], "c": ["max"]},
{"a": ("sum"), "b": ("min"), "c": ("max")},
{"a": {"sum"}, "b": {"min"}, "c": {"max"}},
{"a": ["sum", "min"], "b": ["sum", "max"], "c": ["min", "max"]},
{"a": ("sum", "min"), "b": ("sum", "max"), "c": ("min", "max")},
{"a": {"sum", "min"}, "b": {"sum", "max"}, "c": {"min", "max"}},
],
)
def test_agg_for_dataframes(data, aggs):
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data)
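    # Aggregation output order is not guaranteed, so sort before comparing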
expect = pdf.agg(aggs).sort_index()
got = gdf.agg(aggs).sort_index()
assert_eq(expect, got, check_dtype=False)
@pytest.mark.parametrize("aggs", [{"a": np.sum, "b": np.min, "c": np.max}])
def test_agg_for_unsupported_function(aggs):
gdf = cudf.DataFrame(
{"a": [1, 2, 3], "b": [3.0, 4.0, 5.0], "c": [True, True, False]}
)
with pytest.raises(NotImplementedError):
gdf.agg(aggs)
@pytest.mark.parametrize("aggs", ["asdf"])
def test_agg_for_dataframe_with_invalid_function(aggs):
gdf = cudf.DataFrame(
{"a": [1, 2, 3], "b": [3.0, 4.0, 5.0], "c": [True, True, False]}
)
with pytest.raises(
AttributeError,
match=f"{aggs} is not a valid function for 'DataFrame' object",
):
gdf.agg(aggs)
@pytest.mark.parametrize("aggs", [{"a": "asdf"}])
def test_agg_for_series_with_invalid_function(aggs):
gdf = cudf.DataFrame(
{"a": [1, 2, 3], "b": [3.0, 4.0, 5.0], "c": [True, True, False]}
)
with pytest.raises(
AttributeError,
match=f"{aggs['a']} is not a valid function for 'Series' object",
):
gdf.agg(aggs)
@pytest.mark.parametrize(
"aggs",
[
"sum",
["min", "sum", "max"],
{"a": {"sum", "min"}, "b": {"sum", "max"}, "c": {"min", "max"}},
],
)
def test_agg_for_dataframe_with_string_columns(aggs):
gdf = cudf.DataFrame(
{"a": ["m", "n", "o"], "b": ["t", "u", "v"], "c": ["x", "y", "z"]},
index=["a", "b", "c"],
)
with pytest.raises(
NotImplementedError,
match=re.escape(
"DataFrame.agg() is not supported for "
"frames containing string columns"
),
):
gdf.agg(aggs)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"join",
["left"],
)
@pytest.mark.parametrize(
"overwrite",
[True, False],
)
@pytest.mark.parametrize(
"errors",
["ignore"],
)
@pytest.mark.parametrize(
"data",
[
{"a": [1, 2, 3], "b": [3, 4, 5]},
{"e": [1.0, 2.0, 3.0], "d": [3.0, 4.0, 5.0]},
{"c": [True, False, False], "d": [False, True, True]},
{"g": [2.0, np.nan, 4.0], "n": [np.nan, np.nan, np.nan]},
{"d": [np.nan, np.nan, np.nan], "e": [np.nan, np.nan, np.nan]},
{"a": [1.0, 2, 3], "b": pd.Series([4.0, 8.0, 3.0], index=[1, 2, 3])},
{
"d": [1.0, 2.0, 3.0],
"c": pd.Series([np.nan, np.nan, np.nan], index=[1, 2, 3]),
},
{
"a": [False, True, False],
"b": pd.Series([1.0, 2.0, np.nan], index=[1, 2, 3]),
},
{
"a": [np.nan, np.nan, np.nan],
"e": pd.Series([np.nan, np.nan, np.nan], index=[1, 2, 3]),
},
],
)
@pytest.mark.parametrize(
"data2",
[
{"b": [3, 5, 6], "e": [8, 2, 1]},
{"c": [True, False, True], "d": [3.0, 4.0, 5.0]},
{"e": [False, False, True], "g": [True, True, False]},
{"g": [np.nan, np.nan, np.nan], "c": [np.nan, np.nan, np.nan]},
{"a": [7, 5, 8], "b": pd.Series([2.0, 7.0, 9.0], index=[0, 1, 2])},
{
"b": [np.nan, 2.0, np.nan],
"c": pd.Series([2, np.nan, 5.0], index=[2, 3, 4]),
},
{
"a": pd.Series([True, None, True], dtype=pd.BooleanDtype()),
"d": pd.Series(
[False, True, None], index=[0, 1, 3], dtype=pd.BooleanDtype()
),
},
],
)
def test_update_for_dataframes(request, data, data2, join, overwrite, errors):
request.applymarker(
pytest.mark.xfail(
condition=request.node.name
in {
"test_update_for_dataframes[data21-data2-ignore-True-left]",
"test_update_for_dataframes[data24-data7-ignore-True-left]",
"test_update_for_dataframes[data25-data2-ignore-True-left]",
},
reason="mixing of bools & non-bools is not allowed.",
)
)
pdf = pd.DataFrame(data)
gdf = cudf.DataFrame(data, nan_as_null=False)
other_pd = pd.DataFrame(data2)
other_gd = cudf.DataFrame(data2, nan_as_null=False)
pdf.update(other=other_pd, join=join, overwrite=overwrite, errors=errors)
gdf.update(other=other_gd, join=join, overwrite=overwrite, errors=errors)
assert_eq(pdf, gdf, check_dtype=False)
@pytest.mark.parametrize(
"join",
["right"],
)
def test_update_for_right_join(join):
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [3.0, 4.0, 5.0]})
other_gd = cudf.DataFrame({"a": [1, np.nan, 3], "b": [np.nan, 2.0, 5.0]})
with pytest.raises(
NotImplementedError, match="Only left join is supported"
):
gdf.update(other_gd, join)
@pytest.mark.parametrize(
"errors",
["raise"],
)
def test_update_for_data_overlap(errors):
pdf = pd.DataFrame({"a": [1, 2, 3], "b": [3.0, 4.0, 5.0]})
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [3.0, 4.0, 5.0]})
other_pd = pd.DataFrame({"a": [1, np.nan, 3], "b": [np.nan, 2.0, 5.0]})
other_gd = cudf.DataFrame({"a": [1, np.nan, 3], "b": [np.nan, 2.0, 5.0]})
assert_exceptions_equal(
lfunc=pdf.update,
rfunc=gdf.update,
lfunc_args_and_kwargs=([other_pd, errors], {}),
rfunc_args_and_kwargs=([other_gd, errors], {}),
)
@pytest.mark.parametrize(
"gdf",
[
cudf.DataFrame({"a": [[1], [2], [3]]}),
cudf.DataFrame(
{
"left-a": [0, 1, 2],
"a": [[1], None, [3]],
"right-a": ["abc", "def", "ghi"],
}
),
cudf.DataFrame(
{
"left-a": [[], None, None],
"a": [[1], None, [3]],
"right-a": ["abc", "def", "ghi"],
}
),
],
)
def test_dataframe_roundtrip_arrow_list_dtype(gdf):
table = gdf.to_arrow()
expected = cudf.DataFrame.from_arrow(table)
assert_eq(gdf, expected)
@pytest.mark.parametrize(
"gdf",
[
cudf.DataFrame({"a": [{"one": 3, "two": 4, "three": 10}]}),
cudf.DataFrame(
{
"left-a": [0, 1, 2],
"a": [{"x": 0.23, "y": 43}, None, {"x": 23.9, "y": 4.3}],
"right-a": ["abc", "def", "ghi"],
}
),
cudf.DataFrame(
{
"left-a": [{"a": 1}, None, None],
"a": [
{"one": 324, "two": 23432, "three": 324},
None,
{"one": 3.24, "two": 1, "three": 324},
],
"right-a": ["abc", "def", "ghi"],
}
),
],
)
def test_dataframe_roundtrip_arrow_struct_dtype(gdf):
table = gdf.to_arrow()
expected = cudf.DataFrame.from_arrow(table)
assert_eq(gdf, expected)
def test_dataframe_setitem_cupy_array():
np.random.seed(0)
pdf = pd.DataFrame(np.random.randn(10, 2))
gdf = cudf.from_pandas(pdf)
gpu_array = cupy.array([True, False] * 5)
pdf[gpu_array.get()] = 1.5
gdf[gpu_array] = 1.5
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
"data", [{"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}]
)
@pytest.mark.parametrize(
"index",
[{0: 123, 1: 4, 2: 6}],
)
@pytest.mark.parametrize(
"level",
["x", 0],
)
def test_rename_for_level_MultiIndex_dataframe(data, index, level):
pdf = pd.DataFrame(
data,
index=pd.MultiIndex.from_tuples([(0, 1, 2), (1, 2, 3), (2, 3, 4)]),
)
pdf.index.names = ["x", "y", "z"]
gdf = cudf.from_pandas(pdf)
expect = pdf.rename(index=index, level=level)
got = gdf.rename(index=index, level=level)
assert_eq(expect, got)
@pytest.mark.parametrize(
"data", [{"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}]
)
@pytest.mark.parametrize(
"columns",
[{"a": "f", "b": "g"}, {1: 3, 2: 4}, lambda s: 2 * s],
)
@pytest.mark.parametrize(
"level",
[0, 1],
)
def test_rename_for_level_MultiColumn_dataframe(data, columns, level):
gdf = cudf.DataFrame(data)
gdf.columns = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)])
pdf = gdf.to_pandas()
expect = pdf.rename(columns=columns, level=level)
got = gdf.rename(columns=columns, level=level)
assert_eq(expect, got)
def test_rename_for_level_RangeIndex_dataframe():
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
pdf = gdf.to_pandas()
expect = pdf.rename(columns={"a": "f"}, index={0: 3, 1: 4}, level=0)
got = gdf.rename(columns={"a": "f"}, index={0: 3, 1: 4}, level=0)
assert_eq(expect, got)
@pytest_xfail(reason="level=None not implemented yet")
def test_rename_for_level_is_None_MC():
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
gdf.columns = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)])
pdf = gdf.to_pandas()
expect = pdf.rename(columns={"a": "f"}, level=None)
got = gdf.rename(columns={"a": "f"}, level=None)
assert_eq(expect, got)
@pytest.mark.parametrize(
"data",
[
[
[[1, 2, 3], 11, "a"],
[None, 22, "e"],
[[4], 33, "i"],
[[], 44, "o"],
[[5, 6], 55, "u"],
], # nested
[
[1, 11, "a"],
[2, 22, "e"],
[3, 33, "i"],
[4, 44, "o"],
[5, 55, "u"],
], # non-nested
],
)
@pytest.mark.parametrize(
("labels", "label_to_explode"),
[
(None, 0),
(pd.Index(["a", "b", "c"]), "a"),
(
pd.MultiIndex.from_tuples(
[(0, "a"), (0, "b"), (1, "a")], names=["l0", "l1"]
),
(0, "a"),
),
],
)
@pytest.mark.parametrize("ignore_index", [True, False])
@pytest.mark.parametrize(
"p_index",
[
None,
["ia", "ib", "ic", "id", "ie"],
pd.MultiIndex.from_tuples(
[(0, "a"), (0, "b"), (0, "c"), (1, "a"), (1, "b")]
),
],
)
def test_explode(data, labels, ignore_index, p_index, label_to_explode):
pdf = pd.DataFrame(data, index=p_index, columns=labels)
gdf = cudf.from_pandas(pdf)
if PANDAS_GE_134:
expect = pdf.explode(label_to_explode, ignore_index)
else:
# https://github.com/pandas-dev/pandas/issues/43314
if isinstance(label_to_explode, int):
pdlabel_to_explode = [label_to_explode]
else:
pdlabel_to_explode = label_to_explode
expect = pdf.explode(pdlabel_to_explode, ignore_index)
got = gdf.explode(label_to_explode, ignore_index)
assert_eq(expect, got, check_dtype=False)
@pytest.mark.parametrize(
"df,ascending,expected",
[
(
cudf.DataFrame({"a": [10, 0, 2], "b": [-10, 10, 1]}),
True,
cupy.array([1, 2, 0], dtype="int32"),
),
(
cudf.DataFrame({"a": [10, 0, 2], "b": [-10, 10, 1]}),
False,
cupy.array([0, 2, 1], dtype="int32"),
),
],
)
def test_dataframe_argsort(df, ascending, expected):
actual = df.argsort(ascending=ascending)
assert_eq(actual, expected)
@pytest.mark.parametrize(
"data,columns,index",
[
(pd.Series([1, 2, 3]), None, None),
(pd.Series(["a", "b", None, "c"], name="abc"), None, None),
(
pd.Series(["a", "b", None, "c"], name="abc"),
["abc", "b"],
[1, 2, 3],
),
],
)
def test_dataframe_init_from_series(data, columns, index):
expected = pd.DataFrame(data, columns=columns, index=index)
actual = cudf.DataFrame(data, columns=columns, index=index)
assert_eq(
expected,
actual,
check_index_type=len(expected) != 0,
)
def test_frame_series_where():
gdf = cudf.DataFrame(
{"a": [1.0, 2.0, None, 3.0, None], "b": [None, 10.0, 11.0, None, 23.0]}
)
pdf = gdf.to_pandas()
expected = gdf.where(gdf.notna(), gdf.mean())
actual = pdf.where(pdf.notna(), pdf.mean(), axis=1)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[{"a": [1, 2, 3], "b": [1, 1, 0]}],
)
def test_frame_series_where_other(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = gdf.where(gdf["b"] == 1, cudf.NA)
actual = pdf.where(pdf["b"] == 1, pd.NA)
assert_eq(
actual.fillna(-1).values,
expected.fillna(-1).values,
check_dtype=False,
)
expected = gdf.where(gdf["b"] == 1, 0)
actual = pdf.where(pdf["b"] == 1, 0)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data, gkey",
[
(
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": [5, 4, 6, 4, 8, 7, 4, 5, 2],
"val2": [4, 5, 6, 1, 2, 9, 8, 5, 1],
"val3": [4, 5, 6, 1, 2, 9, 8, 5, 1],
},
["id", "val1", "val2"],
),
(
{
"id": [0] * 4 + [1] * 3,
"a": [10, 3, 4, 2, -3, 9, 10],
"b": [10, 23, -4, 2, -3, 9, 19],
},
["id", "a"],
),
(
{
"id": ["a", "a", "b", "b", "c", "c"],
"val": cudf.Series(
[None, None, None, None, None, None], dtype="float64"
),
},
["id"],
),
(
{
"id": ["a", "a", "b", "b", "c", "c"],
"val1": [None, 4, 6, 8, None, 2],
"val2": [4, 5, None, 2, 9, None],
},
["id"],
),
({"id": [1.0], "val1": [2.0], "val2": [3.0]}, ["id"]),
],
)
@pytest.mark.parametrize(
"min_per",
[0, 1, 2, 3, 4],
)
def test_pearson_corr_passing(data, gkey, min_per):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.groupby(gkey).corr(method="pearson", min_periods=min_per)
expected = pdf.groupby(gkey).corr(method="pearson", min_periods=min_per)
assert_eq(expected, actual)
@pytest.mark.parametrize("method", ["kendall", "spearman"])
def test_pearson_corr_unsupported_methods(method):
gdf = cudf.DataFrame(
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": [5, 4, 6, 4, 8, 7, 4, 5, 2],
"val2": [4, 5, 6, 1, 2, 9, 8, 5, 1],
"val3": [4, 5, 6, 1, 2, 9, 8, 5, 1],
}
)
with pytest.raises(
NotImplementedError,
match="Only pearson correlation is currently supported",
):
gdf.groupby("id").corr(method)
def test_pearson_corr_empty_columns():
gdf = cudf.DataFrame(columns=["id", "val1", "val2"])
pdf = gdf.to_pandas()
actual = gdf.groupby("id").corr("pearson")
expected = pdf.groupby("id").corr("pearson")
assert_eq(
expected,
actual,
check_dtype=False,
check_index_type=False,
)
@pytest.mark.parametrize(
"data",
[
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": ["v", "n", "k", "l", "m", "i", "y", "r", "w"],
"val2": ["d", "d", "d", "e", "e", "e", "f", "f", "f"],
},
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": [1, 1, 1, 2, 2, 2, 3, 3, 3],
"val2": ["d", "d", "d", "e", "e", "e", "f", "f", "f"],
},
],
)
@pytest.mark.parametrize("gkey", ["id", "val1", "val2"])
def test_pearson_corr_invalid_column_types(data, gkey):
with pytest.raises(
TypeError,
match="Correlation accepts only numerical column-pairs",
):
cudf.DataFrame(data).groupby(gkey).corr("pearson")
def test_pearson_corr_multiindex_dataframe():
gdf = cudf.DataFrame(
{"a": [1, 1, 2, 2], "b": [1, 1, 2, 3], "c": [2, 3, 4, 5]}
).set_index(["a", "b"])
actual = gdf.groupby(level="a").corr("pearson")
expected = gdf.to_pandas().groupby(level="a").corr("pearson")
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{"a": [np.nan, 1, 2], "b": [None, None, None]},
{"a": [1, 2, np.nan, 2], "b": [np.nan, np.nan, np.nan, np.nan]},
{
"a": [1, 2, np.nan, 2, None],
"b": [np.nan, np.nan, None, np.nan, np.nan],
},
{"a": [1, 2, 2, None, 1.1], "b": [1, 2.2, 3, None, 5]},
],
)
@pytest.mark.parametrize("nan_as_null", [True, False])
def test_dataframe_constructor_nan_as_null(data, nan_as_null):
actual = cudf.DataFrame(data, nan_as_null=nan_as_null)
if nan_as_null:
assert (
not (
actual.astype("float").replace(
cudf.Series([np.nan], nan_as_null=False), cudf.Series([-1])
)
== -1
)
.any()
.any()
)
else:
actual = actual.select_dtypes(exclude=["object"])
assert (actual.replace(np.nan, -1) == -1).any().any()
def test_dataframe_add_prefix():
cdf = cudf.DataFrame({"A": [1, 2, 3, 4], "B": [3, 4, 5, 6]})
pdf = cdf.to_pandas()
got = cdf.add_prefix("item_")
expected = pdf.add_prefix("item_")
assert_eq(got, expected)
def test_dataframe_add_suffix():
cdf = cudf.DataFrame({"A": [1, 2, 3, 4], "B": [3, 4, 5, 6]})
pdf = cdf.to_pandas()
got = cdf.add_suffix("_item")
expected = pdf.add_suffix("_item")
assert_eq(got, expected)
@pytest.mark.parametrize(
"data, gkey",
[
(
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": [5, 4, 6, 4, 8, 7, 4, 5, 2],
"val2": [4, 5, 6, 1, 2, 9, 8, 5, 1],
"val3": [4, 5, 6, 1, 2, 9, 8, 5, 1],
},
["id"],
),
(
{
"id": [0, 0, 0, 0, 1, 1, 1],
"a": [10.0, 3, 4, 2.0, -3.0, 9.0, 10.0],
"b": [10.0, 23, -4.0, 2, -3.0, 9, 19.0],
},
["id", "a"],
),
],
)
@pytest.mark.parametrize(
"min_periods",
[0, 3],
)
@pytest.mark.parametrize(
"ddof",
[1, 2],
)
def test_groupby_covariance(data, gkey, min_periods, ddof):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.groupby(gkey).cov(min_periods=min_periods, ddof=ddof)
# We observe a warning if there are too few observations to generate a
# non-singular covariance matrix _and_ there are enough that pandas will
# actually attempt to compute a value. Groups with fewer than min_periods
# inputs will be skipped altogether, so no warning occurs.
with expect_warning_if(
(pdf.groupby(gkey).count() < 2).all().all()
and (pdf.groupby(gkey).count() > min_periods).all().all(),
RuntimeWarning,
):
expected = pdf.groupby(gkey).cov(min_periods=min_periods, ddof=ddof)
assert_eq(expected, actual)
def test_groupby_covariance_multiindex_dataframe():
gdf = cudf.DataFrame(
{
"a": [1, 1, 2, 2],
"b": [1, 1, 2, 2],
"c": [2, 3, 4, 5],
"d": [6, 8, 9, 1],
}
).set_index(["a", "b"])
actual = gdf.groupby(level=["a", "b"]).cov()
expected = gdf.to_pandas().groupby(level=["a", "b"]).cov()
assert_eq(expected, actual)
def test_groupby_covariance_empty_columns():
gdf = cudf.DataFrame(columns=["id", "val1", "val2"])
pdf = gdf.to_pandas()
actual = gdf.groupby("id").cov()
expected = pdf.groupby("id").cov()
assert_eq(
expected,
actual,
check_dtype=False,
check_index_type=False,
)
def test_groupby_cov_invalid_column_types():
gdf = cudf.DataFrame(
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": ["v", "n", "k", "l", "m", "i", "y", "r", "w"],
"val2": ["d", "d", "d", "e", "e", "e", "f", "f", "f"],
},
)
with pytest.raises(
TypeError,
match="Covariance accepts only numerical column-pairs",
):
gdf.groupby("id").cov()
def test_groupby_cov_positive_semidefinite_matrix():
# Refer to discussions in PR #9889 re "pair-wise deletion" strategy
# being used in pandas to compute the covariance of a dataframe with
# rows containing missing values.
# Note: cuDF currently matches pandas behavior in that the covariance
# matrices are not guaranteed PSD (positive semi definite).
# https://github.com/rapidsai/cudf/pull/9889#discussion_r794158358
gdf = cudf.DataFrame(
[[1, 2], [None, 4], [5, None], [7, 8]], columns=["v0", "v1"]
)
actual = gdf.groupby(by=cudf.Series([1, 1, 1, 1])).cov()
actual.reset_index(drop=True, inplace=True)
pdf = gdf.to_pandas()
expected = pdf.groupby(by=pd.Series([1, 1, 1, 1])).cov()
expected.reset_index(drop=True, inplace=True)
assert_eq(
expected,
actual,
check_dtype=False,
)
@pytest_xfail
def test_groupby_cov_for_pandas_bug_case():
# Handles case: pandas bug using ddof with missing data.
# Filed an issue in Pandas on GH, link below:
# https://github.com/pandas-dev/pandas/issues/45814
pdf = pd.DataFrame(
{"id": ["a", "a"], "val1": [1.0, 2.0], "val2": [np.nan, np.nan]}
)
expected = pdf.groupby("id").cov(ddof=2)
gdf = cudf.from_pandas(pdf)
actual = gdf.groupby("id").cov(ddof=2)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
np.random.RandomState(seed=10).randint(-50, 50, (25, 30)),
np.random.RandomState(seed=10).random_sample((4, 4)),
np.array([1.123, 2.343, 5.890, 0.0]),
[True, False, True, False, False],
{"a": [1.123, 2.343, np.nan, np.nan], "b": [None, 3, 9.08, None]},
],
)
@pytest.mark.parametrize("periods", (-5, -1, 0, 1, 5))
def test_diff_numeric_dtypes(data, periods):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.diff(periods=periods, axis=0)
expected = pdf.diff(periods=periods, axis=0)
assert_eq(
expected,
actual,
check_dtype=False,
)
@pytest.mark.parametrize(
("precision", "scale"),
[(5, 2), (8, 5)],
)
@pytest.mark.parametrize(
"dtype",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype],
)
def test_diff_decimal_dtypes(precision, scale, dtype):
gdf = cudf.DataFrame(
np.random.default_rng(seed=42).uniform(10.5, 75.5, (10, 6)),
dtype=dtype(precision=precision, scale=scale),
)
pdf = gdf.to_pandas()
actual = gdf.diff()
expected = pdf.diff()
assert_eq(
expected,
actual,
check_dtype=False,
)
def test_diff_invalid_axis():
gdf = cudf.DataFrame(np.array([1.123, 2.343, 5.890, 0.0]))
with pytest.raises(NotImplementedError, match="Only axis=0 is supported."):
gdf.diff(periods=1, axis=1)
@pytest.mark.parametrize(
"data",
[
{
"int_col": [1, 2, 3, 4, 5],
"float_col": [1.0, 2.0, 3.0, 4.0, 5.0],
"string_col": ["a", "b", "c", "d", "e"],
},
["a", "b", "c", "d", "e"],
],
)
def test_diff_unsupported_dtypes(data):
gdf = cudf.DataFrame(data)
with pytest.raises(
TypeError,
match=r"unsupported operand type\(s\)",
):
gdf.diff()
def test_diff_many_dtypes():
pdf = pd.DataFrame(
{
"dates": pd.date_range("2020-01-01", "2020-01-06", freq="D"),
"bools": [True, True, True, False, True, True],
"floats": [1.0, 2.0, 3.5, np.nan, 5.0, -1.7],
"ints": [1, 2, 3, 3, 4, 5],
"nans_nulls": [np.nan, None, None, np.nan, np.nan, None],
}
)
gdf = cudf.from_pandas(pdf)
assert_eq(pdf.diff(), gdf.diff())
assert_eq(pdf.diff(periods=2), gdf.diff(periods=2))
def test_dataframe_assign_cp_np_array():
m, n = 5, 3
cp_ndarray = cupy.random.randn(m, n)
pdf = pd.DataFrame({f"f_{i}": range(m) for i in range(n)})
gdf = cudf.DataFrame({f"f_{i}": range(m) for i in range(n)})
pdf[[f"f_{i}" for i in range(n)]] = cupy.asnumpy(cp_ndarray)
gdf[[f"f_{i}" for i in range(n)]] = cp_ndarray
assert_eq(pdf, gdf)
@pytest.mark.parametrize(
"data",
[{"a": [1, 2, 3], "b": [1, 1, 0]}],
)
def test_dataframe_nunique(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.nunique()
expected = pdf.nunique()
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[{"key": [0, 1, 1, 0, 0, 1], "val": [1, 8, 3, 9, -3, 8]}],
)
def test_dataframe_nunique_index(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.index.nunique()
expected = pdf.index.nunique()
assert_eq(expected, actual)
def test_dataframe_rename_duplicate_column():
gdf = cudf.DataFrame({"a": [1, 2, 3], "b": [3, 4, 5]})
with pytest.raises(
ValueError, match="Duplicate column names are not allowed"
):
gdf.rename(columns={"a": "b"}, inplace=True)
@pytest_unmark_spilling
@pytest.mark.parametrize(
"data",
[
np.random.RandomState(seed=10).randint(-50, 50, (10, 10)),
np.random.RandomState(seed=10).random_sample((4, 4)),
np.array([1.123, 2.343, 5.890, 0.0]),
{"a": [1.123, 2.343, np.nan, np.nan], "b": [None, 3, 9.08, None]},
],
)
@pytest.mark.parametrize("periods", [-5, -2, 0, 2, 5])
@pytest.mark.parametrize("fill_method", ["ffill", "bfill", "pad", "backfill"])
def test_dataframe_pct_change(data, periods, fill_method):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
actual = gdf.pct_change(periods=periods, fill_method=fill_method)
expected = pdf.pct_change(periods=periods, fill_method=fill_method)
assert_eq(expected, actual)
def test_mean_timeseries():
gdf = cudf.datasets.timeseries()
pdf = gdf.to_pandas()
expected = pdf.mean(numeric_only=True)
actual = gdf.mean(numeric_only=True)
assert_eq(expected, actual)
with pytest.warns(FutureWarning):
expected = pdf.mean()
with pytest.warns(FutureWarning):
actual = gdf.mean()
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{
"a": [1, 2, 3, 4, 5],
"b": ["a", "b", "c", "d", "e"],
"c": [1.0, 2.0, 3.0, 4.0, 5.0],
}
],
)
def test_std_different_dtypes(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = pdf.std(numeric_only=True)
actual = gdf.std(numeric_only=True)
assert_eq(expected, actual)
with pytest.warns(FutureWarning):
expected = pdf.std()
with pytest.warns(FutureWarning):
actual = gdf.std()
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{
"id": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
"val1": ["v", "n", "k", "l", "m", "i", "y", "r", "w"],
"val2": ["d", "d", "d", "e", "e", "e", "f", "f", "f"],
}
],
)
def test_empty_numeric_only(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = pdf.prod(numeric_only=True)
actual = gdf.prod(numeric_only=True)
assert_eq(expected, actual)
@pytest.fixture(params=[0, 10], ids=["empty", "10"])
def df_eval(request):
N = request.param
if N == 0:
value = np.zeros(0, dtype="int")
return cudf.DataFrame(
{
"a": value,
"b": value,
"c": value,
"d": value,
}
)
int_max = 10
rng = cupy.random.default_rng(0)
return cudf.DataFrame(
{
"a": rng.integers(N, size=int_max),
"b": rng.integers(N, size=int_max),
"c": rng.integers(N, size=int_max),
"d": rng.integers(N, size=int_max),
}
)
# Note that for now expressions do not automatically handle casting, so inputs
# need to be cast appropriately
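# A hedged illustration of the point above (not executed by the test suite):
# evaluating a floating-point expression such as "sin(a)" against integer
# columns requires casting the frame first, which is what the `dtype`
# parameter of test_dataframe_eval below accomplishes, e.g.
#
#   df_eval.astype(float).eval("sin(a)")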
@pytest.mark.parametrize(
"expr, dtype",
[
("a", int),
("+a", int),
("a + b", int),
("a == b", int),
("a / b", float),
("a * b", int),
("a > b", int),
("a >= b", int),
("a > b > c", int),
("a > b < c", int),
("a & b", int),
("a & b | c", int),
("sin(a)", float),
("exp(sin(abs(a)))", float),
("sqrt(floor(a))", float),
("ceil(arctanh(a))", float),
("(a + b) - (c * d)", int),
("~a", int),
("(a > b) and (c > d)", int),
("(a > b) or (c > d)", int),
("not (a > b)", int),
("a + 1", int),
("a + 1.0", float),
("-a + 1", int),
("+a + 1", int),
("e = a + 1", int),
(
"""
e = log(cos(a)) + 1.0
f = abs(c) - exp(d)
""",
float,
),
("a_b_are_equal = (a == b)", int),
("a > b", str),
("a < '1'", str),
('a == "1"', str),
],
)
def test_dataframe_eval(df_eval, expr, dtype):
df_eval = df_eval.astype(dtype)
expect = df_eval.to_pandas().eval(expr)
got = df_eval.eval(expr)
# In the specific case where the evaluated expression is a unary function
# of a single column with no nesting, pandas will retain the name. This
# level of compatibility is out of scope for now.
assert_eq(expect, got, check_names=False)
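    # (The check below treats the expression as an assignment when it contains
    # a bare "=" that is not part of a comparison operator such as "==", ">=",
    # or "<="; only then is the in-place form of eval() exercised as well.)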
# Test inplace
if re.search("[^=><]=[^=]", expr) is not None:
pdf_eval = df_eval.to_pandas()
pdf_eval.eval(expr, inplace=True)
df_eval.eval(expr, inplace=True)
assert_eq(pdf_eval, df_eval)
@pytest.mark.parametrize(
"expr",
[
"""
e = a + b
a == b
""",
"a_b_are_equal = (a == b) = c",
],
)
def test_dataframe_eval_errors(df_eval, expr):
with pytest.raises(ValueError):
df_eval.eval(expr)
def test_dataframe_eval_misc():
df = cudf.DataFrame({"a": [1, 2, 3, None, 5]})
got = df.eval("isnull(a)")
assert_eq(got, cudf.Series.isnull(df["a"]), check_names=False)
df.eval("c = isnull(1)", inplace=True)
assert_eq(df["c"], cudf.Series([False] * len(df), name="c"))
@pytest.mark.parametrize(
"gdf,subset",
[
(
cudf.DataFrame(
{"num_legs": [2, 4, 4, 6], "num_wings": [2, 0, 0, 0]},
index=["falcon", "dog", "cat", "ant"],
),
["num_legs"],
),
(
cudf.DataFrame(
{
"first_name": ["John", "Anne", "John", "Beth"],
"middle_name": ["Smith", None, None, "Louise"],
}
),
["first_name"],
),
],
)
@pytest.mark.parametrize("sort", [True, False])
@pytest.mark.parametrize("ascending", [True, False])
@pytest.mark.parametrize("normalize", [True, False])
@pytest.mark.parametrize("dropna", [True, False])
@pytest.mark.parametrize("use_subset", [True, False])
def test_value_counts(
gdf,
subset,
sort,
ascending,
normalize,
dropna,
use_subset,
):
pdf = gdf.to_pandas()
got = gdf.value_counts(
subset=subset if (use_subset) else None,
sort=sort,
ascending=ascending,
normalize=normalize,
dropna=dropna,
)
expected = pdf.value_counts(
subset=subset if (use_subset) else None,
sort=sort,
ascending=ascending,
normalize=normalize,
dropna=dropna,
)
if not dropna:
# Convert the Pandas series to a cuDF one due to difference
# in the handling of NaNs between the two (<NA> in cuDF and
# NaN in Pandas) when dropna=False.
assert_eq(got.sort_index(), cudf.from_pandas(expected).sort_index())
else:
assert_eq(got.sort_index(), expected.sort_index())
with pytest.raises(KeyError):
gdf.value_counts(subset=["not_a_column_name"])
@pytest.fixture
def wildcard_df():
midx = cudf.MultiIndex.from_tuples(
[(c1, c2) for c1 in "abc" for c2 in "ab"]
)
df = cudf.DataFrame({f"{i}": [i] for i in range(6)})
df.columns = midx
return df
def test_multiindex_wildcard_selection_all(wildcard_df):
expect = wildcard_df.to_pandas().loc[:, (slice(None), "b")]
got = wildcard_df.loc[:, (slice(None), "b")]
assert_eq(expect, got)
@pytest_xfail(reason="Not yet properly supported.")
def test_multiindex_wildcard_selection_partial(wildcard_df):
expect = wildcard_df.to_pandas().loc[:, (slice("a", "b"), "b")]
got = wildcard_df.loc[:, (slice("a", "b"), "b")]
assert_eq(expect, got)
@pytest_xfail(reason="Not yet properly supported.")
def test_multiindex_wildcard_selection_three_level_all():
midx = cudf.MultiIndex.from_tuples(
[(c1, c2, c3) for c1 in "abcd" for c2 in "abc" for c3 in "ab"]
)
df = cudf.DataFrame({f"{i}": [i] for i in range(24)})
df.columns = midx
expect = df.to_pandas().loc[:, (slice("a", "c"), slice("a", "b"), "b")]
got = df.loc[:, (slice(None), "b")]
assert_eq(expect, got)
def test_dataframe_assign_scalar_to_empty_series():
expected = pd.DataFrame({"a": []})
actual = cudf.DataFrame({"a": []})
expected.a = 0
actual.a = 0
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{0: [1, 2, 3], 2: [10, 11, 23]},
{("a", "b"): [1, 2, 3], ("2",): [10, 11, 23]},
],
)
def test_non_string_column_name_to_arrow(data):
df = cudf.DataFrame(data)
expected = df.to_arrow()
actual = pa.Table.from_pandas(df.to_pandas())
assert expected.equals(actual)
def test_complex_types_from_arrow():
expected = pa.Table.from_arrays(
[
pa.array([1, 2, 3]),
pa.array([10, 20, 30]),
pa.array([{"a": 9}, {"b": 10}, {"c": 11}]),
pa.array([[{"a": 1}], [{"b": 2}], [{"c": 3}]]),
pa.array([10, 11, 12]).cast(pa.decimal128(21, 2)),
pa.array([{"a": 9}, {"b": 10, "c": {"g": 43}}, {"c": {"a": 10}}]),
],
names=["a", "b", "c", "d", "e", "f"],
)
df = cudf.DataFrame.from_arrow(expected)
actual = df.to_arrow()
assert expected.equals(actual)
@pytest.mark.parametrize(
"data",
[
{
"brand": ["Yum Yum", "Yum Yum", "Indomie", "Indomie", "Indomie"],
"style": ["cup", "cup", "cup", "pack", "pack"],
"rating": [4, 4, 3.5, 15, 5],
},
{
"brand": ["Indomie", "Yum Yum", "Indomie", "Indomie", "Indomie"],
"style": ["cup", "cup", "cup", "cup", "pack"],
"rating": [4, 4, 3.5, 4, 5],
},
],
)
@pytest.mark.parametrize(
"subset", [None, ["brand"], ["rating"], ["style", "rating"]]
)
@pytest.mark.parametrize("keep", ["first", "last", False])
def test_dataframe_duplicated(data, subset, keep):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = pdf.duplicated(subset=subset, keep=keep)
actual = gdf.duplicated(subset=subset, keep=keep)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{"col": [{"a": 1.1}, {"a": 2.1}, {"a": 10.0}, {"a": 11.2323}, None]},
{"a": [[{"b": 567}], None] * 10},
{"a": [decimal.Decimal(10), decimal.Decimal(20), None]},
],
)
def test_dataframe_transpose_complex_types(data):
gdf = cudf.DataFrame(data)
pdf = gdf.to_pandas()
expected = pdf.T
actual = gdf.T
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{"col": [{"a": 1.1}, {"a": 2.1}, {"a": 10.0}, {"a": 11.2323}, None]},
{"a": [[{"b": 567}], None] * 10},
{"a": [decimal.Decimal(10), decimal.Decimal(20), None]},
],
)
def test_dataframe_values_complex_types(data):
gdf = cudf.DataFrame(data)
with pytest.raises(NotImplementedError):
gdf.values
def test_dataframe_from_arrow_slice():
table = pa.Table.from_pandas(
pd.DataFrame.from_dict(
{"a": ["aa", "bb", "cc"] * 3, "b": [1, 2, 3] * 3}
)
)
table_slice = table.slice(3, 7)
expected = table_slice.to_pandas()
actual = cudf.DataFrame.from_arrow(table_slice)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data",
[
{"a": [1, 2, 3], "b": ["x", "y", "z"], "c": 4},
{"c": 4, "a": [1, 2, 3], "b": ["x", "y", "z"]},
{"a": [1, 2, 3], "c": 4},
],
)
def test_dataframe_init_from_scalar_and_lists(data):
actual = cudf.DataFrame(data)
expected = pd.DataFrame(data)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"data,index",
[
({"a": [1, 2, 3], "b": ["x", "y", "z", "z"], "c": 4}, None),
(
{
"a": [1, 2, 3],
"b": ["x", "y", "z"],
},
[10, 11],
),
(
{
"a": [1, 2, 3],
"b": ["x", "y", "z"],
},
[10, 11],
),
([[10, 11], [12, 13]], ["a", "b", "c"]),
],
)
def test_dataframe_init_length_error(data, index):
assert_exceptions_equal(
lfunc=pd.DataFrame,
rfunc=cudf.DataFrame,
lfunc_args_and_kwargs=(
[],
{"data": data, "index": index},
),
rfunc_args_and_kwargs=(
[],
{"data": data, "index": index},
),
)
def test_dataframe_binop_with_mixed_date_types():
df = pd.DataFrame(
np.random.rand(2, 2),
columns=pd.Index(["2000-01-03", "2000-01-04"], dtype="datetime64[ns]"),
)
ser = pd.Series(np.random.rand(3), index=[0, 1, 2])
gdf = cudf.from_pandas(df)
gser = cudf.from_pandas(ser)
expected = df - ser
got = gdf - gser
assert_eq(expected, got)
def test_dataframe_binop_with_mixed_string_types():
df1 = pd.DataFrame(np.random.rand(3, 3), columns=pd.Index([0, 1, 2]))
df2 = pd.DataFrame(
np.random.rand(6, 6),
columns=pd.Index([0, 1, 2, "VhDoHxRaqt", "X0NNHBIPfA", "5FbhPtS0D1"]),
)
gdf1 = cudf.from_pandas(df1)
gdf2 = cudf.from_pandas(df2)
expected = df2 + df1
got = gdf2 + gdf1
assert_eq(expected, got)
def test_dataframe_binop_and_where():
df = pd.DataFrame(np.random.rand(2, 2), columns=pd.Index([True, False]))
gdf = cudf.from_pandas(df)
expected = df > 1
got = gdf > 1
assert_eq(expected, got)
expected = df[df > 1]
got = gdf[gdf > 1]
assert_eq(expected, got)
def test_dataframe_binop_with_datetime_index():
df = pd.DataFrame(
np.random.rand(2, 2),
columns=pd.Index(["2000-01-03", "2000-01-04"], dtype="datetime64[ns]"),
)
ser = pd.Series(
np.random.rand(2),
index=pd.Index(
[
"2000-01-04",
"2000-01-03",
],
dtype="datetime64[ns]",
),
)
gdf = cudf.from_pandas(df)
gser = cudf.from_pandas(ser)
expected = df - ser
got = gdf - gser
assert_eq(expected, got)
@pytest.mark.parametrize(
"columns",
(
[],
["c", "a"],
["a", "d", "b", "e", "c"],
["a", "b", "c"],
pd.Index(["b", "a", "c"], name="custom_name"),
),
)
@pytest.mark.parametrize("index", (None, [4, 5, 6]))
def test_dataframe_dict_like_with_columns(columns, index):
data = {"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
expect = pd.DataFrame(data, columns=columns, index=index)
actual = cudf.DataFrame(data, columns=columns, index=index)
if index is None and len(columns) == 0:
# We make an empty range index, pandas makes an empty index
expect = expect.reset_index(drop=True)
assert_eq(expect, actual)
def test_dataframe_init_columns_named_multiindex():
np.random.seed(0)
data = np.random.randn(2, 2)
columns = cudf.MultiIndex.from_tuples(
[("A", "one"), ("A", "two")], names=["y", "z"]
)
gdf = cudf.DataFrame(data, columns=columns)
pdf = pd.DataFrame(data, columns=columns.to_pandas())
assert_eq(gdf, pdf)
def test_dataframe_init_columns_named_index():
np.random.seed(0)
data = np.random.randn(2, 2)
columns = pd.Index(["a", "b"], name="custom_name")
gdf = cudf.DataFrame(data, columns=columns)
pdf = pd.DataFrame(data, columns=columns)
assert_eq(gdf, pdf)
def test_dataframe_from_pandas_sparse():
pdf = pd.DataFrame(range(2), dtype=pd.SparseDtype(np.int64, 0))
with pytest.raises(NotImplementedError):
cudf.DataFrame(pdf)
def test_dataframe_constructor_unbounded_sequence():
class A:
def __getitem__(self, key):
return 1
with pytest.raises(TypeError):
cudf.DataFrame([A()])
with pytest.raises(TypeError):
cudf.DataFrame({"a": A()})
def test_dataframe_constructor_dataframe_list():
df = cudf.DataFrame(range(2))
with pytest.raises(ValueError):
cudf.DataFrame([df])
def test_dataframe_constructor_from_namedtuple():
Point1 = namedtuple("Point1", ["a", "b", "c"])
Point2 = namedtuple("Point1", ["x", "y"])
data = [Point1(1, 2, 3), Point2(4, 5)]
idx = ["a", "b"]
gdf = cudf.DataFrame(data, index=idx)
pdf = pd.DataFrame(data, index=idx)
assert_eq(gdf, pdf)
data = [Point2(4, 5), Point1(1, 2, 3)]
with pytest.raises(ValueError):
cudf.DataFrame(data, index=idx)
with pytest.raises(ValueError):
pd.DataFrame(data, index=idx)
@pytest.mark.parametrize(
"dtype", ["datetime64[ns]", "timedelta64[ns]", "int64", "float32"]
)
def test_dataframe_mixed_dtype_error(dtype):
pdf = pd.Series([1, 2, 3], dtype=dtype).to_frame().astype(object)
with pytest.raises(TypeError):
cudf.from_pandas(pdf)
@pytest.mark.parametrize(
"index_data,name",
[([10, 13], "a"), ([30, 40, 20], "b"), (["ef"], "c"), ([2, 3], "Z")],
)
def test_dataframe_reindex_with_index_names(index_data, name):
gdf = cudf.DataFrame(
{
"a": [10, 12, 13],
"b": [20, 30, 40],
"c": cudf.Series(["ab", "cd", "ef"], dtype="category"),
}
)
if name in gdf.columns:
gdf = gdf.set_index(name)
pdf = gdf.to_pandas()
gidx = cudf.Index(index_data, name=name)
actual = gdf.reindex(gidx)
expected = pdf.reindex(gidx.to_pandas())
assert_eq(actual, expected)
actual = gdf.reindex(index_data)
expected = pdf.reindex(index_data)
assert_eq(actual, expected)
@pytest.mark.parametrize("attr", ["nlargest", "nsmallest"])
def test_dataframe_nlargest_nsmallest_str_error(attr):
gdf = cudf.DataFrame({"a": [1, 2, 3, 4], "b": ["a", "b", "c", "d"]})
pdf = gdf.to_pandas()
assert_exceptions_equal(
getattr(gdf, attr),
getattr(pdf, attr),
([], {"n": 1, "columns": ["a", "b"]}),
([], {"n": 1, "columns": ["a", "b"]}),
)
def test_series_data_no_name_with_columns():
gdf = cudf.DataFrame(cudf.Series([1]), columns=[1])
pdf = pd.DataFrame(pd.Series([1]), columns=[1])
assert_eq(gdf, pdf)
def test_series_data_no_name_with_columns_more_than_one_raises():
with pytest.raises(ValueError):
cudf.DataFrame(cudf.Series([1]), columns=[1, 2])
with pytest.raises(ValueError):
pd.DataFrame(pd.Series([1]), columns=[1, 2])
def test_series_data_with_name_with_columns_matching():
gdf = cudf.DataFrame(cudf.Series([1], name=1), columns=[1])
pdf = pd.DataFrame(pd.Series([1], name=1), columns=[1])
assert_eq(gdf, pdf)
@pytest.mark.xfail(
version.parse(pd.__version__) < version.parse("2.0"),
reason="pandas returns Index[object] instead of RangeIndex",
)
def test_series_data_with_name_with_columns_not_matching():
gdf = cudf.DataFrame(cudf.Series([1], name=2), columns=[1])
pdf = pd.DataFrame(pd.Series([1], name=2), columns=[1])
assert_eq(gdf, pdf)
def test_series_data_with_name_with_columns_matching_align():
gdf = cudf.DataFrame(cudf.Series([1], name=2), columns=[1, 2])
pdf = pd.DataFrame(pd.Series([1], name=2), columns=[1, 2])
assert_eq(gdf, pdf)
@pytest.mark.parametrize("digits", [0, 1, 3, 4, 10])
def test_dataframe_round_builtin(digits):
pdf = pd.DataFrame(
{
"a": [1.2234242333234, 323432.3243423, np.nan],
"b": ["a", "b", "c"],
"c": pd.Series([34224, 324324, 324342], dtype="datetime64[ns]"),
"d": pd.Series([224.242, None, 2424.234324], dtype="category"),
"e": [
decimal.Decimal("342.3243234234242"),
decimal.Decimal("89.32432497687622"),
None,
],
}
)
gdf = cudf.from_pandas(pdf, nan_as_null=False)
expected = round(pdf, digits)
actual = round(gdf, digits)
assert_eq(expected, actual)
def test_dataframe_init_from_nested_dict():
ordered_dict = OrderedDict(
[
("one", OrderedDict([("col_a", "foo1"), ("col_b", "bar1")])),
("two", OrderedDict([("col_a", "foo2"), ("col_b", "bar2")])),
("three", OrderedDict([("col_a", "foo3"), ("col_b", "bar3")])),
]
)
pdf = pd.DataFrame(ordered_dict)
gdf = cudf.DataFrame(ordered_dict)
assert_eq(pdf, gdf)
regular_dict = {key: dict(value) for key, value in ordered_dict.items()}
pdf = pd.DataFrame(regular_dict)
gdf = cudf.DataFrame(regular_dict)
assert_eq(pdf, gdf)
def test_init_from_2_categoricalindex_series_diff_categories():
s1 = cudf.Series(
[39, 6, 4], index=cudf.CategoricalIndex(["female", "male", "unknown"])
)
s2 = cudf.Series(
[2, 152, 2, 242, 150],
index=cudf.CategoricalIndex(["f", "female", "m", "male", "unknown"]),
)
result = cudf.DataFrame([s1, s2])
expected = pd.DataFrame([s1.to_pandas(), s2.to_pandas()])
assert_eq(result, expected, check_dtype=False)
def test_data_frame_values_no_cols_but_index():
result = cudf.DataFrame(index=range(5)).values
expected = pd.DataFrame(index=range(5)).values
assert_eq(result, expected)
def test_dataframe_reduction_error():
gdf = cudf.DataFrame(
{
"a": cudf.Series([1, 2, 3], dtype="float"),
"d": cudf.Series([10, 20, 30], dtype="timedelta64[ns]"),
}
)
with pytest.raises(TypeError):
gdf.sum()
def test_dataframe_from_generator():
pdf = pd.DataFrame((i for i in range(5)))
gdf = cudf.DataFrame((i for i in range(5)))
assert_eq(pdf, gdf)
def test_dataframe_from_ndarray_dup_columns():
with pytest.raises(ValueError):
cudf.DataFrame(np.eye(2), columns=["A", "A"])
@pytest.mark.parametrize("name", ["a", 0, None, np.nan, cudf.NA])
@pytest.mark.parametrize("contains", ["a", 0, None, np.nan, cudf.NA])
@pytest.mark.parametrize("other_names", [[], ["b", "c"], [1, 2]])
def test_dataframe_contains(name, contains, other_names):
column_names = [name] + other_names
gdf = cudf.DataFrame({c: [0] for c in column_names})
pdf = pd.DataFrame({c: [0] for c in column_names})
assert_eq(gdf, pdf)
if contains is cudf.NA or name is cudf.NA:
expectation = contains is cudf.NA and name is cudf.NA
assert (contains in pdf) == expectation
assert (contains in gdf) == expectation
elif pd.api.types.is_float_dtype(gdf.columns.dtype):
# In some cases, the columns are converted to a Float64Index based on
# the other column names. That casts name values from None to np.nan.
expectation = contains is np.nan and (name is None or name is np.nan)
assert (contains in pdf) == expectation
assert (contains in gdf) == expectation
else:
expectation = contains == name or (
contains is np.nan and name is np.nan
)
assert (contains in pdf) == expectation
assert (contains in gdf) == expectation
assert (contains in pdf) == (contains in gdf)
def test_dataframe_series_dot():
pser = pd.Series(range(2))
gser = cudf.from_pandas(pser)
expected = pser @ pser
actual = gser @ gser
assert_eq(expected, actual)
pdf = pd.DataFrame([[1, 2], [3, 4]], columns=list("ab"))
gdf = cudf.from_pandas(pdf)
expected = pser @ pdf
actual = gser @ gdf
assert_eq(expected, actual)
assert_exceptions_equal(
lfunc=pdf.dot,
rfunc=gdf.dot,
lfunc_args_and_kwargs=([pser], {}),
rfunc_args_and_kwargs=([gser], {}),
)
assert_exceptions_equal(
lfunc=pdf.dot,
rfunc=gdf.dot,
lfunc_args_and_kwargs=([pdf], {}),
rfunc_args_and_kwargs=([gdf], {}),
)
pser = pd.Series(range(2), index=["a", "k"])
gser = cudf.from_pandas(pser)
pdf = pd.DataFrame([[1, 2], [3, 4]], columns=list("ab"), index=["a", "k"])
gdf = cudf.from_pandas(pdf)
expected = pser @ pdf
actual = gser @ gdf
assert_eq(expected, actual)
actual = gdf @ [2, 3]
expected = pdf @ [2, 3]
assert_eq(expected, actual)
actual = pser @ [12, 13]
expected = gser @ [12, 13]
assert_eq(expected, actual)
def test_dataframe_duplicate_index_reindex():
gdf = cudf.DataFrame({"a": [0, 1, 2, 3]}, index=[0, 0, 1, 1])
pdf = gdf.to_pandas()
assert_exceptions_equal(
gdf.reindex,
pdf.reindex,
lfunc_args_and_kwargs=([10, 11, 12, 13], {}),
rfunc_args_and_kwargs=([10, 11, 12, 13], {}),
)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_avro_reader_fastavro_integration.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import io
import pathlib
from typing import Optional
import fastavro
import numpy as np
import pandas as pd
import pytest
import cudf
from cudf.testing._utils import assert_eq
from cudf.testing.dataset_generator import rand_dataframe
def cudf_from_avro_util(schema: dict, records: list) -> cudf.DataFrame:
schema = [] if schema is None else fastavro.parse_schema(schema)
buffer = io.BytesIO()
fastavro.writer(buffer, schema, records)
buffer.seek(0)
return cudf.read_avro(buffer)
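# Minimal usage sketch for the helper above (illustrative only; the schema and
# values here are hypothetical and not used by any test):
#
#   schema = {
#       "type": "record",
#       "name": "example",
#       "fields": [{"name": "x", "type": "int"}],
#   }
#   df = cudf_from_avro_util(schema, [{"x": 1}, {"x": 2}])
#   # df is a cudf.DataFrame with a single int32 column named "x"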
avro_type_params = [
("boolean", "bool"),
("int", "int32"),
("long", "int64"),
("float", "float32"),
("double", "float64"),
("bytes", "str"),
("string", "str"),
]
@pytest.mark.parametrize("avro_type, expected_dtype", avro_type_params)
@pytest.mark.parametrize("namespace", [None, "root_ns"])
@pytest.mark.parametrize("nullable", [True, False])
def test_can_detect_dtype_from_avro_type(
avro_type, expected_dtype, namespace, nullable
):
avro_type = avro_type if not nullable else ["null", avro_type]
schema = fastavro.parse_schema(
{
"type": "record",
"name": "test",
"namespace": namespace,
"fields": [{"name": "prop", "type": avro_type}],
}
)
actual = cudf_from_avro_util(schema, [])
expected = cudf.DataFrame(
{"prop": cudf.Series(None, None, expected_dtype)}
)
assert_eq(expected, actual)
@pytest.mark.parametrize("avro_type, expected_dtype", avro_type_params)
@pytest.mark.parametrize("namespace", [None, "root_ns"])
@pytest.mark.parametrize("nullable", [True, False])
def test_can_detect_dtype_from_avro_type_nested(
avro_type, expected_dtype, namespace, nullable
):
avro_type = avro_type if not nullable else ["null", avro_type]
schema_leaf = {
"name": "leaf",
"type": "record",
"fields": [{"name": "prop3", "type": avro_type}],
}
schema_child = {
"name": "child",
"type": "record",
"fields": [{"name": "prop2", "type": schema_leaf}],
}
schema_root = {
"name": "root",
"type": "record",
"namespace": namespace,
"fields": [{"name": "prop1", "type": schema_child}],
}
actual = cudf_from_avro_util(schema_root, [])
col_name = "{ns}child.{ns}leaf.prop3".format(
ns="" if namespace is None else namespace + "."
)
expected = cudf.DataFrame(
{col_name: cudf.Series(None, None, expected_dtype)}
)
assert_eq(expected, actual)
@pytest.mark.parametrize(
"avro_type, cudf_type, avro_val, cudf_val",
[
("boolean", "bool", True, True),
("boolean", "bool", False, False),
("int", "int32", 1234, 1234),
("long", "int64", 1234, 1234),
("float", "float32", 12.34, 12.34),
("double", "float64", 12.34, 12.34),
("string", "str", "heyΟ΄", "heyΟ΄"),
# ("bytes", "str", "heyΟ΄", "heyΟ΄"),
],
)
def test_can_parse_single_value(avro_type, cudf_type, avro_val, cudf_val):
schema_root = {
"name": "root",
"type": "record",
"fields": [{"name": "prop", "type": ["null", avro_type]}],
}
records = [
{"prop": avro_val},
]
actual = cudf_from_avro_util(schema_root, records)
expected = cudf.DataFrame(
{"prop": cudf.Series(data=[cudf_val], dtype=cudf_type)}
)
assert_eq(expected, actual)
@pytest.mark.parametrize("avro_type, cudf_type", avro_type_params)
def test_can_parse_single_null(avro_type, cudf_type):
schema_root = {
"name": "root",
"type": "record",
"fields": [{"name": "prop", "type": ["null", avro_type]}],
}
records = [{"prop": None}]
actual = cudf_from_avro_util(schema_root, records)
expected = cudf.DataFrame(
{"prop": cudf.Series(data=[None], dtype=cudf_type)}
)
assert_eq(expected, actual)
@pytest.mark.parametrize("avro_type, cudf_type", avro_type_params)
def test_can_parse_no_data(avro_type, cudf_type):
schema_root = {
"name": "root",
"type": "record",
"fields": [{"name": "prop", "type": ["null", avro_type]}],
}
records = []
actual = cudf_from_avro_util(schema_root, records)
expected = cudf.DataFrame({"prop": cudf.Series(data=[], dtype=cudf_type)})
assert_eq(expected, actual)
@pytest.mark.xfail(
reason="cudf avro reader is unable to parse zero-field metadata."
)
@pytest.mark.parametrize("avro_type, cudf_type", avro_type_params)
def test_can_parse_no_fields(avro_type, cudf_type):
schema_root = {
"name": "root",
"type": "record",
"fields": [],
}
records = []
actual = cudf_from_avro_util(schema_root, records)
expected = cudf.DataFrame()
assert_eq(expected, actual)
def test_can_parse_no_schema():
schema_root = None
records = []
actual = cudf_from_avro_util(schema_root, records)
expected = cudf.DataFrame()
assert_eq(expected, actual)
@pytest.mark.parametrize("rows", [0, 1, 10, 1000])
@pytest.mark.parametrize("codec", ["null", "deflate", "snappy"])
def test_avro_compression(rows, codec):
schema = {
"name": "root",
"type": "record",
"fields": [
{"name": "0", "type": "int"},
{"name": "1", "type": "string"},
],
}
# N.B. rand_dataframe() is brutally slow for some reason. Switching to
# np.random() speeds things up by a factor of 10.
# See also: https://github.com/rapidsai/cudf/issues/13128
df = rand_dataframe(
[
{"dtype": "int32", "null_frequency": 0, "cardinality": 1000},
{
"dtype": "str",
"null_frequency": 0,
"cardinality": 100,
"max_string_length": 10,
},
],
rows,
)
expected_df = cudf.DataFrame.from_arrow(df)
records = df.to_pandas().to_dict(orient="records")
buffer = io.BytesIO()
fastavro.writer(buffer, schema, records, codec=codec)
buffer.seek(0)
got_df = cudf.read_avro(buffer)
assert_eq(expected_df, got_df)
avro_logical_type_params = [
# (avro logical type, avro primitive type, cudf expected dtype)
("date", "int", "datetime64[s]"),
]
@pytest.mark.parametrize(
"logical_type, primitive_type, expected_dtype", avro_logical_type_params
)
@pytest.mark.parametrize("namespace", [None, "root_ns"])
@pytest.mark.parametrize("nullable", [True, False])
@pytest.mark.parametrize("prepend_null", [True, False])
def test_can_detect_dtypes_from_avro_logical_type(
logical_type,
primitive_type,
expected_dtype,
namespace,
nullable,
prepend_null,
):
avro_type = [{"logicalType": logical_type, "type": primitive_type}]
if nullable:
if prepend_null:
avro_type.insert(0, "null")
else:
avro_type.append("null")
schema = fastavro.parse_schema(
{
"type": "record",
"name": "test",
"namespace": namespace,
"fields": [{"name": "prop", "type": avro_type}],
}
)
actual = cudf_from_avro_util(schema, [])
expected = cudf.DataFrame(
{"prop": cudf.Series(None, None, expected_dtype)}
)
assert_eq(expected, actual)
def get_days_from_epoch(date: Optional[datetime.date]) -> Optional[int]:
if date is None:
return None
return (date - datetime.date(1970, 1, 1)).days
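# For reference (hedged example): get_days_from_epoch(datetime.date(1970, 1, 2))
# returns 1 and get_days_from_epoch(None) returns None, matching the avro
# "date" logical type, which stores the number of days since the Unix epoch.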
@pytest.mark.parametrize("namespace", [None, "root_ns"])
@pytest.mark.parametrize("nullable", [True, False])
@pytest.mark.parametrize("prepend_null", [True, False])
def test_can_parse_avro_date_logical_type(namespace, nullable, prepend_null):
avro_type = {"logicalType": "date", "type": "int"}
if nullable:
if prepend_null:
avro_type = ["null", avro_type]
else:
avro_type = [avro_type, "null"]
schema_dict = {
"type": "record",
"name": "test",
"fields": [
{"name": "o_date", "type": avro_type},
],
}
if namespace:
schema_dict["namespace"] = namespace
schema = fastavro.parse_schema(schema_dict)
# Insert some None values in no particular order. These will get converted
# into avro "nulls" by the fastavro writer (or filtered out if we're not
# nullable). The first and last dates are epoch min/max values, the rest
# are arbitrarily chosen.
dates = [
None,
datetime.date(1970, 1, 1),
datetime.date(1970, 1, 2),
datetime.date(1981, 10, 25),
None,
None,
datetime.date(2012, 5, 18),
None,
datetime.date(2019, 9, 3),
None,
datetime.date(9999, 12, 31),
]
if not nullable:
dates = [date for date in dates if date is not None]
days_from_epoch = [get_days_from_epoch(date) for date in dates]
records = [{"o_date": day} for day in days_from_epoch]
actual = cudf_from_avro_util(schema, records)
expected = cudf.DataFrame(
{"o_date": cudf.Series(dates, dtype="datetime64[s]")}
)
assert_eq(expected, actual)
def test_alltypes_plain_avro():
# During development of the logical type support, the Java avro tests were
# triggering CUDA kernel crashes (null pointer dereferences). We were able
# to replicate the behavior in a C++ test case, and then subsequently came
# up with this Python unit test to also trigger the problematic code path.
#
# So, unlike the other tests, this test is inherently reactive in nature,
# added simply to verify we fixed the problematic code path that was
# causing CUDA kernel crashes.
#
# See https://github.com/rapidsai/cudf/pull/12788#issuecomment-1468822875
# for more information.
relpath = "../../../../java/src/test/resources/alltypes_plain.avro"
path = pathlib.Path(__file__).parent.joinpath(relpath).resolve()
assert path.is_file(), path
path = str(path)
with open(path, "rb") as f:
reader = fastavro.reader(f)
records = [record for record in reader]
# For reference:
#
# >>> from pprint import pprint
# >>> pprint(reader.writer_schema)
# {'fields': [{'name': 'id', 'type': ['int', 'null']},
# {'name': 'bool_col', 'type': ['boolean', 'null']},
# {'name': 'tinyint_col', 'type': ['int', 'null']},
# {'name': 'smallint_col', 'type': ['int', 'null']},
# {'name': 'int_col', 'type': ['int', 'null']},
# {'name': 'bigint_col', 'type': ['long', 'null']},
# {'name': 'float_col', 'type': ['float', 'null']},
# {'name': 'double_col', 'type': ['double', 'null']},
# {'name': 'date_string_col', 'type': ['bytes', 'null']},
# {'name': 'string_col', 'type': ['bytes', 'null']},
# {'name': 'timestamp_col',
# 'type': [{'logicalType': 'timestamp-micros',
# 'type': 'long'},
# 'null']}],
# 'name': 'topLevelRecord',
# 'type': 'record'}
#
# >>> pprint(records[0])
# {'bigint_col': 0,
# 'bool_col': True,
# 'date_string_col': b'03/01/09',
# 'double_col': 0.0,
# 'float_col': 0.0,
# 'id': 4,
# 'int_col': 0,
# 'smallint_col': 0,
# 'string_col': b'0',
# 'timestamp_col': datetime.datetime(2009, 3, 1, 0, 0,
# tzinfo=datetime.timezone.utc),
# 'tinyint_col': 0}
# Nothing particularly special about these columns, other than them being
# the ones that @davidwendt used to coerce the crash.
columns = ["bool_col", "int_col", "timestamp_col"]
# This next line would trigger the fatal CUDA kernel crash.
actual = cudf.read_avro(path, columns=columns)
# If we get here, we haven't crashed, obviously. Verify the returned data
# frame meets our expectations. We need to fiddle with the dtypes of the
# expected data frame in order to correctly match the schema definition and
# our corresponding read_avro()-returned data frame.
data = [{column: row[column] for column in columns} for row in records]
# discard timezone information as we don't support it:
expected = pd.DataFrame(data)
expected["timestamp_col"].dt.tz_localize(None)
# The fastavro.reader supports the `'logicalType': 'timestamp-micros'` used
# by the 'timestamp_col' column, which is converted into Python
# datetime.datetime() objects (see output of pprint(records[0]) above).
# As we don't support that logical type yet in cudf, we need to convert to
# int64, then divide by 1000 to convert from nanoseconds to microseconds.
timestamps = expected["timestamp_col"].astype("int64")
timestamps //= 1000
expected["timestamp_col"] = timestamps
# Furthermore, we need to force the 'int_col' into an int32, per the schema
# definition. (It ends up as an int64 due to cudf.DataFrame() defaulting
# all Python int values to int64 sans a dtype= override.)
expected["int_col"] = expected["int_col"].astype("int32")
assert_eq(actual, expected)
def multiblock_testname_ids(param):
(total_rows, num_rows, skip_rows, sync_interval) = param
return f"{total_rows=}-{num_rows=}-{skip_rows=}-{sync_interval=}"
# The following values are used to test various boundary conditions associated
# with multiblock avro files. Each tuple consists of four values: total number
# of rows to generate, number of rows to limit the result set to, number of
# rows to skip, and number of rows per block. If the total number of rows and
# number of rows (i.e. first and second tuple elements) are equal, it means
# that all rows will be returned. If the rows per block also equals the first
# two numbers, it means that a single block will be used.
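# As a concrete reading of one entry below, (100, 20, 50, 7) means: generate
# 100 rows, skip the first 50, return 20 of the remaining rows, and write the
# avro file with roughly 7 rows per block, so the requested window straddles
# several block boundaries.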
@pytest.fixture(
ids=multiblock_testname_ids,
params=[
(10, 10, 9, 9),
(10, 10, 9, 5),
(10, 10, 9, 3),
(10, 10, 9, 2),
(10, 10, 9, 10),
(10, 10, 8, 2),
(10, 10, 5, 5),
(10, 10, 2, 9),
(10, 10, 2, 2),
(10, 10, 1, 9),
(10, 10, 1, 5),
(10, 10, 1, 2),
(10, 10, 1, 10),
(10, 10, 10, 9),
(10, 10, 10, 5),
(10, 10, 10, 2),
(10, 10, 10, 10),
(10, 10, 0, 9),
(10, 10, 0, 5),
(10, 10, 0, 2),
(10, 10, 0, 10),
(100, 100, 99, 10),
(100, 100, 90, 90),
(100, 100, 90, 89),
(100, 100, 90, 88),
(100, 100, 90, 87),
(100, 100, 90, 5),
(100, 100, 89, 90),
(100, 100, 87, 90),
(100, 100, 50, 7),
(100, 100, 50, 31),
(10, 1, 8, 9),
(100, 1, 99, 10),
(100, 1, 98, 10),
(100, 1, 97, 10),
(100, 3, 90, 87),
(100, 4, 90, 5),
(100, 2, 89, 90),
(100, 9, 87, 90),
(100, 20, 50, 7),
(100, 10, 50, 31),
(100, 20, 50, 31),
(100, 30, 50, 31),
(256, 256, 0, 256),
(256, 256, 0, 32),
(256, 256, 0, 31),
(256, 256, 0, 33),
(256, 256, 31, 32),
(256, 256, 32, 31),
(256, 256, 31, 33),
(512, 512, 0, 32),
(512, 512, 0, 31),
(512, 512, 0, 33),
(512, 512, 31, 32),
(512, 512, 32, 31),
(512, 512, 31, 33),
(1024, 1024, 0, 1),
(1024, 1024, 0, 3),
(1024, 1024, 0, 7),
(1024, 1024, 0, 8),
(1024, 1024, 0, 9),
(1024, 1024, 0, 15),
(1024, 1024, 0, 16),
(1024, 1024, 0, 17),
(1024, 1024, 0, 32),
(1024, 1024, 0, 31),
(1024, 1024, 0, 33),
(1024, 1024, 31, 32),
(1024, 1024, 32, 31),
(1024, 1024, 31, 33),
(16384, 16384, 0, 31),
(16384, 16384, 0, 32),
(16384, 16384, 0, 33),
(16384, 16384, 0, 16384),
],
)
def total_rows_and_num_rows_and_skip_rows_and_rows_per_block(request):
return request.param
# N.B. The float32 and float64 types are chosen specifically to exercise
# the only path in the avro reader GPU code that can process multiple
# rows in parallel (via warp-level parallelism). See the logic around
# the line `if (cur + min_row_size * rows_remaining == end)` in
# gpuDecodeAvroColumnData().
@pytest.mark.parametrize("dtype", ["str", "float32", "float64"])
@pytest.mark.parametrize(
"use_sync_interval",
[True, False],
ids=["use_sync_interval", "ignore_sync_interval"],
)
@pytest.mark.parametrize("codec", ["null", "deflate", "snappy"])
def test_avro_reader_multiblock(
dtype,
codec,
use_sync_interval,
total_rows_and_num_rows_and_skip_rows_and_rows_per_block,
):
(
total_rows,
num_rows,
skip_rows,
rows_per_block,
) = total_rows_and_num_rows_and_skip_rows_and_rows_per_block
assert total_rows >= num_rows
assert rows_per_block <= total_rows
limit_rows = num_rows != total_rows
if limit_rows:
assert total_rows >= num_rows + skip_rows
if dtype == "str":
avro_type = "string"
# Generate a list of strings, each of which is a 6-digit number, padded
# with leading zeros. This data set was very useful during development
# of the multiblock avro reader logic, as you get implicit feedback as
# to what may have gone wrong when the test fails, based on the
# expected vs actual values.
values = [f"{i:0>6}" for i in range(0, total_rows)]
# Strings are encoded in avro with a zigzag-encoded length prefix, and
# then the string data. As all of our strings are fixed at length 6,
# we only need one byte to encode the length prefix (0xc). Thus, our
# bytes per row is 6 + 1 = 7.
bytes_per_row = len(values[0]) + 1
assert bytes_per_row == 7, bytes_per_row
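        # (Zigzag encoding maps a signed integer n to an unsigned value via
        # (n << 1) ^ (n >> 63); for the length 6 that gives 12 == 0xc, which
        # fits in a single varint byte, hence the "+ 1" above.)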
else:
assert dtype in ("float32", "float64")
avro_type = "float" if dtype == "float32" else "double"
# We don't use rand_dataframe() here, because it increases the
# execution time of each test by a factor of 10 or more (it appears
# to use a very costly approach to generating random data).
# See also: https://github.com/rapidsai/cudf/issues/13128
values = np.random.rand(total_rows).astype(dtype)
bytes_per_row = values.dtype.itemsize
# The sync_interval is the number of bytes between sync blocks. We know
# how many bytes we need per row, so we can calculate the number of bytes
# per block by multiplying the number of rows per block by the bytes per
# row. This is the sync interval.
total_bytes_per_block = rows_per_block * bytes_per_row
sync_interval = total_bytes_per_block
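    # (Worked example: with the 6-character strings above, bytes_per_row is 7,
    # so rows_per_block == 9 yields a sync_interval of 63 bytes.)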
source_df = cudf.DataFrame({"0": pd.Series(values)})
if limit_rows:
expected_df = source_df[skip_rows : skip_rows + num_rows].reset_index(
drop=True
)
else:
expected_df = source_df[skip_rows:].reset_index(drop=True)
records = source_df.to_pandas().to_dict(orient="records")
schema = {
"name": "root",
"type": "record",
"fields": [
{"name": "0", "type": avro_type},
],
}
if use_sync_interval:
kwds = {"sync_interval": sync_interval}
else:
kwds = {}
kwds["codec"] = codec
buffer = io.BytesIO()
fastavro.writer(buffer, schema, records, **kwds)
buffer.seek(0)
if not limit_rows:
# Explicitly set num_rows to None if we want to read all rows. This
# ensures we exercise the logic behind a read_avro() call where the
# caller doesn't specify the number of rows desired (which will be the
# most common use case).
num_rows = None
actual_df = cudf.read_avro(buffer, skiprows=skip_rows, num_rows=num_rows)
assert_eq(expected_df, actual_df)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_no_cuinit.py
|
# Copyright (c) 2023, NVIDIA CORPORATION.
import os
import subprocess
import sys
from shutil import which
import pytest
GDB_COMMANDS = """
set confirm off
set breakpoint pending on
break cuInit
run
exit
"""
@pytest.fixture(scope="module")
def cuda_gdb(request):
gdb = which("cuda-gdb")
if gdb is None:
request.applymarker(
pytest.mark.xfail(reason="No cuda-gdb found, can't detect cuInit"),
)
return gdb
else:
output = subprocess.run(
[gdb, "--version"], capture_output=True, text=True, cwd="/"
)
if output.returncode != 0:
request.applymarker(
pytest.mark.xfail(
reason=(
"cuda-gdb not working on this platform, "
f"can't detect cuInit: {output.stderr}"
)
),
)
return gdb
def test_cudf_import_no_cuinit(cuda_gdb):
# When RAPIDS_NO_INITIALIZE is set, importing cudf should _not_
# create a CUDA context (i.e. cuInit should not be called).
    # Intercepting the call to cuInit programmatically is tricky since
    # it can be resolved from dynamic libraries by cuda-python/numba/cupy
    # in many different ways (see discussion at
    # https://github.com/rapidsai/cudf/pull/12361, which does this but
    # needs to provide hooks that override dlsym, cuGetProcAddress, and
    # cuInit).
    # Instead, we just run under cuda-gdb and check whether we hit a
    # breakpoint set on cuInit.
env = os.environ.copy()
env["RAPIDS_NO_INITIALIZE"] = "1"
output = subprocess.run(
[
cuda_gdb,
"-x",
"-",
"--args",
sys.executable,
"-c",
"import cudf",
],
input=GDB_COMMANDS,
env=env,
capture_output=True,
text=True,
cwd="/",
)
cuInit_called = output.stdout.find("in cuInit ()")
print("Command output:\n")
print("*** STDOUT ***")
print(output.stdout)
print("*** STDERR ***")
print(output.stderr)
assert output.returncode == 0
assert cuInit_called < 0
def test_cudf_create_series_cuinit(cuda_gdb):
# This tests that our gdb scripting correctly identifies cuInit
# when it definitely should have been called.
env = os.environ.copy()
env["RAPIDS_NO_INITIALIZE"] = "1"
output = subprocess.run(
[
cuda_gdb,
"-x",
"-",
"--args",
sys.executable,
"-c",
"import cudf; cudf.Series([1])",
],
input=GDB_COMMANDS,
env=env,
capture_output=True,
text=True,
cwd="/",
)
cuInit_called = output.stdout.find("in cuInit ()")
print("Command output:\n")
print("*** STDOUT ***")
print(output.stdout)
print("*** STDERR ***")
print(output.stderr)
assert output.returncode == 0
assert cuInit_called >= 0
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_scalar.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
import datetime
import re
from decimal import Decimal
import numpy as np
import pandas as pd
import pyarrow as pa
import pytest
import rmm
import cudf
from cudf._lib.copying import get_element
from cudf.testing._utils import (
ALL_TYPES,
DATETIME_TYPES,
NUMERIC_TYPES,
TIMEDELTA_TYPES,
)
@pytest.fixture(autouse=True)
def clear_scalar_cache():
cudf.Scalar._clear_instance_cache()
yield
TEST_DECIMAL_TYPES = [
cudf.Decimal64Dtype(1, 1),
cudf.Decimal64Dtype(4, 2),
cudf.Decimal64Dtype(4, -2),
cudf.Decimal32Dtype(3, 1),
cudf.Decimal128Dtype(28, 3),
]
SCALAR_VALUES = [
0,
-1,
42,
0.0,
1.0,
np.int8(0),
np.int8(1),
np.int8(-1),
np.iinfo(np.int8).min,
np.iinfo(np.int8).max,
np.int16(1),
np.iinfo(np.int16).min,
np.iinfo(np.int16).max,
np.int32(42),
np.int32(-42),
np.iinfo(np.int32).min,
np.iinfo(np.int32).max,
np.int64(42),
np.iinfo(np.int64).min,
np.iinfo(np.int64).max,
np.uint8(0),
np.uint8(1),
np.uint8(255),
np.iinfo(np.uint8).min,
np.iinfo(np.uint8).max,
np.uint16(1),
np.iinfo(np.uint16).min,
np.iinfo(np.uint16).max,
np.uint32(42),
np.uint32(4294967254),
np.iinfo(np.uint32).min,
np.iinfo(np.uint32).max,
np.uint64(42),
np.iinfo(np.uint64).min,
np.uint64(np.iinfo(np.uint64).max),
np.float32(1),
np.float32(-1),
np.finfo(np.float32).min,
np.finfo(np.float32).max,
np.float64(1),
np.float64(-1),
np.finfo(np.float64).min,
np.finfo(np.float64).max,
np.float32("NaN"),
np.float64("NaN"),
np.datetime64(0, "s"),
np.datetime64(1, "s"),
np.datetime64(-1, "s"),
np.datetime64(42, "s"),
np.datetime64(np.iinfo(np.int64).max, "s"),
np.datetime64(np.iinfo(np.int64).min + 1, "s"),
np.datetime64(42, "ms"),
np.datetime64(np.iinfo(np.int64).max, "ms"),
np.datetime64(np.iinfo(np.int64).min + 1, "ms"),
np.datetime64(42, "us"),
np.datetime64(np.iinfo(np.int64).max, "us"),
np.datetime64(np.iinfo(np.int64).min + 1, "us"),
np.datetime64(42, "ns"),
np.datetime64(np.iinfo(np.int64).max, "ns"),
np.datetime64(np.iinfo(np.int64).min + 1, "ns"),
np.timedelta64(0, "s"),
np.timedelta64(1, "s"),
np.timedelta64(-1, "s"),
np.timedelta64(42, "s"),
np.timedelta64(np.iinfo(np.int64).max, "s"),
np.timedelta64(np.iinfo(np.int64).min + 1, "s"),
np.timedelta64(42, "ms"),
np.timedelta64(np.iinfo(np.int64).max, "ms"),
np.timedelta64(np.iinfo(np.int64).min + 1, "ms"),
np.timedelta64(42, "us"),
np.timedelta64(np.iinfo(np.int64).max, "us"),
np.timedelta64(np.iinfo(np.int64).min + 1, "us"),
np.timedelta64(42, "ns"),
np.timedelta64(np.iinfo(np.int64).max, "ns"),
np.timedelta64(np.iinfo(np.int64).min + 1, "ns"),
"",
"one",
"1",
True,
False,
np.bool_(True),
np.bool_(False),
np.str_("asdf"),
np.object_("asdf"),
]
DECIMAL_VALUES = [
Decimal("100"),
Decimal("0.0042"),
Decimal("1.0042"),
]
@pytest.mark.parametrize("value", SCALAR_VALUES + DECIMAL_VALUES)
def test_scalar_host_initialization(value):
s = cudf.Scalar(value)
np.testing.assert_equal(s.value, value)
assert s.is_valid() is True
assert s._is_host_value_current
assert not s._is_device_value_current
@pytest.mark.parametrize("value", SCALAR_VALUES)
def test_scalar_device_initialization(value):
column = cudf.Series([value], nan_as_null=False)._column
dev_slr = get_element(column, 0)
s = cudf.Scalar.from_device_scalar(dev_slr)
assert s._is_device_value_current
assert not s._is_host_value_current
assert s.value == value or np.isnan(s.value) and np.isnan(value)
assert s._is_device_value_current
assert s._is_host_value_current
@pytest.mark.parametrize("value", DECIMAL_VALUES)
@pytest.mark.parametrize(
"decimal_type",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype, cudf.Decimal128Dtype],
)
def test_scalar_device_initialization_decimal(value, decimal_type):
dtype = decimal_type._from_decimal(value)
column = cudf.Series([str(value)]).astype(dtype)._column
dev_slr = get_element(column, 0)
s = cudf.Scalar.from_device_scalar(dev_slr)
assert s._is_device_value_current
assert not s._is_host_value_current
assert s.value == value
assert s._is_device_value_current
assert s._is_host_value_current
@pytest.mark.parametrize("value", SCALAR_VALUES + DECIMAL_VALUES)
def test_scalar_roundtrip(value):
s = cudf.Scalar(value)
assert s._is_host_value_current
assert not s._is_device_value_current
# call this property to sync the scalar
s.device_value
assert s._is_host_value_current
assert s._is_device_value_current
# invalidate the host cache
s._host_value = None
s._host_dtype = None
assert not s._is_host_value_current
assert s._is_device_value_current
# this should trigger a host copy
assert s.value == value or np.isnan(s.value) and np.isnan(value)
@pytest.mark.parametrize(
"dtype",
NUMERIC_TYPES
+ DATETIME_TYPES
+ TIMEDELTA_TYPES
+ ["object"]
+ TEST_DECIMAL_TYPES,
)
def test_null_scalar(dtype):
s = cudf.Scalar(None, dtype=dtype)
if cudf.api.types.is_datetime64_dtype(
dtype
) or cudf.api.types.is_timedelta64_dtype(dtype):
assert s.value is cudf.NaT
else:
assert s.value is cudf.NA
assert s.dtype == (
cudf.dtype(dtype)
if not isinstance(dtype, cudf.core.dtypes.DecimalDtype)
else dtype
)
assert s.is_valid() is False
@pytest.mark.parametrize(
"value",
[
np.datetime64("NaT", "ns"),
np.datetime64("NaT", "us"),
np.datetime64("NaT", "ms"),
np.datetime64("NaT", "s"),
np.timedelta64("NaT", "ns"),
np.timedelta64("NaT", "us"),
np.timedelta64("NaT", "ms"),
np.timedelta64("NaT", "s"),
],
)
def test_nat_to_null_scalar_succeeds(value):
s = cudf.Scalar(value)
assert s.value is cudf.NaT
assert not s.is_valid()
assert s.dtype == value.dtype
@pytest.mark.parametrize(
"value", [None, np.datetime64("NaT"), np.timedelta64("NaT")]
)
def test_generic_null_scalar_construction_fails(value):
with pytest.raises(TypeError):
cudf.Scalar(value)
@pytest.mark.parametrize(
"dtype", NUMERIC_TYPES + DATETIME_TYPES + TIMEDELTA_TYPES + ["object"]
)
def test_scalar_dtype_and_validity(dtype):
s = cudf.Scalar(1, dtype=dtype)
assert s.dtype == cudf.dtype(dtype)
assert s.is_valid() is True
@pytest.mark.parametrize(
"slr,dtype,expect",
[
(1, cudf.Decimal64Dtype(1, 0), Decimal("1")),
(Decimal(1), cudf.Decimal64Dtype(1, 0), Decimal("1")),
(Decimal("1.1"), cudf.Decimal64Dtype(2, 1), Decimal("1.1")),
(Decimal("1.1"), cudf.Decimal64Dtype(4, 3), Decimal("1.100")),
(Decimal("41.123"), cudf.Decimal32Dtype(5, 3), Decimal("41.123")),
(
Decimal("41345435344353535344373628492731234.123"),
cudf.Decimal128Dtype(38, 3),
Decimal("41345435344353535344373628492731234.123"),
),
(Decimal("1.11"), cudf.Decimal64Dtype(2, 2), pa.lib.ArrowInvalid),
],
)
def test_scalar_dtype_and_validity_decimal(slr, dtype, expect):
if expect is pa.lib.ArrowInvalid:
with pytest.raises(expect):
cudf.Scalar(slr, dtype=dtype)
return
else:
result = cudf.Scalar(slr, dtype=dtype)
assert result.dtype == dtype
        assert result.is_valid()
@pytest.mark.parametrize(
"value",
[
datetime.timedelta(seconds=76),
datetime.timedelta(microseconds=7),
datetime.timedelta(minutes=47),
datetime.timedelta(hours=4427),
datetime.timedelta(weeks=7134),
pd.Timestamp(15133.5, unit="s"),
pd.Timestamp(15133.5, unit="D"),
pd.Timedelta(1513393355.5, unit="s"),
pd.Timedelta(34765, unit="D"),
],
)
def test_date_duration_scalars(value):
s = cudf.Scalar(value)
actual = s.value
if isinstance(value, datetime.datetime):
expected = np.datetime64(value)
elif isinstance(value, datetime.timedelta):
expected = np.timedelta64(value)
elif isinstance(value, pd.Timestamp):
expected = value.to_datetime64()
elif isinstance(value, pd.Timedelta):
expected = value.to_timedelta64()
np.testing.assert_equal(actual, expected)
assert s.is_valid() is True
def test_scalar_implicit_bool_conversion():
assert cudf.Scalar(True)
assert not cudf.Scalar(False)
assert cudf.Scalar(0) == cudf.Scalar(0)
assert cudf.Scalar(1) <= cudf.Scalar(2)
assert cudf.Scalar(1) <= 2
@pytest.mark.parametrize("value", [1, -1, 1.5, 0, "1.5", "1", True, False])
def test_scalar_implicit_float_conversion(value):
expect = float(value)
got = float(cudf.Scalar(value))
assert expect == got
assert type(expect) == type(got)
@pytest.mark.parametrize("value", [1, -1, 1.5, 0, "1", True, False])
def test_scalar_implicit_int_conversion(value):
expect = int(value)
got = int(cudf.Scalar(value))
assert expect == got
assert type(expect) == type(got)
@pytest.mark.parametrize("cls", [int, float, bool])
@pytest.mark.parametrize("dtype", sorted(set(ALL_TYPES) - {"category"}))
def test_scalar_invalid_implicit_conversion(cls, dtype):
try:
cls(
pd.NaT
if cudf.api.types.is_datetime64_dtype(dtype)
or cudf.api.types.is_timedelta64_dtype(dtype)
else pd.NA
)
except TypeError as e:
with pytest.raises(TypeError, match=re.escape(str(e))):
slr = cudf.Scalar(None, dtype=dtype)
cls(slr)
@pytest.mark.parametrize("value", SCALAR_VALUES + DECIMAL_VALUES)
@pytest.mark.parametrize(
"decimal_type",
[cudf.Decimal32Dtype, cudf.Decimal64Dtype, cudf.Decimal128Dtype],
)
def test_device_scalar_direct_construction(value, decimal_type):
value = cudf.utils.dtypes.to_cudf_compatible_scalar(value)
dtype = (
value.dtype
if not isinstance(value, Decimal)
else decimal_type._from_decimal(value)
)
s = cudf._lib.scalar.DeviceScalar(value, dtype)
assert s.value == value or np.isnan(s.value) and np.isnan(value)
if isinstance(
dtype, (cudf.Decimal64Dtype, cudf.Decimal128Dtype, cudf.Decimal32Dtype)
):
assert s.dtype.precision == dtype.precision
assert s.dtype.scale == dtype.scale
elif dtype.char == "U":
assert s.dtype == "object"
else:
assert s.dtype == dtype
@pytest.mark.parametrize("value", SCALAR_VALUES + DECIMAL_VALUES)
def test_construct_from_scalar(value):
value = cudf.utils.dtypes.to_cudf_compatible_scalar(value)
x = cudf.Scalar(
value, value.dtype if not isinstance(value, Decimal) else None
)
y = cudf.Scalar(x)
assert x.value == y.value or np.isnan(x.value) and np.isnan(y.value)
    # check that syncing the copy to the device works and leaves its
    # state flags consistent:
    y.device_value
    assert y._is_host_value_current
    assert y._is_device_value_current
@pytest.mark.parametrize(
"data", ["20000101", "2000-01-01", "2000-01-01T00:00:00.000000000", "2000"]
)
@pytest.mark.parametrize("dtype", DATETIME_TYPES)
def test_datetime_scalar_from_string(data, dtype):
slr = cudf.Scalar(data, dtype)
expected = np.datetime64(datetime.datetime(2000, 1, 1)).astype(dtype)
assert expected == slr.value
def test_scalar_cache():
s = cudf.Scalar(1)
s2 = cudf.Scalar(1)
assert s is s2
def test_scalar_cache_rmm_hook():
# test that reinitializing rmm clears the cuDF scalar cache, as we
# register a hook with RMM that does that on reinitialization
s = cudf.Scalar(1)
s2 = cudf.Scalar(1)
assert s is s2
rmm.reinitialize()
s3 = cudf.Scalar(1)
assert s3 is not s
def test_default_integer_bitwidth_scalar(default_integer_bitwidth):
    # Test that integer scalars default to 32 bits under user options.
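    # (The default_integer_bitwidth fixture is assumed to configure the
    # corresponding cudf option, e.g. via something like
    # cudf.set_option("default_integer_bitwidth", 32); its definition is
    # in conftest.py and not shown here.)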
slr = cudf.Scalar(128)
assert slr.dtype == np.dtype(f"i{default_integer_bitwidth//8}")
def test_default_float_bitwidth_scalar(default_float_bitwidth):
    # Test that float scalars default to 32 bits under user options.
slr = cudf.Scalar(128.0)
assert slr.dtype == np.dtype(f"f{default_float_bitwidth//8}")
def test_scalar_numpy_casting():
# binop should upcast to wider type
s1 = cudf.Scalar(1, dtype=np.int32)
s2 = np.int64(2)
assert s1 < s2
def test_construct_timezone_scalar_error():
pd_scalar = pd.Timestamp("1970-01-01 00:00:00.000000001", tz="utc")
with pytest.raises(NotImplementedError):
cudf.utils.dtypes.to_cudf_compatible_scalar(pd_scalar)
date_scalar = datetime.datetime.now(datetime.timezone.utc)
with pytest.raises(NotImplementedError):
cudf.utils.dtypes.to_cudf_compatible_scalar(date_scalar)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_interval.py
|
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
import cudf
from cudf.testing._utils import assert_eq
@pytest.mark.parametrize(
"data1, data2",
[(1, 2), (1.0, 2.0), (3, 4.0)],
)
@pytest.mark.parametrize("data3, data4", [(6, 10), (5.0, 9.0), (2, 6.0)])
@pytest.mark.parametrize("closed", ["left", "right", "both", "neither"])
def test_create_interval_series(data1, data2, data3, data4, closed):
expect = pd.Series(pd.Interval(data1, data2, closed), dtype="interval")
got = cudf.Series(pd.Interval(data1, data2, closed), dtype="interval")
assert_eq(expect, got)
expect_two = pd.Series(
[pd.Interval(data1, data2, closed), pd.Interval(data3, data4, closed)],
dtype="interval",
)
got_two = cudf.Series(
[pd.Interval(data1, data2, closed), pd.Interval(data3, data4, closed)],
dtype="interval",
)
assert_eq(expect_two, got_two)
expect_three = pd.Series(
[
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
dtype="interval",
)
got_three = cudf.Series(
[
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
dtype="interval",
)
assert_eq(expect_three, got_three)
@pytest.mark.parametrize(
"data1, data2",
[(1, 2), (1.0, 2.0), (3, 4.0)],
)
@pytest.mark.parametrize("data3, data4", [(6, 10), (5.0, 9.0), (2, 6.0)])
@pytest.mark.parametrize("closed", ["left", "right", "both", "neither"])
def test_create_interval_df(data1, data2, data3, data4, closed):
    # Constructing a DataFrame from an Interval (in both pandas and cudf)
    # only works when the interval is wrapped in a list.
expect = pd.DataFrame(
[pd.Interval(data1, data2, closed)], dtype="interval"
)
got = cudf.DataFrame([pd.Interval(data1, data2, closed)], dtype="interval")
assert_eq(expect, got)
expect_two = pd.DataFrame(
{
"a": [
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
"b": [
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
},
dtype="interval",
)
got_two = cudf.DataFrame(
{
"a": [
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
"b": [
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
},
dtype="interval",
)
assert_eq(expect_two, got_two)
expect_three = pd.DataFrame(
{
"a": [
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
"b": [
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
"c": [
pd.Interval(data1, data2, closed),
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
},
dtype="interval",
)
got_three = cudf.DataFrame(
{
"a": [
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
],
"b": [
pd.Interval(data3, data4, closed),
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
"c": [
pd.Interval(data1, data2, closed),
pd.Interval(data1, data2, closed),
pd.Interval(data3, data4, closed),
],
},
dtype="interval",
)
assert_eq(expect_three, got_three)
def test_create_interval_index_from_list():
interval_list = [
np.nan,
pd.Interval(2.0, 3.0, closed="right"),
pd.Interval(3.0, 4.0, closed="right"),
]
expected = pd.Index(interval_list)
actual = cudf.Index(interval_list)
assert_eq(expected, actual)
def test_interval_index_unique():
interval_list = [
np.nan,
pd.Interval(2.0, 3.0, closed="right"),
pd.Interval(3.0, 4.0, closed="right"),
np.nan,
pd.Interval(3.0, 4.0, closed="right"),
pd.Interval(3.0, 4.0, closed="right"),
]
pi = pd.Index(interval_list)
gi = cudf.from_pandas(pi)
expected = pi.unique()
actual = gi.unique()
assert_eq(expected, actual)
@pytest.mark.parametrize("box", [pd.Series, pd.IntervalIndex])
@pytest.mark.parametrize("tz", ["US/Eastern", None])
def test_interval_with_datetime(tz, box):
dti = pd.date_range(
start=pd.Timestamp("20180101", tz=tz),
end=pd.Timestamp("20181231", tz=tz),
freq="M",
)
pobj = box(pd.IntervalIndex.from_breaks(dti))
if tz is None:
gobj = cudf.from_pandas(pobj)
assert_eq(pobj, gobj)
else:
with pytest.raises(NotImplementedError):
cudf.from_pandas(pobj)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_numerical.py
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
import numpy as np
import pandas as pd
import pytest
import cudf
from cudf.core._compat import PANDAS_GE_150
from cudf.testing._utils import NUMERIC_TYPES, assert_eq
from cudf.utils.dtypes import np_dtypes_to_pandas_dtypes
def test_can_cast_safely_same_kind():
# 'i' -> 'i'
data = cudf.Series([1, 2, 3], dtype="int32")._column
to_dtype = np.dtype("int64")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 3], dtype="int64")._column
to_dtype = np.dtype("int32")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 2**31], dtype="int64")._column
assert not data.can_cast_safely(to_dtype)
# 'u' -> 'u'
data = cudf.Series([1, 2, 3], dtype="uint32")._column
to_dtype = np.dtype("uint64")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 3], dtype="uint64")._column
to_dtype = np.dtype("uint32")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 2**33], dtype="uint64")._column
assert not data.can_cast_safely(to_dtype)
# 'f' -> 'f'
data = cudf.Series([np.inf, 1.0], dtype="float64")._column
to_dtype = np.dtype("float32")
assert data.can_cast_safely(to_dtype)
data = cudf.Series(
[np.finfo("float32").max * 2, 1.0], dtype="float64"
)._column
to_dtype = np.dtype("float32")
assert not data.can_cast_safely(to_dtype)
def test_can_cast_safely_mixed_kind():
data = cudf.Series([1, 2, 3], dtype="int32")._column
to_dtype = np.dtype("float32")
assert data.can_cast_safely(to_dtype)
# too big to fit into f32 exactly
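    # (float32 has a 24-bit significand, so 2**24 + 1 is the smallest
    # positive integer that cannot be represented exactly in float32)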
data = cudf.Series([1, 2, 2**24 + 1], dtype="int32")._column
assert not data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 3], dtype="uint32")._column
to_dtype = np.dtype("float32")
assert data.can_cast_safely(to_dtype)
# too big to fit into f32 exactly
data = cudf.Series([1, 2, 2**24 + 1], dtype="uint32")._column
assert not data.can_cast_safely(to_dtype)
to_dtype = np.dtype("float64")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1.0, 2.0, 3.0], dtype="float32")._column
to_dtype = np.dtype("int32")
assert data.can_cast_safely(to_dtype)
# not integer float
data = cudf.Series([1.0, 2.0, 3.5], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
data = cudf.Series([10.0, 11.0, 2000.0], dtype="float64")._column
assert data.can_cast_safely(to_dtype)
# float out of int range
data = cudf.Series([1.0, 2.0, 1.0 * (2**31)], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
# negative signed integers casting to unsigned integers
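    # (e.g. -1 would wrap around to 4294967295 as a uint32, so the cast is
    # not value-preserving)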
data = cudf.Series([-1, 0, 1], dtype="int32")._column
to_dtype = np.dtype("uint32")
assert not data.can_cast_safely(to_dtype)
def test_to_pandas_nullable_integer():
gsr_not_null = cudf.Series([1, 2, 3])
gsr_has_null = cudf.Series([1, 2, None])
psr_not_null = pd.Series([1, 2, 3], dtype="int64")
psr_has_null = pd.Series([1, 2, None], dtype="Int64")
assert_eq(gsr_not_null.to_pandas(), psr_not_null)
assert_eq(gsr_has_null.to_pandas(nullable=True), psr_has_null)
def test_to_pandas_nullable_bool():
gsr_not_null = cudf.Series([True, False, True])
gsr_has_null = cudf.Series([True, False, None])
psr_not_null = pd.Series([True, False, True], dtype="bool")
psr_has_null = pd.Series([True, False, None], dtype="boolean")
assert_eq(gsr_not_null.to_pandas(), psr_not_null)
assert_eq(gsr_has_null.to_pandas(nullable=True), psr_has_null)
def test_can_cast_safely_has_nulls():
data = cudf.Series([1, 2, 3, None], dtype="float32")._column
to_dtype = np.dtype("int64")
assert data.can_cast_safely(to_dtype)
data = cudf.Series([1, 2, 3.1, None], dtype="float32")._column
assert not data.can_cast_safely(to_dtype)
@pytest.mark.parametrize(
"data",
[
[1, 2, 3],
(1.0, 2.0, 3.0),
[float("nan"), None],
np.array([1, 2.0, -3, float("nan")]),
pd.Series(["123", "2.0"]),
pd.Series(["1.0", "2.", "-.3", "1e6"]),
pd.Series(
["1", "2", "3"],
dtype=pd.CategoricalDtype(categories=["1", "2", "3"]),
),
pd.Series(
["1.0", "2.0", "3.0"],
dtype=pd.CategoricalDtype(categories=["1.0", "2.0", "3.0"]),
),
# Categories with nulls
pd.Series([1, 2, 3], dtype=pd.CategoricalDtype(categories=[1, 2])),
pd.Series(
[5.0, 6.0], dtype=pd.CategoricalDtype(categories=[5.0, 6.0])
),
pd.Series(
["2020-08-01 08:00:00", "1960-08-01 08:00:00"],
dtype=np.dtype("<M8[ns]"),
),
pd.Series(
[pd.Timedelta(days=1, seconds=1), pd.Timedelta("-3 seconds 4ms")],
dtype=np.dtype("<m8[ns]"),
),
[
"inf",
"-inf",
"+inf",
"infinity",
"-infinity",
"+infinity",
"inFInity",
],
],
)
def test_to_numeric_basic_1d(data):
expected = pd.to_numeric(data)
got = cudf.to_numeric(data)
assert_eq(expected, got)
@pytest.mark.parametrize(
"data",
[
[1, 2**11],
[1, 2**33],
[1, 2**63],
[np.iinfo(np.int64).max, np.iinfo(np.int64).min],
],
)
@pytest.mark.parametrize(
"downcast", ["integer", "signed", "unsigned", "float"]
)
def test_to_numeric_downcast_int(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.filterwarnings("ignore:invalid value encountered in cast")
@pytest.mark.parametrize(
"data",
[
[1.0, 2.0**11],
[-1.0, -(2.0**11)],
[1.0, 2.0**33],
[-1.0, -(2.0**33)],
[1.0, 2.0**65],
[-1.0, -(2.0**65)],
[1.0, float("inf")],
[1.0, float("-inf")],
[1.0, float("nan")],
[1.0, 2.0, 3.0, 4.0],
[1.0, 1.5, 2.6, 3.4],
],
)
@pytest.mark.parametrize(
"downcast", ["signed", "integer", "unsigned", "float"]
)
def test_to_numeric_downcast_float(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.filterwarnings("ignore:invalid value encountered in cast")
@pytest.mark.parametrize(
"data",
[
[1.0, 2.0**129],
[1.0, 2.0**257],
[1.0, 1.79e308],
[-1.0, -(2.0**129)],
[-1.0, -(2.0**257)],
[-1.0, -1.79e308],
],
)
@pytest.mark.parametrize("downcast", ["signed", "integer", "unsigned"])
def test_to_numeric_downcast_large_float(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.filterwarnings("ignore:overflow encountered in cast")
@pytest.mark.parametrize(
"data",
[
[1.0, 2.0**129],
[1.0, 2.0**257],
[1.0, 1.79e308],
[-1.0, -(2.0**129)],
[-1.0, -(2.0**257)],
[-1.0, -1.79e308],
],
)
@pytest.mark.parametrize("downcast", ["float"])
def test_to_numeric_downcast_large_float_pd_bug(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
if PANDAS_GE_150:
assert_eq(expected, got)
else:
# Pandas bug: https://github.com/pandas-dev/pandas/issues/19729
with pytest.raises(AssertionError, match="Series are different"):
assert_eq(expected, got)
@pytest.mark.parametrize(
"data",
[
["1", "2", "3"],
[str(np.iinfo(np.int64).max), str(np.iinfo(np.int64).min)],
],
)
@pytest.mark.parametrize(
"downcast", ["signed", "integer", "unsigned", "float"]
)
def test_to_numeric_downcast_string_int(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.parametrize(
"data",
[
[""], # pure empty strings
["10.0", "11.0", "2e3"],
["1.0", "2e3"],
["1", "10", "1.0", "2e3"], # int-float mixed
["1", "10", "1.0", "2e3", "2e+3", "2e-3"],
["1", "10", "1.0", "2e3", "", ""], # mixed empty strings
],
)
@pytest.mark.parametrize(
"downcast", ["signed", "integer", "unsigned", "float"]
)
def test_to_numeric_downcast_string_float(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expected = pd.to_numeric(ps, downcast=downcast)
if downcast in {"signed", "integer", "unsigned"}:
with pytest.warns(
UserWarning,
match="Downcasting from float to int "
"will be limited by float32 precision.",
):
got = cudf.to_numeric(gs, downcast=downcast)
else:
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.filterwarnings("ignore:overflow encountered in cast")
@pytest.mark.parametrize(
"data",
[
["2e128", "-2e128"],
[
"1.79769313486231e308",
"-1.79769313486231e308",
], # 2 digits relaxed from np.finfo(np.float64).min/max
],
)
@pytest.mark.parametrize(
"downcast", ["signed", "integer", "unsigned", "float"]
)
def test_to_numeric_downcast_string_large_float(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
if downcast == "float":
expected = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
if PANDAS_GE_150:
assert_eq(expected, got)
else:
# Pandas bug: https://github.com/pandas-dev/pandas/issues/19729
with pytest.raises(AssertionError, match="Series are different"):
assert_eq(expected, got)
else:
expected = pd.Series([np.inf, -np.inf])
with pytest.warns(
UserWarning,
match="Downcasting from float to int "
"will be limited by float32 precision.",
):
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expected, got)
@pytest.mark.parametrize(
"data",
[
pd.Series(["1", "a", "3"]),
pd.Series(["1", "a", "3", ""]), # mix of unconvertible and empty str
],
)
@pytest.mark.parametrize("errors", ["ignore", "raise", "coerce"])
def test_to_numeric_error(data, errors):
if errors == "raise":
with pytest.raises(
ValueError, match="Unable to convert some strings to numerics."
):
cudf.to_numeric(data, errors=errors)
else:
expect = pd.to_numeric(data, errors=errors)
got = cudf.to_numeric(data, errors=errors)
assert_eq(expect, got)
@pytest.mark.parametrize("dtype", NUMERIC_TYPES)
@pytest.mark.parametrize("input_obj", [[1, cudf.NA, 3]])
def test_series_construction_with_nulls(dtype, input_obj):
dtype = cudf.dtype(dtype)
# numpy case
expect = pd.Series(input_obj, dtype=np_dtypes_to_pandas_dtypes[dtype])
got = cudf.Series(input_obj, dtype=dtype).to_pandas(nullable=True)
assert_eq(expect, got)
# Test numpy array of objects case
np_data = [
dtype.type(v) if v is not cudf.NA else cudf.NA for v in input_obj
]
expect = pd.Series(np_data, dtype=np_dtypes_to_pandas_dtypes[dtype])
got = cudf.Series(np_data, dtype=dtype).to_pandas(nullable=True)
assert_eq(expect, got)
@pytest.mark.parametrize(
"data",
[[True, False, True]],
)
@pytest.mark.parametrize(
"downcast", ["signed", "integer", "unsigned", "float"]
)
def test_series_to_numeric_bool(data, downcast):
ps = pd.Series(data)
gs = cudf.from_pandas(ps)
expect = pd.to_numeric(ps, downcast=downcast)
got = cudf.to_numeric(gs, downcast=downcast)
assert_eq(expect, got)
| 0 |
rapidsai_public_repos/cudf/python/cudf/cudf
|
rapidsai_public_repos/cudf/python/cudf/cudf/tests/test_hash_vocab.py
|
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
import filecmp
import os
import warnings
import pytest
from cudf.utils.hash_vocab_utils import hash_vocab
@pytest.fixture(scope="module")
def datadir(datadir):
return os.path.join(
datadir, "subword_tokenizer_data", "bert_base_cased_sampled"
)
def test_correct_bert_base_vocab_hash(datadir, tmpdir):
# The vocabulary is drawn from bert-base-cased
vocab_path = os.path.join(datadir, "vocab.txt")
groundtruth_path = os.path.join(datadir, "vocab-hash.txt")
output_path = tmpdir.join("cudf-vocab-hash.txt")
with warnings.catch_warnings():
# See https://github.com/rapidsai/cudf/issues/12403
warnings.simplefilter(action="ignore", category=RuntimeWarning)
hash_vocab(vocab_path, output_path)
assert filecmp.cmp(output_path, groundtruth_path, shallow=False)
| 0 |