<SYSTEM_TASK:> Query the columns of a DataFrame with a boolean expression. <END_TASK> <USER_TASK:> Description: def query(self, expr, inplace=False, **kwargs): """ Query the columns of a DataFrame with a boolean expression. Parameters ---------- expr : str The query string to evaluate. You can refer to variables in the environment by prefixing them with an '@' character like ``@a + b``. .. versionadded:: 0.25.0 You can refer to column names that contain spaces by surrounding them in backticks. For example, if one of your columns is called ``a a`` and you want to sum it with ``b``, your query should be ```a a` + b``. inplace : bool Whether the query should modify the data in place or return a modified copy. **kwargs See the documentation for :func:`eval` for complete details on the keyword arguments accepted by :meth:`DataFrame.query`. .. versionadded:: 0.18.0 Returns ------- DataFrame DataFrame resulting from the provided query expression. See Also -------- eval : Evaluate a string describing operations on DataFrame columns. DataFrame.eval : Evaluate a string describing operations on DataFrame columns. Notes ----- The result of the evaluation of this expression is first passed to :attr:`DataFrame.loc` and if that fails because of a multidimensional key (e.g., a DataFrame) then the result will be passed to :meth:`DataFrame.__getitem__`. This method uses the top-level :func:`eval` function to evaluate the passed query. The :meth:`~pandas.DataFrame.query` method uses a slightly modified Python syntax by default. For example, the ``&`` and ``|`` (bitwise) operators have the precedence of their boolean cousins, :keyword:`and` and :keyword:`or`. This *is* syntactically valid Python, however the semantics are different. You can change the semantics of the expression by passing the keyword argument ``parser='python'``. This enforces the same semantics as evaluation in Python space. Likewise, you can pass ``engine='python'`` to evaluate an expression using Python itself as a backend. This is not recommended as it is inefficient compared to using ``numexpr`` as the engine. The :attr:`DataFrame.index` and :attr:`DataFrame.columns` attributes of the :class:`~pandas.DataFrame` instance are placed in the query namespace by default, which allows you to treat both the index and columns of the frame as a column in the frame. The identifier ``index`` is used for the frame index; you can also use the name of the index to identify it in a query. Please note that Python keywords may not be used as identifiers. For further details and examples see the ``query`` documentation in :ref:`indexing <indexing.query>`. Examples -------- >>> df = pd.DataFrame({'A': range(1, 6), ... 'B': range(10, 0, -2), ... 'C C': range(10, 5, -1)}) >>> df A B C C 0 1 10 10 1 2 8 9 2 3 6 8 3 4 4 7 4 5 2 6 >>> df.query('A > B') A B C C 4 5 2 6 The previous expression is equivalent to >>> df[df.A > df.B] A B C C 4 5 2 6 For columns with spaces in their name, you can use backtick quoting. >>> df.query('B == `C C`') A B C C 0 1 10 10 The previous expression is equivalent to >>> df[df.B == df['C C']] A B C C 0 1 10 10 """
inplace = validate_bool_kwarg(inplace, 'inplace')
if not isinstance(expr, str):
    msg = "expr must be a string to be evaluated, {0} given"
    raise ValueError(msg.format(type(expr)))
kwargs['level'] = kwargs.pop('level', 0) + 1
kwargs['target'] = None
res = self.eval(expr, **kwargs)

try:
    new_data = self.loc[res]
except ValueError:
    # when res is multi-dimensional loc raises, but this is sometimes a
    # valid query
    new_data = self[res]

if inplace:
    self._update_inplace(new_data)
else:
    return new_data
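The docstring above mentions the ``@`` prefix for environment variables and the ``engine``/``parser`` keywords but does not demonstrate them; the sketch below is illustrative usage only, not part of the pandas source.

# Minimal usage sketch: referring to a local Python variable inside the
# query string with the '@' prefix.
import pandas as pd

df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)})
threshold = 3

# Equivalent to df[df.A > threshold]; 'threshold' is resolved from the
# calling environment because of the '@' prefix.
result = df.query('A > @threshold')

# The same expression evaluated with the pure-Python engine (slower, but
# avoids the numexpr dependency).
result_py = df.query('A > @threshold', engine='python')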
<SYSTEM_TASK:> Evaluate a string describing operations on DataFrame columns. <END_TASK> <USER_TASK:> Description: def eval(self, expr, inplace=False, **kwargs): """ Evaluate a string describing operations on DataFrame columns. Operates on columns only, not specific rows or elements. This allows `eval` to run arbitrary code, which can make you vulnerable to code injection if you pass user input to this function. Parameters ---------- expr : str The expression string to evaluate. inplace : bool, default False If the expression contains an assignment, whether to perform the operation inplace and mutate the existing DataFrame. Otherwise, a new DataFrame is returned. .. versionadded:: 0.18.0. kwargs : dict See the documentation for :func:`eval` for complete details on the keyword arguments accepted by :meth:`~pandas.DataFrame.query`. Returns ------- ndarray, scalar, or pandas object The result of the evaluation. See Also -------- DataFrame.query : Evaluates a boolean expression to query the columns of a frame. DataFrame.assign : Can evaluate an expression or function to create new values for a column. eval : Evaluate a Python expression as a string using various backends. Notes ----- For more details see the API documentation for :func:`~eval`. For detailed examples see :ref:`enhancing performance with eval <enhancingperf.eval>`. Examples -------- >>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)}) >>> df A B 0 1 10 1 2 8 2 3 6 3 4 4 4 5 2 >>> df.eval('A + B') 0 11 1 10 2 9 3 8 4 7 dtype: int64 Assignment is allowed though by default the original DataFrame is not modified. >>> df.eval('C = A + B') A B C 0 1 10 11 1 2 8 10 2 3 6 9 3 4 4 8 4 5 2 7 >>> df A B 0 1 10 1 2 8 2 3 6 3 4 4 4 5 2 Use ``inplace=True`` to modify the original DataFrame. >>> df.eval('C = A + B', inplace=True) >>> df A B C 0 1 10 11 1 2 8 10 2 3 6 9 3 4 4 8 4 5 2 7 """
from pandas.core.computation.eval import eval as _eval

inplace = validate_bool_kwarg(inplace, 'inplace')
resolvers = kwargs.pop('resolvers', None)
kwargs['level'] = kwargs.pop('level', 0) + 1
if resolvers is None:
    index_resolvers = self._get_index_resolvers()
    column_resolvers = \
        self._get_space_character_free_column_resolvers()
    resolvers = column_resolvers, index_resolvers
if 'target' not in kwargs:
    kwargs['target'] = self
kwargs['resolvers'] = kwargs.get('resolvers', ()) + tuple(resolvers)

return _eval(expr, inplace=inplace, **kwargs)
<SYSTEM_TASK:> Return a subset of the DataFrame's columns based on the column dtypes. <END_TASK> <USER_TASK:> Description: def select_dtypes(self, include=None, exclude=None): """ Return a subset of the DataFrame's columns based on the column dtypes. Parameters ---------- include, exclude : scalar or list-like A selection of dtypes or strings to be included/excluded. At least one of these parameters must be supplied. Returns ------- DataFrame The subset of the frame including the dtypes in ``include`` and excluding the dtypes in ``exclude``. Raises ------ ValueError * If both of ``include`` and ``exclude`` are empty * If ``include`` and ``exclude`` have overlapping elements * If any kind of string dtype is passed in. Notes ----- * To select all *numeric* types, use ``np.number`` or ``'number'`` * To select strings you must use the ``object`` dtype, but note that this will return *all* object dtype columns * See the `numpy dtype hierarchy <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`__ * To select datetimes, use ``np.datetime64``, ``'datetime'`` or ``'datetime64'`` * To select timedeltas, use ``np.timedelta64``, ``'timedelta'`` or ``'timedelta64'`` * To select Pandas categorical dtypes, use ``'category'`` * To select Pandas datetimetz dtypes, use ``'datetimetz'`` (new in 0.20.0) or ``'datetime64[ns, tz]'`` Examples -------- >>> df = pd.DataFrame({'a': [1, 2] * 3, ... 'b': [True, False] * 3, ... 'c': [1.0, 2.0] * 3}) >>> df a b c 0 1 True 1.0 1 2 False 2.0 2 1 True 1.0 3 2 False 2.0 4 1 True 1.0 5 2 False 2.0 >>> df.select_dtypes(include='bool') b 0 True 1 False 2 True 3 False 4 True 5 False >>> df.select_dtypes(include=['float64']) c 0 1.0 1 2.0 2 1.0 3 2.0 4 1.0 5 2.0 >>> df.select_dtypes(exclude=['int']) b c 0 True 1.0 1 False 2.0 2 True 1.0 3 False 2.0 4 True 1.0 5 False 2.0 """
def _get_info_slice(obj, indexer): """Slice the info axis of `obj` with `indexer`.""" if not hasattr(obj, '_info_axis_number'): msg = 'object of type {typ!r} has no info axis' raise TypeError(msg.format(typ=type(obj).__name__)) slices = [slice(None)] * obj.ndim slices[obj._info_axis_number] = indexer return tuple(slices) if not is_list_like(include): include = (include,) if include is not None else () if not is_list_like(exclude): exclude = (exclude,) if exclude is not None else () selection = tuple(map(frozenset, (include, exclude))) if not any(selection): raise ValueError('at least one of include or exclude must be ' 'nonempty') # convert the myriad valid dtypes object to a single representation include, exclude = map( lambda x: frozenset(map(infer_dtype_from_object, x)), selection) for dtypes in (include, exclude): invalidate_string_dtypes(dtypes) # can't both include AND exclude! if not include.isdisjoint(exclude): raise ValueError('include and exclude overlap on {inc_ex}'.format( inc_ex=(include & exclude))) # empty include/exclude -> defaults to True # three cases (we've already raised if both are empty) # case 1: empty include, nonempty exclude # we have True, True, ... True for include, same for exclude # in the loop below we get the excluded # and when we call '&' below we get only the excluded # case 2: nonempty include, empty exclude # same as case 1, but with include # case 3: both nonempty # the "union" of the logic of case 1 and case 2: # we get the included and excluded, and return their logical and include_these = Series(not bool(include), index=self.columns) exclude_these = Series(not bool(exclude), index=self.columns) def is_dtype_instance_mapper(idx, dtype): return idx, functools.partial(issubclass, dtype.type) for idx, f in itertools.starmap(is_dtype_instance_mapper, enumerate(self.dtypes)): if include: # checks for the case of empty include or exclude include_these.iloc[idx] = any(map(f, include)) if exclude: exclude_these.iloc[idx] = not any(map(f, exclude)) dtype_indexer = include_these & exclude_these return self.loc[_get_info_slice(self, dtype_indexer)]
<SYSTEM_TASK:> Provide boxed values for a column. <END_TASK> <USER_TASK:> Description: def _box_col_values(self, values, items): """ Provide boxed values for a column. """
klass = self._constructor_sliced
return klass(values, index=self.index, name=items, fastpath=True)
<SYSTEM_TASK:> Ensure that if we don't have an index, that we can create one from the <END_TASK> <USER_TASK:> Description: def _ensure_valid_index(self, value): """ Ensure that if we don't have an index, that we can create one from the passed value. """
# GH5632, make sure that we are a Series convertible
if not len(self.index) and is_list_like(value):
    try:
        value = Series(value)
    except (ValueError, NotImplementedError, TypeError):
        raise ValueError('Cannot set a frame with no defined index '
                         'and a value that cannot be converted to a '
                         'Series')

    self._data = self._data.reindex_axis(value.index.copy(), axis=1,
                                         fill_value=np.nan)
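For context, a small illustrative example (not part of the pandas source) of the behavior this private helper enables: assigning a list-like value to a column of a completely empty DataFrame adopts that value's index.

import pandas as pd

df = pd.DataFrame()                      # no index, no columns
df['a'] = pd.Series([1, 2, 3])           # index [0, 1, 2] is created here
print(df.index.tolist())                 # [0, 1, 2]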
<SYSTEM_TASK:> Add series to DataFrame in specified column. <END_TASK> <USER_TASK:> Description: def _set_item(self, key, value): """ Add series to DataFrame in specified column. If series is a numpy-array (not a Series/TimeSeries), it must be the same length as the DataFrames index or an error will be thrown. Series/TimeSeries will be conformed to the DataFrames index to ensure homogeneity. """
self._ensure_valid_index(value)
value = self._sanitize_column(key, value)
NDFrame._set_item(self, key, value)

# check if we are modifying a copy
# try to set first as we want an invalid
# value exception to occur first
if len(self):
    self._check_setitem_copy()
<SYSTEM_TASK:> Insert column into DataFrame at specified location. <END_TASK> <USER_TASK:> Description: def insert(self, loc, column, value, allow_duplicates=False): """ Insert column into DataFrame at specified location. Raises a ValueError if `column` is already contained in the DataFrame, unless `allow_duplicates` is set to True. Parameters ---------- loc : int Insertion index. Must verify 0 <= loc <= len(columns) column : string, number, or hashable object label of the inserted column value : int, Series, or array-like allow_duplicates : bool, optional """
self._ensure_valid_index(value)
value = self._sanitize_column(column, value, broadcast=False)
self._data.insert(loc, column, value,
                  allow_duplicates=allow_duplicates)
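The insert docstring above has no Examples section; a short illustrative usage sketch (not from the pandas source) follows.

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
df.insert(1, 'A_doubled', df['A'] * 2)   # modifies df in place, returns None
print(df.columns.tolist())               # ['A', 'A_doubled', 'B']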
<SYSTEM_TASK:> r""" <END_TASK> <USER_TASK:> Description: def assign(self, **kwargs): r""" Assign new columns to a DataFrame. Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten. Parameters ---------- **kwargs : dict of {str: callable or Series} The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn't check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned. Returns ------- DataFrame A new DataFrame with the new columns in addition to all the existing columns. Notes ----- Assigning multiple columns within the same ``assign`` is possible. For Python 3.6 and above, later items in '\*\*kwargs' may refer to newly created or modified columns in 'df'; items are computed and assigned into 'df' in order. For Python 3.5 and below, the order of keyword arguments is not specified, you cannot refer to newly created or modified columns. All items are computed first, and then assigned in alphabetical order. .. versionchanged :: 0.23.0 Keyword argument order is maintained for Python 3.6 and later. Examples -------- >>> df = pd.DataFrame({'temp_c': [17.0, 25.0]}, ... index=['Portland', 'Berkeley']) >>> df temp_c Portland 17.0 Berkeley 25.0 Where the value is a callable, evaluated on `df`: >>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32) temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0 Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence: >>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32) temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0 In Python 3.6+, you can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign: >>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32, ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) temp_c temp_f temp_k Portland 17.0 62.6 290.15 Berkeley 25.0 77.0 298.15 """
data = self.copy()

# >= 3.6 preserve order of kwargs
if PY36:
    for k, v in kwargs.items():
        data[k] = com.apply_if_callable(v, data)
else:
    # <= 3.5: do all calculations first...
    results = OrderedDict()
    for k, v in kwargs.items():
        results[k] = com.apply_if_callable(v, data)

    # <= 3.5 and earlier
    results = sorted(results.items())
    # ... and then assign
    for k, v in results:
        data[k] = v
return data
<SYSTEM_TASK:> Label-based "fancy indexing" function for DataFrame. <END_TASK> <USER_TASK:> Description: def lookup(self, row_labels, col_labels): """ Label-based "fancy indexing" function for DataFrame. Given equal-length arrays of row and column labels, return an array of the values corresponding to each (row, col) pair. Parameters ---------- row_labels : sequence The row labels to use for lookup. col_labels : sequence The column labels to use for lookup. Returns ------- values : ndarray The found values. Notes ----- Akin to:: result = [df.get_value(row, col) for row, col in zip(row_labels, col_labels)] """
n = len(row_labels)
if n != len(col_labels):
    raise ValueError('Row labels must have same size as column labels')

thresh = 1000
if not self._is_mixed_type or n > thresh:
    values = self.values
    ridx = self.index.get_indexer(row_labels)
    cidx = self.columns.get_indexer(col_labels)
    if (ridx == -1).any():
        raise KeyError('One or more row labels was not found')
    if (cidx == -1).any():
        raise KeyError('One or more column labels was not found')
    flat_index = ridx * len(self.columns) + cidx
    result = values.flat[flat_index]
else:
    result = np.empty(n, dtype='O')
    for i, (r, c) in enumerate(zip(row_labels, col_labels)):
        result[i] = self._get_value(r, c)

if is_object_dtype(result):
    result = lib.maybe_convert_objects(result)

return result
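Since the lookup docstring carries no worked example, a short illustrative usage sketch (not from the pandas source) follows; note that ``DataFrame.lookup`` was later deprecated and removed in modern pandas.

import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30], 'B': [1, 2, 3]},
                  index=['x', 'y', 'z'])

# Pick one value per row, with the column chosen per row.
vals = df.lookup(['x', 'y', 'z'], ['A', 'B', 'A'])
print(vals)  # array([10,  2, 30])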
<SYSTEM_TASK:> We are guaranteed non-Nones in the axes. <END_TASK> <USER_TASK:> Description: def _reindex_multi(self, axes, copy, fill_value): """ We are guaranteed non-Nones in the axes. """
new_index, row_indexer = self.index.reindex(axes['index'])
new_columns, col_indexer = self.columns.reindex(axes['columns'])

if row_indexer is not None and col_indexer is not None:
    indexer = row_indexer, col_indexer
    new_values = algorithms.take_2d_multi(self.values, indexer,
                                          fill_value=fill_value)
    return self._constructor(new_values, index=new_index,
                             columns=new_columns)
else:
    return self._reindex_with_indexers({0: [new_index, row_indexer],
                                        1: [new_columns, col_indexer]},
                                       copy=copy,
                                       fill_value=fill_value)
<SYSTEM_TASK:> Drop specified labels from rows or columns. <END_TASK> <USER_TASK:> Description: def drop(self, labels=None, axis=0, index=None, columns=None, level=None, inplace=False, errors='raise'): """ Drop specified labels from rows or columns. Remove rows or columns by specifying label names and corresponding axis, or by specifying directly index or column names. When using a multi-index, labels on different levels can be removed by specifying the level. Parameters ---------- labels : single label or list-like Index or column labels to drop. axis : {0 or 'index', 1 or 'columns'}, default 0 Whether to drop labels from the index (0 or 'index') or columns (1 or 'columns'). index : single label or list-like Alternative to specifying axis (``labels, axis=0`` is equivalent to ``index=labels``). .. versionadded:: 0.21.0 columns : single label or list-like Alternative to specifying axis (``labels, axis=1`` is equivalent to ``columns=labels``). .. versionadded:: 0.21.0 level : int or level name, optional For MultiIndex, level from which the labels will be removed. inplace : bool, default False If True, do operation inplace and return None. errors : {'ignore', 'raise'}, default 'raise' If 'ignore', suppress error and only existing labels are dropped. Returns ------- DataFrame DataFrame without the removed index or column labels. Raises ------ KeyError If any of the labels is not found in the selected axis. See Also -------- DataFrame.loc : Label-location based indexer for selection by label. DataFrame.dropna : Return DataFrame with labels on given axis omitted where (all or any) data are missing. DataFrame.drop_duplicates : Return DataFrame with duplicate rows removed, optionally only considering certain columns. Series.drop : Return Series with specified index labels removed. Examples -------- >>> df = pd.DataFrame(np.arange(12).reshape(3, 4), ... columns=['A', 'B', 'C', 'D']) >>> df A B C D 0 0 1 2 3 1 4 5 6 7 2 8 9 10 11 Drop columns >>> df.drop(['B', 'C'], axis=1) A D 0 0 3 1 4 7 2 8 11 >>> df.drop(columns=['B', 'C']) A D 0 0 3 1 4 7 2 8 11 Drop a row by index >>> df.drop([0, 1]) A B C D 2 8 9 10 11 Drop columns and/or rows of MultiIndex DataFrame >>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'], ... ['speed', 'weight', 'length']], ... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], ... [0, 1, 2, 0, 1, 2, 0, 1, 2]]) >>> df = pd.DataFrame(index=midx, columns=['big', 'small'], ... data=[[45, 30], [200, 100], [1.5, 1], [30, 20], ... [250, 150], [1.5, 0.8], [320, 250], ... [1, 0.8], [0.3, 0.2]]) >>> df big small lama speed 45.0 30.0 weight 200.0 100.0 length 1.5 1.0 cow speed 30.0 20.0 weight 250.0 150.0 length 1.5 0.8 falcon speed 320.0 250.0 weight 1.0 0.8 length 0.3 0.2 >>> df.drop(index='cow', columns='small') big lama speed 45.0 weight 200.0 length 1.5 falcon speed 320.0 weight 1.0 length 0.3 >>> df.drop(index='length', level=1) big small lama speed 45.0 30.0 weight 200.0 100.0 cow speed 30.0 20.0 weight 250.0 150.0 falcon speed 320.0 250.0 weight 1.0 0.8 """
return super().drop(labels=labels, axis=axis, index=index, columns=columns, level=level, inplace=inplace, errors=errors)
<SYSTEM_TASK:> Alter axes labels. <END_TASK> <USER_TASK:> Description: def rename(self, *args, **kwargs): """ Alter axes labels. Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don't throw an error. See the :ref:`user guide <basics.rename>` for more. Parameters ---------- mapper : dict-like or function Dict-like or functions transformations to apply to that axis' values. Use either ``mapper`` and ``axis`` to specify the axis to target with ``mapper``, or ``index`` and ``columns``. index : dict-like or function Alternative to specifying axis (``mapper, axis=0`` is equivalent to ``index=mapper``). columns : dict-like or function Alternative to specifying axis (``mapper, axis=1`` is equivalent to ``columns=mapper``). axis : int or str Axis to target with ``mapper``. Can be either the axis name ('index', 'columns') or number (0, 1). The default is 'index'. copy : bool, default True Also copy underlying data. inplace : bool, default False Whether to return a new DataFrame. If True then value of copy is ignored. level : int or level name, default None In case of a MultiIndex, only rename labels in the specified level. errors : {'ignore', 'raise'}, default 'ignore' If 'raise', raise a `KeyError` when a dict-like `mapper`, `index`, or `columns` contains labels that are not present in the Index being transformed. If 'ignore', existing keys will be renamed and extra keys will be ignored. Returns ------- DataFrame DataFrame with the renamed axis labels. Raises ------ KeyError If any of the labels is not found in the selected axis and "errors='raise'". See Also -------- DataFrame.rename_axis : Set the name of the axis. Examples -------- ``DataFrame.rename`` supports two calling conventions * ``(index=index_mapper, columns=columns_mapper, ...)`` * ``(mapper, axis={'index', 'columns'}, ...)`` We *highly* recommend using keyword arguments to clarify your intent. >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) >>> df.rename(index=str, columns={"A": "a", "B": "c"}) a c 0 1 4 1 2 5 2 3 6 >>> df.rename(index=str, columns={"A": "a", "C": "c"}) a B 0 1 4 1 2 5 2 3 6 >>> df.rename(index=str, columns={"A": "a", "C": "c"}, errors="raise") Traceback (most recent call last): KeyError: ['C'] not found in axis Using axis-style parameters >>> df.rename(str.lower, axis='columns') a b 0 1 4 1 2 5 2 3 6 >>> df.rename({1: 2, 2: 4}, axis='index') A B 0 1 4 2 2 5 4 3 6 """
axes = validate_axis_style_args(self, args, kwargs,
                                'mapper', 'rename')
kwargs.update(axes)
# Pop these, since the values are in `kwargs` under different names
kwargs.pop('axis', None)
kwargs.pop('mapper', None)
return super().rename(**kwargs)
<SYSTEM_TASK:> Remove missing values. <END_TASK> <USER_TASK:> Description: def dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False): """ Remove missing values. See the :ref:`User Guide <missing_data>` for more on which values are considered missing, and how to work with missing data. Parameters ---------- axis : {0 or 'index', 1 or 'columns'}, default 0 Determine if rows or columns which contain missing values are removed. * 0, or 'index' : Drop rows which contain missing values. * 1, or 'columns' : Drop columns which contain missing value. .. deprecated:: 0.23.0 Pass tuple or list to drop on multiple axes. Only a single axis is allowed. how : {'any', 'all'}, default 'any' Determine if row or column is removed from DataFrame, when we have at least one NA or all NA. * 'any' : If any NA values are present, drop that row or column. * 'all' : If all values are NA, drop that row or column. thresh : int, optional Require that many non-NA values. subset : array-like, optional Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include. inplace : bool, default False If True, do operation inplace and return None. Returns ------- DataFrame DataFrame with NA entries dropped from it. See Also -------- DataFrame.isna: Indicate missing values. DataFrame.notna : Indicate existing (non-missing) values. DataFrame.fillna : Replace missing values. Series.dropna : Drop missing values. Index.dropna : Drop missing indices. Examples -------- >>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'], ... "toy": [np.nan, 'Batmobile', 'Bullwhip'], ... "born": [pd.NaT, pd.Timestamp("1940-04-25"), ... pd.NaT]}) >>> df name toy born 0 Alfred NaN NaT 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT Drop the rows where at least one element is missing. >>> df.dropna() name toy born 1 Batman Batmobile 1940-04-25 Drop the columns where at least one element is missing. >>> df.dropna(axis='columns') name 0 Alfred 1 Batman 2 Catwoman Drop the rows where all elements are missing. >>> df.dropna(how='all') name toy born 0 Alfred NaN NaT 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT Keep only the rows with at least 2 non-NA values. >>> df.dropna(thresh=2) name toy born 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT Define in which columns to look for missing values. >>> df.dropna(subset=['name', 'born']) name toy born 1 Batman Batmobile 1940-04-25 Keep the DataFrame with valid entries in the same variable. >>> df.dropna(inplace=True) >>> df name toy born 1 Batman Batmobile 1940-04-25 """
inplace = validate_bool_kwarg(inplace, 'inplace')
if isinstance(axis, (tuple, list)):
    # GH20987
    msg = ("supplying multiple axes to axis is deprecated and "
           "will be removed in a future version.")
    warnings.warn(msg, FutureWarning, stacklevel=2)

    result = self
    for ax in axis:
        result = result.dropna(how=how, thresh=thresh, subset=subset,
                               axis=ax)
else:
    axis = self._get_axis_number(axis)
    agg_axis = 1 - axis

    agg_obj = self
    if subset is not None:
        ax = self._get_axis(agg_axis)
        indices = ax.get_indexer_for(subset)
        check = indices == -1
        if check.any():
            raise KeyError(list(np.compress(check, subset)))
        agg_obj = self.take(indices, axis=agg_axis)

    count = agg_obj.count(axis=agg_axis)

    if thresh is not None:
        mask = count >= thresh
    elif how == 'any':
        mask = count == len(agg_obj._get_axis(agg_axis))
    elif how == 'all':
        mask = count > 0
    else:
        if how is not None:
            raise ValueError('invalid how option: {h}'.format(h=how))
        else:
            raise TypeError('must specify how or thresh')

    result = self.loc(axis=axis)[mask]

if inplace:
    self._update_inplace(result)
else:
    return result
<SYSTEM_TASK:> Return DataFrame with duplicate rows removed, optionally only <END_TASK> <USER_TASK:> Description: def drop_duplicates(self, subset=None, keep='first', inplace=False): """ Return DataFrame with duplicate rows removed, optionally only considering certain columns. Indexes, including time indexes are ignored. Parameters ---------- subset : column label or sequence of labels, optional Only consider certain columns for identifying duplicates, by default use all of the columns keep : {'first', 'last', False}, default 'first' - ``first`` : Drop duplicates except for the first occurrence. - ``last`` : Drop duplicates except for the last occurrence. - False : Drop all duplicates. inplace : boolean, default False Whether to drop duplicates in place or to return a copy Returns ------- DataFrame """
if self.empty:
    return self.copy()

inplace = validate_bool_kwarg(inplace, 'inplace')
duplicated = self.duplicated(subset, keep=keep)

if inplace:
    # ~duplicated marks the rows to keep
    inds, = (~duplicated)._ndarray_values.nonzero()
    new_data = self._data.take(inds)
    self._update_inplace(new_data)
else:
    return self[~duplicated]
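The drop_duplicates docstring above has no Examples section; a short illustrative usage sketch (not from the pandas source) follows.

import pandas as pd

df = pd.DataFrame({'brand': ['Yum', 'Yum', 'Indomie'],
                   'style': ['cup', 'cup', 'pack']})

df.drop_duplicates()                    # keeps the first 'Yum'/'cup' row
df.drop_duplicates(subset=['brand'])    # deduplicate on 'brand' only
df.drop_duplicates(keep=False)          # drop every duplicated row entirely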
<SYSTEM_TASK:> Return boolean Series denoting duplicate rows, optionally only <END_TASK> <USER_TASK:> Description: def duplicated(self, subset=None, keep='first'): """ Return boolean Series denoting duplicate rows, optionally only considering certain columns. Parameters ---------- subset : column label or sequence of labels, optional Only consider certain columns for identifying duplicates, by default use all of the columns keep : {'first', 'last', False}, default 'first' - ``first`` : Mark duplicates as ``True`` except for the first occurrence. - ``last`` : Mark duplicates as ``True`` except for the last occurrence. - False : Mark all duplicates as ``True``. Returns ------- Series """
from pandas.core.sorting import get_group_index
from pandas._libs.hashtable import duplicated_int64, _SIZE_HINT_LIMIT

if self.empty:
    return Series(dtype=bool)

def f(vals):
    labels, shape = algorithms.factorize(
        vals, size_hint=min(len(self), _SIZE_HINT_LIMIT))
    return labels.astype('i8', copy=False), len(shape)

if subset is None:
    subset = self.columns
elif (not np.iterable(subset) or
      isinstance(subset, str) or
      isinstance(subset, tuple) and subset in self.columns):
    subset = subset,

# Verify all columns in subset exist in the queried dataframe
# Otherwise, raise a KeyError, same as if you try to __getitem__ with a
# key that doesn't exist.
diff = Index(subset).difference(self.columns)
if not diff.empty:
    raise KeyError(diff)

vals = (col.values for name, col in self.iteritems()
        if name in subset)
labels, shape = map(list, zip(*map(f, vals)))

ids = get_group_index(labels, shape, sort=False, xnull=False)
return Series(duplicated_int64(ids, keep), index=self.index)
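The duplicated docstring above has no Examples section; a short illustrative usage sketch (not from the pandas source) follows.

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'x', 'y']})

df.duplicated()                           # [False, True, False], full-row comparison
df.duplicated(keep='last')                # [True, False, False]
df.duplicated(subset=['a'], keep=False)   # [True, True, False]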
<SYSTEM_TASK:> Return the first `n` rows ordered by `columns` in descending order. <END_TASK> <USER_TASK:> Description: def nlargest(self, n, columns, keep='first'): """ Return the first `n` rows ordered by `columns` in descending order. Return the first `n` rows with the largest values in `columns`, in descending order. The columns that are not specified are returned as well, but not used for ordering. This method is equivalent to ``df.sort_values(columns, ascending=False).head(n)``, but more performant. Parameters ---------- n : int Number of rows to return. columns : label or list of labels Column label(s) to order by. keep : {'first', 'last', 'all'}, default 'first' Where there are duplicate values: - `first` : prioritize the first occurrence(s) - `last` : prioritize the last occurrence(s) - ``all`` : do not drop any duplicates, even it means selecting more than `n` items. .. versionadded:: 0.24.0 Returns ------- DataFrame The first `n` rows ordered by the given columns in descending order. See Also -------- DataFrame.nsmallest : Return the first `n` rows ordered by `columns` in ascending order. DataFrame.sort_values : Sort DataFrame by the values. DataFrame.head : Return the first `n` rows without re-ordering. Notes ----- This function cannot be used with all column types. For example, when specifying columns with `object` or `category` dtypes, ``TypeError`` is raised. Examples -------- >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, ... 434000, 434000, 337000, 11300, ... 11300, 11300], ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, ... 17036, 182, 38, 311], ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", ... "IS", "NR", "TV", "AI"]}, ... index=["Italy", "France", "Malta", ... "Maldives", "Brunei", "Iceland", ... "Nauru", "Tuvalu", "Anguilla"]) >>> df population GDP alpha-2 Italy 59000000 1937894 IT France 65000000 2583560 FR Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN Iceland 337000 17036 IS Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI In the following example, we will use ``nlargest`` to select the three rows having the largest values in column "population". >>> df.nlargest(3, 'population') population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Malta 434000 12011 MT When using ``keep='last'``, ties are resolved in reverse order: >>> df.nlargest(3, 'population', keep='last') population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Brunei 434000 12128 BN When using ``keep='all'``, all duplicate items are maintained: >>> df.nlargest(3, 'population', keep='all') population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN To order by the largest values in column "population" and then "GDP", we can specify multiple columns like in the next example. >>> df.nlargest(3, ['population', 'GDP']) population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Brunei 434000 12128 BN """
return algorithms.SelectNFrame(self, n=n, keep=keep, columns=columns).nlargest()
<SYSTEM_TASK:> Return the first `n` rows ordered by `columns` in ascending order. <END_TASK> <USER_TASK:> Description: def nsmallest(self, n, columns, keep='first'): """ Return the first `n` rows ordered by `columns` in ascending order. Return the first `n` rows with the smallest values in `columns`, in ascending order. The columns that are not specified are returned as well, but not used for ordering. This method is equivalent to ``df.sort_values(columns, ascending=True).head(n)``, but more performant. Parameters ---------- n : int Number of items to retrieve. columns : list or str Column name or names to order by. keep : {'first', 'last', 'all'}, default 'first' Where there are duplicate values: - ``first`` : take the first occurrence. - ``last`` : take the last occurrence. - ``all`` : do not drop any duplicates, even it means selecting more than `n` items. .. versionadded:: 0.24.0 Returns ------- DataFrame See Also -------- DataFrame.nlargest : Return the first `n` rows ordered by `columns` in descending order. DataFrame.sort_values : Sort DataFrame by the values. DataFrame.head : Return the first `n` rows without re-ordering. Examples -------- >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, ... 434000, 434000, 337000, 11300, ... 11300, 11300], ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, ... 17036, 182, 38, 311], ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", ... "IS", "NR", "TV", "AI"]}, ... index=["Italy", "France", "Malta", ... "Maldives", "Brunei", "Iceland", ... "Nauru", "Tuvalu", "Anguilla"]) >>> df population GDP alpha-2 Italy 59000000 1937894 IT France 65000000 2583560 FR Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN Iceland 337000 17036 IS Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI In the following example, we will use ``nsmallest`` to select the three rows having the smallest values in column "a". >>> df.nsmallest(3, 'population') population GDP alpha-2 Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI When using ``keep='last'``, ties are resolved in reverse order: >>> df.nsmallest(3, 'population', keep='last') population GDP alpha-2 Anguilla 11300 311 AI Tuvalu 11300 38 TV Nauru 11300 182 NR When using ``keep='all'``, all duplicate items are maintained: >>> df.nsmallest(3, 'population', keep='all') population GDP alpha-2 Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI To order by the largest values in column "a" and then "c", we can specify multiple columns like in the next example. >>> df.nsmallest(3, ['population', 'GDP']) population GDP alpha-2 Tuvalu 11300 38 TV Nauru 11300 182 NR Anguilla 11300 311 AI """
return algorithms.SelectNFrame(self, n=n, keep=keep, columns=columns).nsmallest()
<SYSTEM_TASK:> Swap levels i and j in a MultiIndex on a particular axis. <END_TASK> <USER_TASK:> Description: def swaplevel(self, i=-2, j=-1, axis=0): """ Swap levels i and j in a MultiIndex on a particular axis. Parameters ---------- i, j : int, string (can be mixed) Level of index to be swapped. Can pass level name as string. Returns ------- DataFrame .. versionchanged:: 0.18.1 The indexes ``i`` and ``j`` are now optional, and default to the two innermost levels of the index. """
result = self.copy()

axis = self._get_axis_number(axis)
if axis == 0:
    result.index = result.index.swaplevel(i, j)
else:
    result.columns = result.columns.swaplevel(i, j)
return result
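The swaplevel docstring above has no Examples section; a short illustrative usage sketch (not from the pandas source) follows.

import pandas as pd

midx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
                                 names=['letter', 'number'])
df = pd.DataFrame({'val': [10, 20, 30]}, index=midx)

# With the defaults, the two innermost (here, the only) levels are swapped.
swapped = df.swaplevel()
print(swapped.index.names)  # ['number', 'letter']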
<SYSTEM_TASK:> Rearrange index levels using input order. May not drop or <END_TASK> <USER_TASK:> Description: def reorder_levels(self, order, axis=0): """ Rearrange index levels using input order. May not drop or duplicate levels. Parameters ---------- order : list of int or list of str List representing new level order. Reference level by number (position) or by key (label). axis : int Where to reorder levels. Returns ------- type of caller (new object) """
axis = self._get_axis_number(axis)
if not isinstance(self._get_axis(axis),
                  MultiIndex):  # pragma: no cover
    raise TypeError('Can only reorder levels on a hierarchical axis.')

result = self.copy()

if axis == 0:
    result.index = result.index.reorder_levels(order)
else:
    result.columns = result.columns.reorder_levels(order)
return result
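The reorder_levels docstring above has no Examples section; a short illustrative usage sketch (not from the pandas source) follows.

import pandas as pd

midx = pd.MultiIndex.from_product([['a', 'b'], [1, 2], ['x', 'y']],
                                  names=['L0', 'L1', 'L2'])
df = pd.DataFrame({'val': range(8)}, index=midx)

# Move level 'L2' to the front; by position this is order=[2, 0, 1].
reordered = df.reorder_levels(['L2', 'L0', 'L1'])
print(reordered.index.names)  # ['L2', 'L0', 'L1']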
<SYSTEM_TASK:> Perform column-wise combine with another DataFrame. <END_TASK> <USER_TASK:> Description: def combine(self, other, func, fill_value=None, overwrite=True): """ Perform column-wise combine with another DataFrame. Combines a DataFrame with `other` DataFrame using `func` to element-wise combine columns. The row and column indexes of the resulting DataFrame will be the union of the two. Parameters ---------- other : DataFrame The DataFrame to merge column-wise. func : function Function that takes two series as inputs and return a Series or a scalar. Used to merge the two dataframes column by columns. fill_value : scalar value, default None The value to fill NaNs with prior to passing any column to the merge func. overwrite : bool, default True If True, columns in `self` that do not exist in `other` will be overwritten with NaNs. Returns ------- DataFrame Combination of the provided DataFrames. See Also -------- DataFrame.combine_first : Combine two DataFrame objects and default to non-null values in frame calling the method. Examples -------- Combine using a simple function that chooses the smaller column. >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) >>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2 >>> df1.combine(df2, take_smaller) A B 0 0 3 1 0 3 Example using a true element-wise combine function. >>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]}) >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) >>> df1.combine(df2, np.minimum) A B 0 1 2 1 0 3 Using `fill_value` fills Nones prior to passing the column to the merge function. >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) >>> df1.combine(df2, take_smaller, fill_value=-5) A B 0 0 -5.0 1 0 4.0 However, if the same element in both dataframes is None, that None is preserved >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]}) >>> df1.combine(df2, take_smaller, fill_value=-5) A B 0 0 -5.0 1 0 3.0 Example that demonstrates the use of `overwrite` and behavior when the axis differ between the dataframes. >>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2]) >>> df1.combine(df2, take_smaller) A B C 0 NaN NaN NaN 1 NaN 3.0 -10.0 2 NaN 3.0 1.0 >>> df1.combine(df2, take_smaller, overwrite=False) A B C 0 0.0 NaN NaN 1 0.0 3.0 -10.0 2 NaN 3.0 1.0 Demonstrating the preference of the passed in dataframe. >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2]) >>> df2.combine(df1, take_smaller) A B C 0 0.0 NaN NaN 1 0.0 3.0 NaN 2 NaN 3.0 NaN >>> df2.combine(df1, take_smaller, overwrite=False) A B C 0 0.0 NaN NaN 1 0.0 3.0 1.0 2 NaN 3.0 1.0 """
other_idxlen = len(other.index) # save for compare this, other = self.align(other, copy=False) new_index = this.index if other.empty and len(new_index) == len(self.index): return self.copy() if self.empty and len(other) == other_idxlen: return other.copy() # sorts if possible new_columns = this.columns.union(other.columns) do_fill = fill_value is not None result = {} for col in new_columns: series = this[col] otherSeries = other[col] this_dtype = series.dtype other_dtype = otherSeries.dtype this_mask = isna(series) other_mask = isna(otherSeries) # don't overwrite columns unecessarily # DO propagate if this column is not in the intersection if not overwrite and other_mask.all(): result[col] = this[col].copy() continue if do_fill: series = series.copy() otherSeries = otherSeries.copy() series[this_mask] = fill_value otherSeries[other_mask] = fill_value if col not in self.columns: # If self DataFrame does not have col in other DataFrame, # try to promote series, which is all NaN, as other_dtype. new_dtype = other_dtype try: series = series.astype(new_dtype, copy=False) except ValueError: # e.g. new_dtype is integer types pass else: # if we have different dtypes, possibly promote new_dtype = find_common_type([this_dtype, other_dtype]) if not is_dtype_equal(this_dtype, new_dtype): series = series.astype(new_dtype) if not is_dtype_equal(other_dtype, new_dtype): otherSeries = otherSeries.astype(new_dtype) arr = func(series, otherSeries) arr = maybe_downcast_to_dtype(arr, this_dtype) result[col] = arr # convert_objects just in case return self._constructor(result, index=new_index, columns=new_columns)
<SYSTEM_TASK:> Update null elements with value in the same location in `other`. <END_TASK> <USER_TASK:> Description: def combine_first(self, other): """ Update null elements with value in the same location in `other`. Combine two DataFrame objects by filling null values in one DataFrame with non-null values from other DataFrame. The row and column indexes of the resulting DataFrame will be the union of the two. Parameters ---------- other : DataFrame Provided DataFrame to use to fill null values. Returns ------- DataFrame See Also -------- DataFrame.combine : Perform series-wise operation on two DataFrames using a given function. Examples -------- >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]}) >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) >>> df1.combine_first(df2) A B 0 1.0 3.0 1 0.0 4.0 Null values still persist if the location of that null value does not exist in `other` >>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]}) >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2]) >>> df1.combine_first(df2) A B C 0 NaN 4.0 NaN 1 0.0 3.0 1.0 2 NaN 3.0 1.0 """
import pandas.core.computation.expressions as expressions

def extract_values(arr):
    # Does two things:
    # 1. maybe gets the values from the Series / Index
    # 2. convert datelike to i8
    if isinstance(arr, (ABCIndexClass, ABCSeries)):
        arr = arr._values

    if needs_i8_conversion(arr):
        if is_extension_array_dtype(arr.dtype):
            arr = arr.asi8
        else:
            arr = arr.view('i8')
    return arr

def combiner(x, y):
    mask = isna(x)
    if isinstance(mask, (ABCIndexClass, ABCSeries)):
        mask = mask._values

    x_values = extract_values(x)
    y_values = extract_values(y)

    # If the column y in other DataFrame is not in first DataFrame,
    # just return y_values.
    if y.name not in self.columns:
        return y_values

    return expressions.where(mask, y_values, x_values)

return self.combine(other, combiner, overwrite=False)
<SYSTEM_TASK:> Modify in place using non-NA values from another DataFrame. <END_TASK> <USER_TASK:> Description: def update(self, other, join='left', overwrite=True, filter_func=None, errors='ignore'): """ Modify in place using non-NA values from another DataFrame. Aligns on indices. There is no return value. Parameters ---------- other : DataFrame, or object coercible into a DataFrame Should have at least one matching index/column label with the original DataFrame. If a Series is passed, its name attribute must be set, and that will be used as the column name to align with the original DataFrame. join : {'left'}, default 'left' Only left join is implemented, keeping the index and columns of the original object. overwrite : bool, default True How to handle non-NA values for overlapping keys: * True: overwrite original DataFrame's values with values from `other`. * False: only update values that are NA in the original DataFrame. filter_func : callable(1d-array) -> bool 1d-array, optional Can choose to replace values other than NA. Return True for values that should be updated. errors : {'raise', 'ignore'}, default 'ignore' If 'raise', will raise a ValueError if the DataFrame and `other` both contain non-NA data in the same place. .. versionchanged :: 0.24.0 Changed from `raise_conflict=False|True` to `errors='ignore'|'raise'`. Returns ------- None : method directly changes calling object Raises ------ ValueError * When `errors='raise'` and there's overlapping non-NA data. * When `errors` is not either `'ignore'` or `'raise'` NotImplementedError * If `join != 'left'` See Also -------- dict.update : Similar method for dictionaries. DataFrame.merge : For column(s)-on-columns(s) operations. Examples -------- >>> df = pd.DataFrame({'A': [1, 2, 3], ... 'B': [400, 500, 600]}) >>> new_df = pd.DataFrame({'B': [4, 5, 6], ... 'C': [7, 8, 9]}) >>> df.update(new_df) >>> df A B 0 1 4 1 2 5 2 3 6 The DataFrame's length does not increase as a result of the update, only values at matching index/column labels are updated. >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 'B': ['x', 'y', 'z']}) >>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']}) >>> df.update(new_df) >>> df A B 0 a d 1 b e 2 c f For Series, it's name attribute must be set. >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 'B': ['x', 'y', 'z']}) >>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2]) >>> df.update(new_column) >>> df A B 0 a d 1 b y 2 c e >>> df = pd.DataFrame({'A': ['a', 'b', 'c'], ... 'B': ['x', 'y', 'z']}) >>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2]) >>> df.update(new_df) >>> df A B 0 a x 1 b d 2 c e If `other` contains NaNs the corresponding values are not updated in the original dataframe. >>> df = pd.DataFrame({'A': [1, 2, 3], ... 'B': [400, 500, 600]}) >>> new_df = pd.DataFrame({'B': [4, np.nan, 6]}) >>> df.update(new_df) >>> df A B 0 1 4.0 1 2 500.0 2 3 6.0 """
import pandas.core.computation.expressions as expressions # TODO: Support other joins if join != 'left': # pragma: no cover raise NotImplementedError("Only left join is supported") if errors not in ['ignore', 'raise']: raise ValueError("The parameter errors must be either " "'ignore' or 'raise'") if not isinstance(other, DataFrame): other = DataFrame(other) other = other.reindex_like(self) for col in self.columns: this = self[col]._values that = other[col]._values if filter_func is not None: with np.errstate(all='ignore'): mask = ~filter_func(this) | isna(that) else: if errors == 'raise': mask_this = notna(that) mask_that = notna(this) if any(mask_this & mask_that): raise ValueError("Data overlaps.") if overwrite: mask = isna(that) else: mask = notna(this) # don't overwrite columns unecessarily if mask.all(): continue self[col] = expressions.where(mask, this, that)
<SYSTEM_TASK:> Apply a function along an axis of the DataFrame. <END_TASK> <USER_TASK:> Description: def apply(self, func, axis=0, broadcast=None, raw=False, reduce=None, result_type=None, args=(), **kwds): """ Apply a function along an axis of the DataFrame. Objects passed to the function are Series objects whose index is either the DataFrame's index (``axis=0``) or the DataFrame's columns (``axis=1``). By default (``result_type=None``), the final return type is inferred from the return type of the applied function. Otherwise, it depends on the `result_type` argument. Parameters ---------- func : function Function to apply to each column or row. axis : {0 or 'index', 1 or 'columns'}, default 0 Axis along which the function is applied: * 0 or 'index': apply function to each column. * 1 or 'columns': apply function to each row. broadcast : bool, optional Only relevant for aggregation functions: * ``False`` or ``None`` : returns a Series whose length is the length of the index or the number of columns (based on the `axis` parameter) * ``True`` : results will be broadcast to the original shape of the frame, the original index and columns will be retained. .. deprecated:: 0.23.0 This argument will be removed in a future version, replaced by result_type='broadcast'. raw : bool, default False * ``False`` : passes each row or column as a Series to the function. * ``True`` : the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance. reduce : bool or None, default None Try to apply reduction procedures. If the DataFrame is empty, `apply` will use `reduce` to determine whether the result should be a Series or a DataFrame. If ``reduce=None`` (the default), `apply`'s return value will be guessed by calling `func` on an empty Series (note: while guessing, exceptions raised by `func` will be ignored). If ``reduce=True`` a Series will always be returned, and if ``reduce=False`` a DataFrame will always be returned. .. deprecated:: 0.23.0 This argument will be removed in a future version, replaced by ``result_type='reduce'``. result_type : {'expand', 'reduce', 'broadcast', None}, default None These only act when ``axis=1`` (columns): * 'expand' : list-like results will be turned into columns. * 'reduce' : returns a Series if possible rather than expanding list-like results. This is the opposite of 'expand'. * 'broadcast' : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained. The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns. .. versionadded:: 0.23.0 args : tuple Positional arguments to pass to `func` in addition to the array/series. **kwds Additional keyword arguments to pass as keywords arguments to `func`. Returns ------- Series or DataFrame Result of applying ``func`` along the given axis of the DataFrame. See Also -------- DataFrame.applymap: For elementwise operations. DataFrame.aggregate: Only perform aggregating type operations. DataFrame.transform: Only perform transforming type operations. Notes ----- In the current implementation apply calls `func` twice on the first column/row to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if `func` has side-effects, as they will take effect twice for the first column/row. 
Examples -------- >>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B']) >>> df A B 0 4 9 1 4 9 2 4 9 Using a numpy universal function (in this case the same as ``np.sqrt(df)``): >>> df.apply(np.sqrt) A B 0 2.0 3.0 1 2.0 3.0 2 2.0 3.0 Using a reducing function on either axis >>> df.apply(np.sum, axis=0) A 12 B 27 dtype: int64 >>> df.apply(np.sum, axis=1) 0 13 1 13 2 13 dtype: int64 Retuning a list-like will result in a Series >>> df.apply(lambda x: [1, 2], axis=1) 0 [1, 2] 1 [1, 2] 2 [1, 2] dtype: object Passing result_type='expand' will expand list-like results to columns of a Dataframe >>> df.apply(lambda x: [1, 2], axis=1, result_type='expand') 0 1 0 1 2 1 1 2 2 1 2 Returning a Series inside the function is similar to passing ``result_type='expand'``. The resulting column names will be the Series index. >>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1) foo bar 0 1 2 1 1 2 2 1 2 Passing ``result_type='broadcast'`` will ensure the same shape result, whether list-like or scalar is returned by the function, and broadcast it along the axis. The resulting column names will be the originals. >>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast') A B 0 1 2 1 1 2 2 1 2 """
from pandas.core.apply import frame_apply
op = frame_apply(self,
                 func=func,
                 axis=axis,
                 broadcast=broadcast,
                 raw=raw,
                 reduce=reduce,
                 result_type=result_type,
                 args=args,
                 kwds=kwds)
return op.get_result()
<SYSTEM_TASK:> Apply a function to a Dataframe elementwise. <END_TASK> <USER_TASK:> Description: def applymap(self, func): """ Apply a function to a Dataframe elementwise. This method applies a function that accepts and returns a scalar to every element of a DataFrame. Parameters ---------- func : callable Python function, returns a single value from a single value. Returns ------- DataFrame Transformed DataFrame. See Also -------- DataFrame.apply : Apply a function along input axis of DataFrame. Notes ----- In the current implementation applymap calls `func` twice on the first column/row to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if `func` has side-effects, as they will take effect twice for the first column/row. Examples -------- >>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]]) >>> df 0 1 0 1.000 2.120 1 3.356 4.567 >>> df.applymap(lambda x: len(str(x))) 0 1 0 3 4 1 5 5 Note that a vectorized version of `func` often exists, which will be much faster. You could square each number elementwise. >>> df.applymap(lambda x: x**2) 0 1 0 1.000000 4.494400 1 11.262736 20.857489 But it's better to avoid applymap in that case. >>> df ** 2 0 1 0 1.000000 4.494400 1 11.262736 20.857489 """
# if we have a dtype == 'M8[ns]', provide boxed values
def infer(x):
    if x.empty:
        return lib.map_infer(x, func)
    return lib.map_infer(x.astype(object).values, func)

return self.apply(infer)
<SYSTEM_TASK:> Append rows of `other` to the end of caller, returning a new object. <END_TASK> <USER_TASK:> Description: def append(self, other, ignore_index=False, verify_integrity=False, sort=None): """ Append rows of `other` to the end of caller, returning a new object. Columns in `other` that are not in the caller are added as new columns. Parameters ---------- other : DataFrame or Series/dict-like object, or list of these The data to append. ignore_index : boolean, default False If True, do not use the index labels. verify_integrity : boolean, default False If True, raise ValueError on creating index with duplicates. sort : boolean, default None Sort columns if the columns of `self` and `other` are not aligned. The default sorting is deprecated and will change to not-sorting in a future version of pandas. Explicitly pass ``sort=True`` to silence the warning and sort. Explicitly pass ``sort=False`` to silence the warning and not sort. .. versionadded:: 0.23.0 Returns ------- DataFrame See Also -------- concat : General function to concatenate DataFrame, Series or Panel objects. Notes ----- If a list of dict/series is passed and the keys are all contained in the DataFrame's index, the order of the columns in the resulting DataFrame will be unchanged. Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once. Examples -------- >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) >>> df A B 0 1 2 1 3 4 >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) >>> df.append(df2) A B 0 1 2 1 3 4 0 5 6 1 7 8 With `ignore_index` set to True: >>> df.append(df2, ignore_index=True) A B 0 1 2 1 3 4 2 5 6 3 7 8 The following, while not recommended methods for generating DataFrames, show two ways to generate a DataFrame from multiple data sources. Less efficient: >>> df = pd.DataFrame(columns=['A']) >>> for i in range(5): ... df = df.append({'A': i}, ignore_index=True) >>> df A 0 0 1 1 2 2 3 3 4 4 More efficient: >>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)], ... ignore_index=True) A 0 0 1 1 2 2 3 3 4 4 """
if isinstance(other, (Series, dict)): if isinstance(other, dict): other = Series(other) if other.name is None and not ignore_index: raise TypeError('Can only append a Series if ignore_index=True' ' or if the Series has a name') if other.name is None: index = None else: # other must have the same index name as self, otherwise # index name will be reset index = Index([other.name], name=self.index.name) idx_diff = other.index.difference(self.columns) try: combined_columns = self.columns.append(idx_diff) except TypeError: combined_columns = self.columns.astype(object).append(idx_diff) other = other.reindex(combined_columns, copy=False) other = DataFrame(other.values.reshape((1, len(other))), index=index, columns=combined_columns) other = other._convert(datetime=True, timedelta=True) if not self.columns.equals(combined_columns): self = self.reindex(columns=combined_columns) elif isinstance(other, list) and not isinstance(other[0], DataFrame): other = DataFrame(other) if (self.columns.get_indexer(other.columns) >= 0).all(): other = other.reindex(columns=self.columns) from pandas.core.reshape.concat import concat if isinstance(other, (list, tuple)): to_concat = [self] + other else: to_concat = [self, other] return concat(to_concat, ignore_index=ignore_index, verify_integrity=verify_integrity, sort=sort)
<SYSTEM_TASK:> Join columns of another DataFrame. <END_TASK> <USER_TASK:> Description: def join(self, other, on=None, how='left', lsuffix='', rsuffix='', sort=False): """ Join columns of another DataFrame. Join columns with `other` DataFrame either on index or on a key column. Efficiently join multiple DataFrame objects by index at once by passing a list. Parameters ---------- other : DataFrame, Series, or list of DataFrame Index should be similar to one of the columns in this one. If a Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame. on : str, list of str, or array-like, optional Column or index level name(s) in the caller to join on the index in `other`, otherwise joins index-on-index. If multiple values given, the `other` DataFrame must have a MultiIndex. Can pass an array as the join key if it is not already contained in the calling DataFrame. Like an Excel VLOOKUP operation. how : {'left', 'right', 'outer', 'inner'}, default 'left' How to handle the operation of the two objects. * left: use calling frame's index (or column if on is specified) * right: use `other`'s index. * outer: form union of calling frame's index (or column if on is specified) with `other`'s index, and sort it. lexicographically. * inner: form intersection of calling frame's index (or column if on is specified) with `other`'s index, preserving the order of the calling's one. lsuffix : str, default '' Suffix to use from left frame's overlapping columns. rsuffix : str, default '' Suffix to use from right frame's overlapping columns. sort : bool, default False Order result DataFrame lexicographically by the join key. If False, the order of the join key depends on the join type (how keyword). Returns ------- DataFrame A dataframe containing columns from both the caller and `other`. See Also -------- DataFrame.merge : For column(s)-on-columns(s) operations. Notes ----- Parameters `on`, `lsuffix`, and `rsuffix` are not supported when passing a list of `DataFrame` objects. Support for specifying index levels as the `on` parameter was added in version 0.23.0. Examples -------- >>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], ... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']}) >>> df key A 0 K0 A0 1 K1 A1 2 K2 A2 3 K3 A3 4 K4 A4 5 K5 A5 >>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'], ... 'B': ['B0', 'B1', 'B2']}) >>> other key B 0 K0 B0 1 K1 B1 2 K2 B2 Join DataFrames using their indexes. >>> df.join(other, lsuffix='_caller', rsuffix='_other') key_caller A key_other B 0 K0 A0 K0 B0 1 K1 A1 K1 B1 2 K2 A2 K2 B2 3 K3 A3 NaN NaN 4 K4 A4 NaN NaN 5 K5 A5 NaN NaN If we want to join using the key columns, we need to set key to be the index in both `df` and `other`. The joined DataFrame will have key as its index. >>> df.set_index('key').join(other.set_index('key')) A B key K0 A0 B0 K1 A1 B1 K2 A2 B2 K3 A3 NaN K4 A4 NaN K5 A5 NaN Another option to join using the key columns is to use the `on` parameter. DataFrame.join always uses `other`'s index but we can use any column in `df`. This method preserves the original DataFrame's index in the result. >>> df.join(other.set_index('key'), on='key') key A B 0 K0 A0 B0 1 K1 A1 B1 2 K2 A2 B2 3 K3 A3 NaN 4 K4 A4 NaN 5 K5 A5 NaN """
# For SparseDataFrame's benefit return self._join_compat(other, on=on, how=how, lsuffix=lsuffix, rsuffix=rsuffix, sort=sort)
<SYSTEM_TASK:> Round a DataFrame to a variable number of decimal places. <END_TASK> <USER_TASK:> Description: def round(self, decimals=0, *args, **kwargs): """ Round a DataFrame to a variable number of decimal places.

        Parameters
        ----------
        decimals : int, dict, Series
            Number of decimal places to round each column to. If an int is
            given, round each column to the same number of places.
            Otherwise dict and Series round to variable numbers of places.
            Column names should be in the keys if `decimals` is a
            dict-like, or in the index if `decimals` is a Series. Any
            columns not included in `decimals` will be left as is. Elements
            of `decimals` which are not columns of the input will be
            ignored.
        *args
            Additional keywords have no effect but might be accepted for
            compatibility with numpy.
        **kwargs
            Additional keywords have no effect but might be accepted for
            compatibility with numpy.

        Returns
        -------
        DataFrame
            A DataFrame with the affected columns rounded to the specified
            number of decimal places.

        See Also
        --------
        numpy.around : Round a numpy array to the given number of decimals.
        Series.round : Round a Series to the given number of decimals.

        Examples
        --------
        >>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)],
        ...                   columns=['dogs', 'cats'])
        >>> df
            dogs  cats
        0   0.21  0.32
        1   0.01  0.67
        2   0.66  0.03
        3   0.21  0.18

        By providing an integer each column is rounded to the same number
        of decimal places

        >>> df.round(1)
            dogs  cats
        0    0.2   0.3
        1    0.0   0.7
        2    0.7   0.0
        3    0.2   0.2

        With a dict, the number of places for specific columns can be
        specified with the column names as key and the number of decimal
        places as value

        >>> df.round({'dogs': 1, 'cats': 0})
            dogs  cats
        0    0.2   0.0
        1    0.0   1.0
        2    0.7   0.0
        3    0.2   0.0

        Using a Series, the number of places for specific columns can be
        specified with the column names as index and the number of decimal
        places as value

        >>> decimals = pd.Series([0, 1], index=['cats', 'dogs'])
        >>> df.round(decimals)
            dogs  cats
        0    0.2   0.0
        1    0.0   1.0
        2    0.7   0.0
        3    0.2   0.0
        """
from pandas.core.reshape.concat import concat def _dict_round(df, decimals): for col, vals in df.iteritems(): try: yield _series_round(vals, decimals[col]) except KeyError: yield vals def _series_round(s, decimals): if is_integer_dtype(s) or is_float_dtype(s): return s.round(decimals) return s nv.validate_round(args, kwargs) if isinstance(decimals, (dict, Series)): if isinstance(decimals, Series): if not decimals.index.is_unique: raise ValueError("Index of decimals must be unique") new_cols = [col for col in _dict_round(self, decimals)] elif is_integer(decimals): # Dispatch to Series.round new_cols = [_series_round(v, decimals) for _, v in self.iteritems()] else: raise TypeError("decimals must be an integer, a dict-like or a " "Series") if len(new_cols) > 0: return self._constructor(concat(new_cols, axis=1), index=self.index, columns=self.columns) else: return self
<SYSTEM_TASK:> Compute pairwise correlation between rows or columns of DataFrame <END_TASK> <USER_TASK:> Description: def corrwith(self, other, axis=0, drop=False, method='pearson'): """ Compute pairwise correlation between rows or columns of DataFrame with rows or columns of Series or DataFrame. DataFrames are first aligned along both axes before computing the correlations. Parameters ---------- other : DataFrame, Series Object with which to compute correlations. axis : {0 or 'index', 1 or 'columns'}, default 0 0 or 'index' to compute column-wise, 1 or 'columns' for row-wise. drop : bool, default False Drop missing indices from result. method : {'pearson', 'kendall', 'spearman'} or callable * pearson : standard correlation coefficient * kendall : Kendall Tau correlation coefficient * spearman : Spearman rank correlation * callable: callable with input two 1d ndarrays and returning a float .. versionadded:: 0.24.0 Returns ------- Series Pairwise correlations. See Also ------- DataFrame.corr """
axis = self._get_axis_number(axis) this = self._get_numeric_data() if isinstance(other, Series): return this.apply(lambda x: other.corr(x, method=method), axis=axis) other = other._get_numeric_data() left, right = this.align(other, join='inner', copy=False) if axis == 1: left = left.T right = right.T if method == 'pearson': # mask missing values left = left + right * 0 right = right + left * 0 # demeaned data ldem = left - left.mean() rdem = right - right.mean() num = (ldem * rdem).sum() dom = (left.count() - 1) * left.std() * right.std() correl = num / dom elif method in ['kendall', 'spearman'] or callable(method): def c(x): return nanops.nancorr(x[0], x[1], method=method) correl = Series(map(c, zip(left.values.T, right.values.T)), index=left.columns) else: raise ValueError("Invalid method {method} was passed, " "valid methods are: 'pearson', 'kendall', " "'spearman', or callable". format(method=method)) if not drop: # Find non-matching labels along the given axis # and append missing correlations (GH 22375) raxis = 1 if axis == 0 else 0 result_index = (this._get_axis(raxis). union(other._get_axis(raxis))) idx_diff = result_index.difference(correl.index) if len(idx_diff) > 0: correl = correl.append(Series([np.nan] * len(idx_diff), index=idx_diff)) return correl
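A brief usage sketch of the corrwith behaviour implemented above, using only the public DataFrame API; the frames and values are illustrative.

import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [4, 3, 2, 1]})
df2 = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [1, 2, 3, 4]})

# Column-wise Pearson correlation after aligning both frames on index/columns.
print(df1.corrwith(df2))

# Against a Series, every column of the frame is correlated with it.
print(df1.corrwith(df2['a']))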
<SYSTEM_TASK:> Count non-NA cells for each column or row. <END_TASK> <USER_TASK:> Description: def count(self, axis=0, level=None, numeric_only=False): """ Count non-NA cells for each column or row. The values `None`, `NaN`, `NaT`, and optionally `numpy.inf` (depending on `pandas.options.mode.use_inf_as_na`) are considered NA. Parameters ---------- axis : {0 or 'index', 1 or 'columns'}, default 0 If 0 or 'index' counts are generated for each column. If 1 or 'columns' counts are generated for each **row**. level : int or str, optional If the axis is a `MultiIndex` (hierarchical), count along a particular `level`, collapsing into a `DataFrame`. A `str` specifies the level name. numeric_only : bool, default False Include only `float`, `int` or `boolean` data. Returns ------- Series or DataFrame For each column/row the number of non-NA/null entries. If `level` is specified returns a `DataFrame`. See Also -------- Series.count: Number of non-NA elements in a Series. DataFrame.shape: Number of DataFrame rows and columns (including NA elements). DataFrame.isna: Boolean same-sized DataFrame showing places of NA elements. Examples -------- Constructing DataFrame from a dictionary: >>> df = pd.DataFrame({"Person": ... ["John", "Myla", "Lewis", "John", "Myla"], ... "Age": [24., np.nan, 21., 33, 26], ... "Single": [False, True, True, True, False]}) >>> df Person Age Single 0 John 24.0 False 1 Myla NaN True 2 Lewis 21.0 True 3 John 33.0 True 4 Myla 26.0 False Notice the uncounted NA values: >>> df.count() Person 5 Age 4 Single 5 dtype: int64 Counts for each **row**: >>> df.count(axis='columns') 0 3 1 2 2 3 3 3 4 3 dtype: int64 Counts for one level of a `MultiIndex`: >>> df.set_index(["Person", "Single"]).count(level="Person") Age Person John 2 Lewis 1 Myla 1 """
axis = self._get_axis_number(axis) if level is not None: return self._count_level(level, axis=axis, numeric_only=numeric_only) if numeric_only: frame = self._get_numeric_data() else: frame = self # GH #423 if len(frame._get_axis(axis)) == 0: result = Series(0, index=frame._get_agg_axis(axis)) else: if frame._is_mixed_type or frame._data.any_extension_types: # the or any_extension_types is really only hit for single- # column frames with an extension array result = notna(frame).sum(axis=axis) else: # GH13407 series_counts = notna(frame).sum(axis=axis) counts = series_counts.values result = Series(counts, index=frame._get_agg_axis(axis)) return result.astype('int64')
<SYSTEM_TASK:> Count distinct observations over requested axis. <END_TASK> <USER_TASK:> Description: def nunique(self, axis=0, dropna=True): """ Count distinct observations over requested axis. Return Series with number of distinct observations. Can ignore NaN values. .. versionadded:: 0.20.0 Parameters ---------- axis : {0 or 'index', 1 or 'columns'}, default 0 The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise. dropna : bool, default True Don't include NaN in the counts. Returns ------- Series See Also -------- Series.nunique: Method nunique for Series. DataFrame.count: Count non-NA cells for each column or row. Examples -------- >>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 1, 1]}) >>> df.nunique() A 3 B 1 dtype: int64 >>> df.nunique(axis=1) 0 1 1 2 2 2 dtype: int64 """
return self.apply(Series.nunique, axis=axis, dropna=dropna)
<SYSTEM_TASK:> Let's be explicit about this. <END_TASK> <USER_TASK:> Description: def _get_agg_axis(self, axis_num): """ Let's be explicit about this. """
if axis_num == 0: return self.columns elif axis_num == 1: return self.index else: raise ValueError('Axis must be 0 or 1 (got %r)' % axis_num)
<SYSTEM_TASK:> Return values at the given quantile over requested axis. <END_TASK> <USER_TASK:> Description: def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation='linear'): """ Return values at the given quantile over requested axis. Parameters ---------- q : float or array-like, default 0.5 (50% quantile) Value between 0 <= q <= 1, the quantile(s) to compute. axis : {0, 1, 'index', 'columns'} (default 0) Equals 0 or 'index' for row-wise, 1 or 'columns' for column-wise. numeric_only : bool, default True If False, the quantile of datetime and timedelta data will be computed as well. interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'} This optional parameter specifies the interpolation method to use, when the desired quantile lies between two data points `i` and `j`: * linear: `i + (j - i) * fraction`, where `fraction` is the fractional part of the index surrounded by `i` and `j`. * lower: `i`. * higher: `j`. * nearest: `i` or `j` whichever is nearest. * midpoint: (`i` + `j`) / 2. .. versionadded:: 0.18.0 Returns ------- Series or DataFrame If ``q`` is an array, a DataFrame will be returned where the index is ``q``, the columns are the columns of self, and the values are the quantiles. If ``q`` is a float, a Series will be returned where the index is the columns of self and the values are the quantiles. See Also -------- core.window.Rolling.quantile: Rolling quantile. numpy.percentile: Numpy function to compute the percentile. Examples -------- >>> df = pd.DataFrame(np.array([[1, 1], [2, 10], [3, 100], [4, 100]]), ... columns=['a', 'b']) >>> df.quantile(.1) a 1.3 b 3.7 Name: 0.1, dtype: float64 >>> df.quantile([.1, .5]) a b 0.1 1.3 3.7 0.5 2.5 55.0 Specifying `numeric_only=False` will also compute the quantile of datetime and timedelta data. >>> df = pd.DataFrame({'A': [1, 2], ... 'B': [pd.Timestamp('2010'), ... pd.Timestamp('2011')], ... 'C': [pd.Timedelta('1 days'), ... pd.Timedelta('2 days')]}) >>> df.quantile(0.5, numeric_only=False) A 1.5 B 2010-07-02 12:00:00 C 1 days 12:00:00 Name: 0.5, dtype: object """
self._check_percentile(q) data = self._get_numeric_data() if numeric_only else self axis = self._get_axis_number(axis) is_transposed = axis == 1 if is_transposed: data = data.T result = data._data.quantile(qs=q, axis=1, interpolation=interpolation, transposed=is_transposed) if result.ndim == 2: result = self._constructor(result) else: result = self._constructor_sliced(result, name=q) if is_transposed: result = result.T return result
<SYSTEM_TASK:> Whether each element in the DataFrame is contained in values. <END_TASK> <USER_TASK:> Description: def isin(self, values): """ Whether each element in the DataFrame is contained in values. Parameters ---------- values : iterable, Series, DataFrame or dict The result will only be true at a location if all the labels match. If `values` is a Series, that's the index. If `values` is a dict, the keys must be the column names, which must match. If `values` is a DataFrame, then both the index and column labels must match. Returns ------- DataFrame DataFrame of booleans showing whether each element in the DataFrame is contained in values. See Also -------- DataFrame.eq: Equality test for DataFrame. Series.isin: Equivalent method on Series. Series.str.contains: Test if pattern or regex is contained within a string of a Series or Index. Examples -------- >>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]}, ... index=['falcon', 'dog']) >>> df num_legs num_wings falcon 2 2 dog 4 0 When ``values`` is a list check whether every value in the DataFrame is present in the list (which animals have 0 or 2 legs or wings) >>> df.isin([0, 2]) num_legs num_wings falcon True True dog False True When ``values`` is a dict, we can pass values to check for each column separately: >>> df.isin({'num_wings': [0, 3]}) num_legs num_wings falcon False False dog False True When ``values`` is a Series or DataFrame the index and column must match. Note that 'falcon' does not match based on the number of legs in df2. >>> other = pd.DataFrame({'num_legs': [8, 2], 'num_wings': [0, 2]}, ... index=['spider', 'falcon']) >>> df.isin(other) num_legs num_wings falcon True True dog False False """
if isinstance(values, dict): from pandas.core.reshape.concat import concat values = collections.defaultdict(list, values) return concat((self.iloc[:, [i]].isin(values[col]) for i, col in enumerate(self.columns)), axis=1) elif isinstance(values, Series): if not values.index.is_unique: raise ValueError("cannot compute isin with " "a duplicate axis.") return self.eq(values.reindex_like(self), axis='index') elif isinstance(values, DataFrame): if not (values.columns.is_unique and values.index.is_unique): raise ValueError("cannot compute isin with " "a duplicate axis.") return self.eq(values.reindex_like(self)) else: if not is_list_like(values): raise TypeError("only list-like or dict-like objects are " "allowed to be passed to DataFrame.isin(), " "you passed a " "{0!r}".format(type(values).__name__)) return DataFrame( algorithms.isin(self.values.ravel(), values).reshape(self.shape), self.index, self.columns)
<SYSTEM_TASK:> Infer and return an integer array of the values. <END_TASK> <USER_TASK:> Description: def integer_array(values, dtype=None, copy=False): """ Infer and return an integer array of the values. Parameters ---------- values : 1D list-like dtype : dtype, optional dtype to coerce copy : boolean, default False Returns ------- IntegerArray Raises ------ TypeError if incompatible types """
values, mask = coerce_to_array(values, dtype=dtype, copy=copy) return IntegerArray(values, mask)
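A minimal sketch of how this constructor is normally reached: pd.array with an "Int64" dtype (available from pandas 0.24) routes through integer_array/coerce_to_array and keeps missing values in the boolean mask.

import pandas as pd

arr = pd.array([1, 2, None], dtype="Int64")
print(arr.dtype)   # Int64
print(arr)         # the None entry is preserved as NA via the mask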
<SYSTEM_TASK:> Safely cast the values to the dtype if they <END_TASK> <USER_TASK:> Description: def safe_cast(values, dtype, copy): """ Safely cast the values to the dtype if they are equivalent, meaning floats must be equivalent to the ints. """
try: return values.astype(dtype, casting='safe', copy=copy) except TypeError: casted = values.astype(dtype, copy=copy) if (casted == values).all(): return casted raise TypeError("cannot safely cast non-equivalent {} to {}".format( values.dtype, np.dtype(dtype)))
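A small illustration of the safe-cast rule above via the public constructor: floats that are exactly equivalent to integers are accepted, while non-equivalent floats raise.

import pandas as pd

print(pd.array([1.0, 2.0], dtype="Int64"))   # 1.0 and 2.0 cast safely to Int64

try:
    pd.array([1.5, 2.0], dtype="Int64")
except TypeError as err:
    print(err)   # cannot safely cast non-equivalent values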
<SYSTEM_TASK:> Coerce the input values array to numpy arrays with a mask <END_TASK> <USER_TASK:> Description: def coerce_to_array(values, dtype, mask=None, copy=False): """ Coerce the input values array to numpy arrays with a mask Parameters ---------- values : 1D list-like dtype : integer dtype mask : boolean 1D array, optional copy : boolean, default False if True, copy the input Returns ------- tuple of (values, mask) """
# if values is integer numpy array, preserve it's dtype if dtype is None and hasattr(values, 'dtype'): if is_integer_dtype(values.dtype): dtype = values.dtype if dtype is not None: if (isinstance(dtype, str) and (dtype.startswith("Int") or dtype.startswith("UInt"))): # Avoid DeprecationWarning from NumPy about np.dtype("Int64") # https://github.com/numpy/numpy/pull/7476 dtype = dtype.lower() if not issubclass(type(dtype), _IntegerDtype): try: dtype = _dtypes[str(np.dtype(dtype))] except KeyError: raise ValueError("invalid dtype specified {}".format(dtype)) if isinstance(values, IntegerArray): values, mask = values._data, values._mask if dtype is not None: values = values.astype(dtype.numpy_dtype, copy=False) if copy: values = values.copy() mask = mask.copy() return values, mask values = np.array(values, copy=copy) if is_object_dtype(values): inferred_type = lib.infer_dtype(values, skipna=True) if inferred_type == 'empty': values = np.empty(len(values)) values.fill(np.nan) elif inferred_type not in ['floating', 'integer', 'mixed-integer', 'mixed-integer-float']: raise TypeError("{} cannot be converted to an IntegerDtype".format( values.dtype)) elif not (is_integer_dtype(values) or is_float_dtype(values)): raise TypeError("{} cannot be converted to an IntegerDtype".format( values.dtype)) if mask is None: mask = isna(values) else: assert len(mask) == len(values) if not values.ndim == 1: raise TypeError("values must be a 1D list-like") if not mask.ndim == 1: raise TypeError("mask must be a 1D list-like") # infer dtype if needed if dtype is None: dtype = np.dtype('int64') else: dtype = dtype.type # if we are float, let's make sure that we can # safely cast # we copy as need to coerce here if mask.any(): values = values.copy() values[mask] = 1 values = safe_cast(values, dtype, copy=False) else: values = safe_cast(values, dtype, copy=False) return values, mask
<SYSTEM_TASK:> Construction from a string, raise a TypeError if not <END_TASK> <USER_TASK:> Description: def construct_from_string(cls, string): """ Construction from a string, raise a TypeError if not possible """
if string == cls.name: return cls() raise TypeError("Cannot construct a '{}' from " "'{}'".format(cls, string))
<SYSTEM_TASK:> coerce to an ndarray of object dtype <END_TASK> <USER_TASK:> Description: def _coerce_to_ndarray(self): """ coerce to an ndarray of object dtype """
# TODO(jreback) make this better data = self._data.astype(object) data[self._mask] = self._na_value return data
<SYSTEM_TASK:> Cast to a NumPy array or IntegerArray with 'dtype'. <END_TASK> <USER_TASK:> Description: def astype(self, dtype, copy=True): """ Cast to a NumPy array or IntegerArray with 'dtype'.

        Parameters
        ----------
        dtype : str or dtype
            Typecode or data-type to which the array is cast.
        copy : bool, default True
            Whether to copy the data, even if not necessary. If False,
            a copy is made only if the old dtype does not match the
            new dtype.

        Returns
        -------
        array : ndarray or IntegerArray
            NumPy ndarray or IntegerArray with 'dtype' for its dtype.

        Raises
        ------
        TypeError
            if incompatible type with an IntegerDtype, equivalent of
            same_kind casting
        """
# if we are astyping to an existing IntegerDtype we can fastpath if isinstance(dtype, _IntegerDtype): result = self._data.astype(dtype.numpy_dtype, copy=False) return type(self)(result, mask=self._mask, copy=False) # coerce data = self._coerce_to_ndarray() return astype_nansafe(data, dtype, copy=None)
<SYSTEM_TASK:> Returns a Series containing counts of each category. <END_TASK> <USER_TASK:> Description: def value_counts(self, dropna=True): """ Returns a Series containing counts of each category. Every category will have an entry, even those with a count of 0. Parameters ---------- dropna : boolean, default True Don't include counts of NaN. Returns ------- counts : Series See Also -------- Series.value_counts """
from pandas import Index, Series # compute counts on the data with no nans data = self._data[~self._mask] value_counts = Index(data).value_counts() array = value_counts.values # TODO(extension) # if we have allow Index to hold an ExtensionArray # this is easier index = value_counts.index.astype(object) # if we want nans, count the mask if not dropna: # TODO(extension) # appending to an Index *always* infers # w/o passing the dtype array = np.append(array, [self._mask.sum()]) index = Index(np.concatenate( [index.values, np.array([np.nan], dtype=object)]), dtype=object) return Series(array, index=index)
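A hedged illustration through the public Series API, which dispatches to the extension array's value_counts shown above.

import pandas as pd

s = pd.Series([1, 1, 2, None], dtype="Int64")
print(s.value_counts())              # NA excluded by default
print(s.value_counts(dropna=False))  # the masked entry is counted as NaN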
<SYSTEM_TASK:> Return values for sorting. <END_TASK> <USER_TASK:> Description: def _values_for_argsort(self) -> np.ndarray: """Return values for sorting. Returns ------- ndarray The transformed values should maintain the ordering between values within the array. See Also -------- ExtensionArray.argsort """
data = self._data.copy() data[self._mask] = data.min() - 1 return data
<SYSTEM_TASK:> return the length of a single non-tuple indexer which could be a slice <END_TASK> <USER_TASK:> Description: def length_of_indexer(indexer, target=None): """ return the length of a single non-tuple indexer which could be a slice """
if target is not None and isinstance(indexer, slice): target_len = len(target) start = indexer.start stop = indexer.stop step = indexer.step if start is None: start = 0 elif start < 0: start += target_len if stop is None or stop > target_len: stop = target_len elif stop < 0: stop += target_len if step is None: step = 1 elif step < 0: step = -step return (stop - start + step - 1) // step elif isinstance(indexer, (ABCSeries, Index, np.ndarray, list)): return len(indexer) elif not is_list_like_indexer(indexer): return 1 raise AssertionError("cannot find the length of the indexer")
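A quick sanity check of the slice-length arithmetic above against NumPy's own slicing; the numbers are arbitrary.

import numpy as np

target = np.arange(10)
start, stop, step = 2, 9, 3
assert len(target[start:stop:step]) == (stop - start + step - 1) // step  # both equal 3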
<SYSTEM_TASK:> if we are index sliceable, then return my slicer, otherwise return None <END_TASK> <USER_TASK:> Description: def convert_to_index_sliceable(obj, key): """ if we are index sliceable, then return my slicer, otherwise return None """
idx = obj.index if isinstance(key, slice): return idx._convert_slice_indexer(key, kind='getitem') elif isinstance(key, str): # we are an actual column if obj._data.items.contains(key): return None # We might have a datetimelike string that we can translate to a # slice here via partial string indexing if idx.is_all_dates: try: return idx._get_string_slice(key) except (KeyError, ValueError, NotImplementedError): return None return None
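A hedged sketch of the behaviour this helper enables: slicing a frame's rows directly through __getitem__ when the index is date-like.

import pandas as pd

df = pd.DataFrame({'x': range(4)},
                  index=pd.date_range('2019-01-01', periods=4, freq='D'))
print(df['2019-01-02':'2019-01-03'])   # the string slice is resolved against the index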
<SYSTEM_TASK:> Validate that value and indexer are the same length. <END_TASK> <USER_TASK:> Description: def check_setitem_lengths(indexer, value, values): """ Validate that value and indexer are the same length.

    A special case is allowed when the indexer is a boolean array and the
    number of true values equals the length of ``value``. In this case, no
    exception is raised.

    Parameters
    ----------
    indexer : sequence
        The key for the setitem
    value : array-like
        The value for the setitem
    values : array-like
        The values being set into

    Returns
    -------
    None

    Raises
    ------
    ValueError
        When the indexer is an ndarray or list and the lengths don't match.
    """
# boolean with truth values == len of the value is ok too if isinstance(indexer, (np.ndarray, list)): if is_list_like(value) and len(indexer) != len(value): if not (isinstance(indexer, np.ndarray) and indexer.dtype == np.bool_ and len(indexer[indexer]) == len(value)): raise ValueError("cannot set using a list-like indexer " "with a different length than the value") # slice elif isinstance(indexer, slice): if is_list_like(value) and len(values): if len(value) != length_of_indexer(indexer, values): raise ValueError("cannot set using a slice indexer with a " "different length than the value")
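A short example of the boolean special case described above, using public Series setitem: the indexer has length 4 but only two True entries, so a length-2 value is accepted.

import pandas as pd

s = pd.Series([1, 2, 3, 4])
mask = (s > 2).values
s[mask] = [30, 40]
print(s.tolist())   # [1, 2, 30, 40]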
<SYSTEM_TASK:> reverse convert a missing indexer, which is a dict <END_TASK> <USER_TASK:> Description: def convert_missing_indexer(indexer): """ reverse convert a missing indexer, which is a dict return the scalar indexer and a boolean indicating if we converted """
if isinstance(indexer, dict): # a missing key (but not a tuple indexer) indexer = indexer['key'] if isinstance(indexer, bool): raise KeyError("cannot use a single bool to index into setitem") return indexer, True return indexer, False
<SYSTEM_TASK:> create a filtered indexer that doesn't have any missing indexers <END_TASK> <USER_TASK:> Description: def convert_from_missing_indexer_tuple(indexer, axes): """ create a filtered indexer that doesn't have any missing indexers """
def get_indexer(_i, _idx): return (axes[_i].get_loc(_idx['key']) if isinstance(_idx, dict) else _idx) return tuple(get_indexer(_i, _idx) for _i, _idx in enumerate(indexer))
<SYSTEM_TASK:> Attempt to convert indices into valid, positive indices. <END_TASK> <USER_TASK:> Description: def maybe_convert_indices(indices, n): """ Attempt to convert indices into valid, positive indices. If we have negative indices, translate to positive here. If we have indices that are out-of-bounds, raise an IndexError. Parameters ---------- indices : array-like The array of indices that we are to convert. n : int The number of elements in the array that we are indexing. Returns ------- valid_indices : array-like An array-like of positive indices that correspond to the ones that were passed in initially to this function. Raises ------ IndexError : one of the converted indices either exceeded the number of elements (specified by `n`) OR was still negative. """
if isinstance(indices, list): indices = np.array(indices) if len(indices) == 0: # If list is empty, np.array will return float and cause indexing # errors. return np.empty(0, dtype=np.intp) mask = indices < 0 if mask.any(): indices = indices.copy() indices[mask] += n mask = (indices >= n) | (indices < 0) if mask.any(): raise IndexError("indices are out-of-bounds") return indices
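The same translation written out by hand with plain NumPy, to make the negative-index handling concrete.

import numpy as np

indices = np.array([0, -1, -2])
n = 5
converted = indices.copy()
converted[converted < 0] += n
print(converted)   # [0 4 3]; anything still outside [0, n) would raise IndexError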
<SYSTEM_TASK:> Perform bounds-checking for an indexer. <END_TASK> <USER_TASK:> Description: def validate_indices(indices, n): """ Perform bounds-checking for an indexer. -1 is allowed for indicating missing values. Parameters ---------- indices : ndarray n : int length of the array being indexed Raises ------ ValueError Examples -------- >>> validate_indices([1, 2], 3) # OK >>> validate_indices([1, -2], 3) ValueError >>> validate_indices([1, 2, 3], 3) IndexError >>> validate_indices([-1, -1], 0) # OK >>> validate_indices([0, 1], 0) IndexError """
if len(indices): min_idx = indices.min() if min_idx < -1: msg = ("'indices' contains values less than allowed ({} < {})" .format(min_idx, -1)) raise ValueError(msg) max_idx = indices.max() if max_idx >= n: raise IndexError("indices are out-of-bounds")
<SYSTEM_TASK:> We likely want to take the cross-product <END_TASK> <USER_TASK:> Description: def maybe_convert_ix(*args): """ We likely want to take the cross-product """
ixify = True for arg in args: if not isinstance(arg, (np.ndarray, list, ABCSeries, Index)): ixify = False if ixify: return np.ix_(*args) else: return args
<SYSTEM_TASK:> Ensure that a slice doesn't reduce to a Series or Scalar. <END_TASK> <USER_TASK:> Description: def _non_reducing_slice(slice_): """ Ensure that a slice doesn't reduce to a Series or Scalar.

    Any user-passed `subset` should have this called on it to make sure
    we're always working with DataFrames.
    """
# default to column slice, like DataFrame # ['A', 'B'] -> IndexSlices[:, ['A', 'B']] kinds = (ABCSeries, np.ndarray, Index, list, str) if isinstance(slice_, kinds): slice_ = IndexSlice[:, slice_] def pred(part): # true when slice does *not* reduce, False when part is a tuple, # i.e. MultiIndex slice return ((isinstance(part, slice) or is_list_like(part)) and not isinstance(part, tuple)) if not is_list_like(slice_): if not isinstance(slice_, slice): # a 1-d slice, like df.loc[1] slice_ = [[slice_]] else: # slice(a, b, c) slice_ = [slice_] # to tuplize later else: slice_ = [part if pred(part) else [part] for part in slice_] return tuple(slice_)
<SYSTEM_TASK:> want nice defaults for background_gradient that don't break <END_TASK> <USER_TASK:> Description: def _maybe_numeric_slice(df, slice_, include_bool=False): """ want nice defaults for background_gradient that don't break
    with non-numeric data. But if slice_ is passed, go with that.
    """
if slice_ is None: dtypes = [np.number] if include_bool: dtypes.append(bool) slice_ = IndexSlice[:, df.select_dtypes(include=dtypes).columns] return slice_
<SYSTEM_TASK:> check the key for valid keys across my indexer <END_TASK> <USER_TASK:> Description: def _has_valid_tuple(self, key): """ check the key for valid keys across my indexer """
for i, k in enumerate(key): if i >= self.obj.ndim: raise IndexingError('Too many indexers') try: self._validate_key(k, i) except ValueError: raise ValueError("Location based indexing can only have " "[{types}] types" .format(types=self._valid_types))
<SYSTEM_TASK:> validate that a positional indexer cannot enlarge its target <END_TASK> <USER_TASK:> Description: def _has_valid_positional_setitem_indexer(self, indexer): """ validate that a positional indexer cannot enlarge its target
        will raise if needed, does not modify the indexer externally
        """
if isinstance(indexer, dict): raise IndexError("{0} cannot enlarge its target object" .format(self.name)) else: if not isinstance(indexer, tuple): indexer = self._tuplify(indexer) for ax, i in zip(self.obj.axes, indexer): if isinstance(i, slice): # should check the stop slice? pass elif is_list_like_indexer(i): # should check the elements? pass elif is_integer(i): if i >= len(ax): raise IndexError("{name} cannot enlarge its target " "object".format(name=self.name)) elif isinstance(i, dict): raise IndexError("{name} cannot enlarge its target object" .format(name=self.name)) return True
<SYSTEM_TASK:> Check whether there is the possibility to use ``_multi_take``. <END_TASK> <USER_TASK:> Description: def _multi_take_opportunity(self, tup): """ Check whether there is the possibility to use ``_multi_take``. Currently the limit is that all axes being indexed must be indexed with list-likes. Parameters ---------- tup : tuple Tuple of indexers, one per axis Returns ------- boolean: Whether the current indexing can be passed through _multi_take """
if not all(is_list_like_indexer(x) for x in tup): return False # just too complicated if any(com.is_bool_indexer(x) for x in tup): return False return True
<SYSTEM_TASK:> Create the indexers for the passed tuple of keys, and execute the take <END_TASK> <USER_TASK:> Description: def _multi_take(self, tup): """ Create the indexers for the passed tuple of keys, and execute the take operation. This allows the take operation to be executed all at once - rather than once for each dimension - improving efficiency. Parameters ---------- tup : tuple Tuple of indexers, one per axis Returns ------- values: same type as the object being indexed """
# GH 836 o = self.obj d = {axis: self._get_listlike_indexer(key, axis) for (key, axis) in zip(tup, o._AXIS_ORDERS)} return o._reindex_with_indexers(d, copy=True, allow_dups=True)
<SYSTEM_TASK:> Transform a list-like of keys into a new index and an indexer. <END_TASK> <USER_TASK:> Description: def _get_listlike_indexer(self, key, axis, raise_missing=False): """ Transform a list-like of keys into a new index and an indexer. Parameters ---------- key : list-like Target labels axis: int Dimension on which the indexing is being made raise_missing: bool Whether to raise a KeyError if some labels are not found. Will be removed in the future, and then this method will always behave as if raise_missing=True. Raises ------ KeyError If at least one key was requested but none was found, and raise_missing=True. Returns ------- keyarr: Index New index (coinciding with 'key' if the axis is unique) values : array-like An indexer for the return object; -1 denotes keys not found """
o = self.obj ax = o._get_axis(axis) # Have the index compute an indexer or return None # if it cannot handle: indexer, keyarr = ax._convert_listlike_indexer(key, kind=self.name) # We only act on all found values: if indexer is not None and (indexer != -1).all(): self._validate_read_indexer(key, indexer, axis, raise_missing=raise_missing) return ax[indexer], indexer if ax.is_unique: # If we are trying to get actual keys from empty Series, we # patiently wait for a KeyError later on - otherwise, convert if len(ax) or not len(key): key = self._convert_for_reindex(key, axis) indexer = ax.get_indexer_for(key) keyarr = ax.reindex(keyarr)[0] else: keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr) self._validate_read_indexer(keyarr, indexer, o._get_axis_number(axis), raise_missing=raise_missing) return keyarr, indexer
<SYSTEM_TASK:> Convert indexing key into something we can use to do actual fancy <END_TASK> <USER_TASK:> Description: def _convert_to_indexer(self, obj, axis=None, is_setter=False, raise_missing=False): """ Convert indexing key into something we can use to do actual fancy indexing on an ndarray Examples ix[:5] -> slice(0, 5) ix[[1,2,3]] -> [1,2,3] ix[['foo', 'bar', 'baz']] -> [i, j, k] (indices of foo, bar, baz) Going by Zen of Python? 'In the face of ambiguity, refuse the temptation to guess.' raise AmbiguousIndexError with integer labels? - No, prefer label-based indexing """
if axis is None: axis = self.axis or 0 labels = self.obj._get_axis(axis) if isinstance(obj, slice): return self._convert_slice_indexer(obj, axis) # try to find out correct indexer, if not type correct raise try: obj = self._convert_scalar_indexer(obj, axis) except TypeError: # but we will allow setting if is_setter: pass # see if we are positional in nature is_int_index = labels.is_integer() is_int_positional = is_integer(obj) and not is_int_index # if we are a label return me try: return labels.get_loc(obj) except LookupError: if isinstance(obj, tuple) and isinstance(labels, MultiIndex): if is_setter and len(obj) == labels.nlevels: return {'key': obj} raise except TypeError: pass except (ValueError): if not is_int_positional: raise # a positional if is_int_positional: # if we are setting and its not a valid location # its an insert which fails by definition if is_setter: # always valid if self.name == 'loc': return {'key': obj} # a positional if (obj >= self.obj.shape[axis] and not isinstance(labels, MultiIndex)): raise ValueError("cannot set by positional indexing with " "enlargement") return obj if is_nested_tuple(obj, labels): return labels.get_locs(obj) elif is_list_like_indexer(obj): if com.is_bool_indexer(obj): obj = check_bool_indexer(labels, obj) inds, = obj.nonzero() return inds else: # When setting, missing keys are not allowed, even with .loc: kwargs = {'raise_missing': True if is_setter else raise_missing} return self._get_listlike_indexer(obj, axis, **kwargs)[1] else: try: return labels.get_loc(obj) except LookupError: # allow a not found key only if we are a setter if not is_list_like_indexer(obj) and is_setter: return {'key': obj} raise
<SYSTEM_TASK:> this is pretty simple as we just have to deal with labels <END_TASK> <USER_TASK:> Description: def _get_slice_axis(self, slice_obj, axis=None): """ this is pretty simple as we just have to deal with labels """
if axis is None: axis = self.axis or 0 obj = self.obj if not need_slice(slice_obj): return obj.copy(deep=False) labels = obj._get_axis(axis) indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step, kind=self.name) if isinstance(indexer, slice): return self._slice(indexer, axis=axis, kind='iloc') else: return self.obj._take(indexer, axis=axis)
<SYSTEM_TASK:> Check that 'key' is a valid position in the desired axis. <END_TASK> <USER_TASK:> Description: def _validate_integer(self, key, axis): """ Check that 'key' is a valid position in the desired axis. Parameters ---------- key : int Requested position axis : int Desired axis Returns ------- None Raises ------ IndexError If 'key' is not a valid position in axis 'axis' """
len_axis = len(self.obj._get_axis(axis)) if key >= len_axis or key < -len_axis: raise IndexError("single positional indexer is out-of-bounds")
<SYSTEM_TASK:> Return Series values by list or array of integers <END_TASK> <USER_TASK:> Description: def _get_list_axis(self, key, axis=None): """ Return Series values by list or array of integers Parameters ---------- key : list-like positional indexer axis : int (can only be zero) Returns ------- Series object """
if axis is None: axis = self.axis or 0 try: return self.obj._take(key, axis=axis) except IndexError: # re-raise with different error message raise IndexError("positional indexers are out-of-bounds")
<SYSTEM_TASK:> much simpler as we only have to deal with our valid types <END_TASK> <USER_TASK:> Description: def _convert_to_indexer(self, obj, axis=None, is_setter=False): """ much simpler as we only have to deal with our valid types """
if axis is None:
    axis = self.axis or 0

# may need to convert a float key
if isinstance(obj, slice):
    return self._convert_slice_indexer(obj, axis)

elif is_float(obj):
    return self._convert_scalar_indexer(obj, axis)

try:
    self._validate_key(obj, axis)
    return obj
except ValueError:
    raise ValueError("Can only index by location with "
                     "a [{types}]".format(types=self._valid_types))
<SYSTEM_TASK:> create and return the block manager from a dataframe of series, <END_TASK> <USER_TASK:> Description: def to_manager(sdf, columns, index): """ create and return the block manager from a dataframe of series, columns, index """
# from BlockManager perspective axes = [ensure_index(columns), ensure_index(index)] return create_block_manager_from_arrays( [sdf[c] for c in columns], columns, axes)
<SYSTEM_TASK:> Only makes sense when fill_value is NaN <END_TASK> <USER_TASK:> Description: def stack_sparse_frame(frame): """ Only makes sense when fill_value is NaN """
lengths = [s.sp_index.npoints for _, s in frame.items()] nobs = sum(lengths) # this is pretty fast minor_codes = np.repeat(np.arange(len(frame.columns)), lengths) inds_to_concat = [] vals_to_concat = [] # TODO: Figure out whether this can be reached. # I think this currently can't be reached because you can't build a # SparseDataFrame with a non-np.NaN fill value (fails earlier). for _, series in frame.items(): if not np.isnan(series.fill_value): raise TypeError('This routine assumes NaN fill value') int_index = series.sp_index.to_int_index() inds_to_concat.append(int_index.indices) vals_to_concat.append(series.sp_values) major_codes = np.concatenate(inds_to_concat) stacked_values = np.concatenate(vals_to_concat) index = MultiIndex(levels=[frame.index, frame.columns], codes=[major_codes, minor_codes], verify_integrity=False) lp = DataFrame(stacked_values.reshape((nobs, 1)), index=index, columns=['foo']) return lp.sort_index(level=0)
<SYSTEM_TASK:> Init self from ndarray or list of lists. <END_TASK> <USER_TASK:> Description: def _init_matrix(self, data, index, columns, dtype=None): """ Init self from ndarray or list of lists. """
data = prep_ndarray(data, copy=False) index, columns = self._prep_index(data, index, columns) data = {idx: data[:, i] for i, idx in enumerate(columns)} return self._init_dict(data, index, columns, dtype)
<SYSTEM_TASK:> Init self from scipy.sparse matrix. <END_TASK> <USER_TASK:> Description: def _init_spmatrix(self, data, index, columns, dtype=None, fill_value=None): """ Init self from scipy.sparse matrix. """
index, columns = self._prep_index(data, index, columns) data = data.tocoo() N = len(index) # Construct a dict of SparseSeries sdict = {} values = Series(data.data, index=data.row, copy=False) for col, rowvals in values.groupby(data.col): # get_blocks expects int32 row indices in sorted order rowvals = rowvals.sort_index() rows = rowvals.index.values.astype(np.int32) blocs, blens = get_blocks(rows) sdict[columns[col]] = SparseSeries( rowvals.values, index=index, fill_value=fill_value, sparse_index=BlockIndex(N, blocs, blens)) # Add any columns that were empty and thus not grouped on above sdict.update({column: SparseSeries(index=index, fill_value=fill_value, sparse_index=BlockIndex(N, [], [])) for column in columns if column not in sdict}) return self._init_dict(sdict, index, columns, dtype)
<SYSTEM_TASK:> Return the contents of the frame as a sparse SciPy COO matrix. <END_TASK> <USER_TASK:> Description: def to_coo(self): """ Return the contents of the frame as a sparse SciPy COO matrix.

        .. versionadded:: 0.20.0

        Returns
        -------
        coo_matrix : scipy.sparse.spmatrix
            If the caller is heterogeneous and contains booleans or objects,
            the result will be of dtype=object. See Notes.

        Notes
        -----
        The dtype will be the lowest-common-denominator type (implicit
        upcasting); that is to say if the dtypes (even of numeric types)
        are mixed, the one that accommodates all will be chosen.

        e.g. If the dtypes are float16 and float32, dtype will be upcast to
        float32. By numpy.find_common_type convention, mixing int64 and
        uint64 will result in a float64 dtype.
        """
try: from scipy.sparse import coo_matrix except ImportError: raise ImportError('Scipy is not installed') dtype = find_common_type(self.dtypes) if isinstance(dtype, SparseDtype): dtype = dtype.subtype cols, rows, datas = [], [], [] for col, name in enumerate(self): s = self[name] row = s.sp_index.to_int_index().indices cols.append(np.repeat(col, len(row))) rows.append(row) datas.append(s.sp_values.astype(dtype, copy=False)) cols = np.concatenate(cols) rows = np.concatenate(rows) datas = np.concatenate(datas) return coo_matrix((datas, (rows, cols)), shape=self.shape)
<SYSTEM_TASK:> Creates a new SparseArray from the input value. <END_TASK> <USER_TASK:> Description: def _sanitize_column(self, key, value, **kwargs): """ Creates a new SparseArray from the input value. Parameters ---------- key : object value : scalar, Series, or array-like kwargs : dict Returns ------- sanitized_column : SparseArray """
def sp_maker(x, index=None): return SparseArray(x, index=index, fill_value=self._default_fill_value, kind=self._default_kind) if isinstance(value, SparseSeries): clean = value.reindex(self.index).as_sparse_array( fill_value=self._default_fill_value, kind=self._default_kind) elif isinstance(value, SparseArray): if len(value) != len(self.index): raise ValueError('Length of values does not match ' 'length of index') clean = value elif hasattr(value, '__iter__'): if isinstance(value, Series): clean = value.reindex(self.index) if not isinstance(value, SparseSeries): clean = sp_maker(clean) else: if len(value) != len(self.index): raise ValueError('Length of values does not match ' 'length of index') clean = sp_maker(value) # Scalar else: clean = sp_maker(value, self.index) # always return a SparseArray! return clean
<SYSTEM_TASK:> Return SparseDataFrame of cumulative sums over requested axis. <END_TASK> <USER_TASK:> Description: def cumsum(self, axis=0, *args, **kwargs): """ Return SparseDataFrame of cumulative sums over requested axis. Parameters ---------- axis : {0, 1} 0 for row-wise, 1 for column-wise Returns ------- y : SparseDataFrame """
nv.validate_cumsum(args, kwargs) if axis is None: axis = self._stat_axis_number return self.apply(lambda x: x.cumsum(), axis=axis)
<SYSTEM_TASK:> Analogous to DataFrame.apply, for SparseDataFrame <END_TASK> <USER_TASK:> Description: def apply(self, func, axis=0, broadcast=None, reduce=None, result_type=None): """ Analogous to DataFrame.apply, for SparseDataFrame Parameters ---------- func : function Function to apply to each column axis : {0, 1, 'index', 'columns'} broadcast : bool, default False For aggregation functions, return object of same size with values propagated .. deprecated:: 0.23.0 This argument will be removed in a future version, replaced by result_type='broadcast'. reduce : boolean or None, default None Try to apply reduction procedures. If the DataFrame is empty, apply will use reduce to determine whether the result should be a Series or a DataFrame. If reduce is None (the default), apply's return value will be guessed by calling func an empty Series (note: while guessing, exceptions raised by func will be ignored). If reduce is True a Series will always be returned, and if False a DataFrame will always be returned. .. deprecated:: 0.23.0 This argument will be removed in a future version, replaced by result_type='reduce'. result_type : {'expand', 'reduce', 'broadcast, None} These only act when axis=1 {columns}: * 'expand' : list-like results will be turned into columns. * 'reduce' : return a Series if possible rather than expanding list-like results. This is the opposite to 'expand'. * 'broadcast' : results will be broadcast to the original shape of the frame, the original index & columns will be retained. The default behaviour (None) depends on the return value of the applied function: list-like results will be returned as a Series of those. However if the apply function returns a Series these are expanded to columns. .. versionadded:: 0.23.0 Returns ------- applied : Series or SparseDataFrame """
if not len(self.columns): return self axis = self._get_axis_number(axis) if isinstance(func, np.ufunc): new_series = {} for k, v in self.items(): applied = func(v) applied.fill_value = func(v.fill_value) new_series[k] = applied return self._constructor( new_series, index=self.index, columns=self.columns, default_fill_value=self._default_fill_value, default_kind=self._default_kind).__finalize__(self) from pandas.core.apply import frame_apply op = frame_apply(self, func=func, axis=axis, reduce=reduce, broadcast=broadcast, result_type=result_type) return op.get_result()
<SYSTEM_TASK:> Convert a conda package to its pip equivalent. <END_TASK> <USER_TASK:> Description: def conda_package_to_pip(package): """ Convert a conda package to its pip equivalent. In most cases they are the same, those are the exceptions: - Packages that should be excluded (in `EXCLUDE`) - Packages that should be renamed (in `RENAME`) - A package requiring a specific version, in conda is defined with a single equal (e.g. ``pandas=1.0``) and in pip with two (e.g. ``pandas==1.0``) """
if package in EXCLUDE: return package = re.sub('(?<=[^<>])=', '==', package).strip() for compare in ('<=', '>=', '=='): if compare not in package: continue pkg, version = package.split(compare) if pkg in RENAME: return ''.join((RENAME[pkg], compare, version)) break return package
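The pin rewrite at the core of the function, shown in isolation: conda's single '=' becomes pip's '==', while the lookbehind leaves '>=' and '<=' untouched.

import re

print(re.sub('(?<=[^<>])=', '==', 'pandas=0.25').strip())   # pandas==0.25
print(re.sub('(?<=[^<>])=', '==', 'numpy>=1.15').strip())   # numpy>=1.15 (unchanged)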
<SYSTEM_TASK:> try to do platform conversion, allow ndarray or list here <END_TASK> <USER_TASK:> Description: def maybe_convert_platform(values): """ try to do platform conversion, allow ndarray or list here """
if isinstance(values, (list, tuple)): values = construct_1d_object_array_from_listlike(list(values)) if getattr(values, 'dtype', None) == np.object_: if hasattr(values, '_values'): values = values._values values = lib.maybe_convert_objects(values) return values
<SYSTEM_TASK:> A safe version of putmask that potentially upcasts the result. <END_TASK> <USER_TASK:> Description: def maybe_upcast_putmask(result, mask, other): """ A safe version of putmask that potentially upcasts the result. The result is replaced with the first N elements of other, where N is the number of True values in mask. If the length of other is shorter than N, other will be repeated. Parameters ---------- result : ndarray The destination array. This will be mutated in-place if no upcasting is necessary. mask : boolean ndarray other : ndarray or scalar The source array or value Returns ------- result : ndarray changed : boolean Set to true if the result array was upcasted Examples -------- >>> result, _ = maybe_upcast_putmask(np.arange(1,6), np.array([False, True, False, True, True]), np.arange(21,23)) >>> result array([1, 21, 3, 22, 21]) """
if not isinstance(result, np.ndarray): raise ValueError("The result input must be a ndarray.") if mask.any(): # Two conversions for date-like dtypes that can't be done automatically # in np.place: # NaN -> NaT # integer or integer array -> date-like array if is_datetimelike(result.dtype): if is_scalar(other): if isna(other): other = result.dtype.type('nat') elif is_integer(other): other = np.array(other, dtype=result.dtype) elif is_integer_dtype(other): other = np.array(other, dtype=result.dtype) def changeit(): # try to directly set by expanding our array to full # length of the boolean try: om = other[mask] om_at = om.astype(result.dtype) if (om == om_at).all(): new_result = result.values.copy() new_result[mask] = om_at result[:] = new_result return result, False except Exception: pass # we are forced to change the dtype of the result as the input # isn't compatible r, _ = maybe_upcast(result, fill_value=other, copy=True) np.place(r, mask, other) return r, True # we want to decide whether place will work # if we have nans in the False portion of our mask then we need to # upcast (possibly), otherwise we DON't want to upcast (e.g. if we # have values, say integers, in the success portion then it's ok to not # upcast) new_dtype, _ = maybe_promote(result.dtype, other) if new_dtype != result.dtype: # we have a scalar or len 0 ndarray # and its nan and we are changing some values if (is_scalar(other) or (isinstance(other, np.ndarray) and other.ndim < 1)): if isna(other): return changeit() # we have an ndarray and the masking has nans in it else: if isna(other).any(): return changeit() try: np.place(result, mask, other) except Exception: return changeit() return result, False
<SYSTEM_TASK:> interpret the dtype from a scalar or array. This is a convenience <END_TASK> <USER_TASK:> Description: def infer_dtype_from(val, pandas_dtype=False): """ interpret the dtype from a scalar or array. This is a convenience
    routine to infer the dtype from a scalar or an array.

    Parameters
    ----------
    pandas_dtype : bool, default False
        whether to infer dtype including pandas extension types.
        If False, a scalar/array belonging to pandas extension types
        is inferred as object
    """
if is_scalar(val): return infer_dtype_from_scalar(val, pandas_dtype=pandas_dtype) return infer_dtype_from_array(val, pandas_dtype=pandas_dtype)
<SYSTEM_TASK:> interpret the dtype from a scalar <END_TASK> <USER_TASK:> Description: def infer_dtype_from_scalar(val, pandas_dtype=False): """ interpret the dtype from a scalar

    Parameters
    ----------
    pandas_dtype : bool, default False
        whether to infer dtype including pandas extension types.
        If False, a scalar belonging to pandas extension types
        is inferred as object
    """
dtype = np.object_ # a 1-element ndarray if isinstance(val, np.ndarray): msg = "invalid ndarray passed to infer_dtype_from_scalar" if val.ndim != 0: raise ValueError(msg) dtype = val.dtype val = val.item() elif isinstance(val, str): # If we create an empty array using a string to infer # the dtype, NumPy will only allocate one character per entry # so this is kind of bad. Alternately we could use np.repeat # instead of np.empty (but then you still don't want things # coming out as np.str_! dtype = np.object_ elif isinstance(val, (np.datetime64, datetime)): val = tslibs.Timestamp(val) if val is tslibs.NaT or val.tz is None: dtype = np.dtype('M8[ns]') else: if pandas_dtype: dtype = DatetimeTZDtype(unit='ns', tz=val.tz) else: # return datetimetz as object return np.object_, val val = val.value elif isinstance(val, (np.timedelta64, timedelta)): val = tslibs.Timedelta(val).value dtype = np.dtype('m8[ns]') elif is_bool(val): dtype = np.bool_ elif is_integer(val): if isinstance(val, np.integer): dtype = type(val) else: dtype = np.int64 elif is_float(val): if isinstance(val, np.floating): dtype = type(val) else: dtype = np.float64 elif is_complex(val): dtype = np.complex_ elif pandas_dtype: if lib.is_period(val): dtype = PeriodDtype(freq=val.freq) val = val.ordinal return dtype, val
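A hedged sketch of the inference rules above; it assumes the internal module path pandas.core.dtypes.cast is importable in this pandas version and is not part of the public API.

import numpy as np
from pandas.core.dtypes.cast import infer_dtype_from_scalar

print(infer_dtype_from_scalar(3))      # (<class 'numpy.int64'>, 3)
print(infer_dtype_from_scalar(3.5))    # (<class 'numpy.float64'>, 3.5)
print(infer_dtype_from_scalar('abc'))  # (<class 'numpy.object_'>, 'abc')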
<SYSTEM_TASK:> infer the dtype from a scalar or array <END_TASK> <USER_TASK:> Description: def infer_dtype_from_array(arr, pandas_dtype=False): """ infer the dtype from a scalar or array

    Parameters
    ----------
    arr : scalar or array
    pandas_dtype : bool, default False
        whether to infer dtype including pandas extension types.
        If False, an array belonging to pandas extension types
        is inferred as object

    Returns
    -------
    tuple (numpy-compat/pandas-compat dtype, array)

    Notes
    -----
    If pandas_dtype=False, these infer to numpy dtypes
    exactly with the exception that mixed / object dtypes
    are not coerced by stringifying or conversion

    If pandas_dtype=True, datetime64tz-aware/categorical
    types will retain their character.

    Examples
    --------
    >>> np.asarray([1, '1'])
    array(['1', '1'], dtype='<U21')

    >>> infer_dtype_from_array([1, '1'])
    (numpy.object_, [1, '1'])
    """
if isinstance(arr, np.ndarray): return arr.dtype, arr if not is_list_like(arr): arr = [arr] if pandas_dtype and is_extension_type(arr): return arr.dtype, arr elif isinstance(arr, ABCSeries): return arr.dtype, np.asarray(arr) # don't force numpy coerce with nan's inferred = lib.infer_dtype(arr, skipna=False) if inferred in ['string', 'bytes', 'unicode', 'mixed', 'mixed-integer']: return (np.object_, arr) arr = np.asarray(arr) return arr.dtype, arr
<SYSTEM_TASK:> Try to infer an object's dtype, for use in arithmetic ops <END_TASK> <USER_TASK:> Description: def maybe_infer_dtype_type(element): """Try to infer an object's dtype, for use in arithmetic ops Uses `element.dtype` if that's available. Objects implementing the iterator protocol are cast to a NumPy array, and from there the array's type is used. Parameters ---------- element : object Possibly has a `.dtype` attribute, and possibly the iterator protocol. Returns ------- tipo : type Examples -------- >>> from collections import namedtuple >>> Foo = namedtuple("Foo", "dtype") >>> maybe_infer_dtype_type(Foo(np.dtype("i8"))) numpy.int64 """
tipo = None if hasattr(element, 'dtype'): tipo = element.dtype elif is_list_like(element): element = np.asarray(element) tipo = element.dtype return tipo
<SYSTEM_TASK:> provide explicit type promotion and coercion <END_TASK> <USER_TASK:> Description: def maybe_upcast(values, fill_value=np.nan, dtype=None, copy=False): """ provide explicit type promotion and coercion Parameters ---------- values : the ndarray that we want to maybe upcast fill_value : what we want to fill with dtype : if None, then use the dtype of the values, else coerce to this type copy : if True always make a copy even if no upcast is required """
if is_extension_type(values): if copy: values = values.copy() else: if dtype is None: dtype = values.dtype new_dtype, fill_value = maybe_promote(dtype, fill_value) if new_dtype != values.dtype: values = values.astype(new_dtype) elif copy: values = values.copy() return values, fill_value
<SYSTEM_TASK:> coerce the indexer input array to the smallest dtype possible <END_TASK> <USER_TASK:> Description: def coerce_indexer_dtype(indexer, categories): """ coerce the indexer input array to the smallest dtype possible """
length = len(categories) if length < _int8_max: return ensure_int8(indexer) elif length < _int16_max: return ensure_int16(indexer) elif length < _int32_max: return ensure_int32(indexer) return ensure_int64(indexer)
<SYSTEM_TASK:> given a set of dtypes and a result set, coerce the result elements to the <END_TASK> <USER_TASK:> Description: def coerce_to_dtypes(result, dtypes): """ given a set of dtypes and a result set, coerce the result elements
    to the dtypes
    """
if len(result) != len(dtypes): raise AssertionError("_coerce_to_dtypes requires equal len arrays") def conv(r, dtype): try: if isna(r): pass elif dtype == _NS_DTYPE: r = tslibs.Timestamp(r) elif dtype == _TD_DTYPE: r = tslibs.Timedelta(r) elif dtype == np.bool_: # messy. non 0/1 integers do not get converted. if is_integer(r) and r not in [0, 1]: return int(r) r = bool(r) elif dtype.kind == 'f': r = float(r) elif dtype.kind == 'i': r = int(r) except Exception: pass return r return [conv(r, dtype) for r, dtype in zip(result, dtypes)]
<SYSTEM_TASK:> Find a common data type among the given dtypes. <END_TASK> <USER_TASK:> Description: def find_common_type(types): """ Find a common data type among the given dtypes. Parameters ---------- types : list of dtypes Returns ------- pandas extension or numpy dtype See Also -------- numpy.find_common_type """
if len(types) == 0: raise ValueError('no types given') first = types[0] # workaround for find_common_type([np.dtype('datetime64[ns]')] * 2) # => object if all(is_dtype_equal(first, t) for t in types[1:]): return first if any(isinstance(t, (PandasExtensionDtype, ExtensionDtype)) for t in types): return np.object # take lowest unit if all(is_datetime64_dtype(t) for t in types): return np.dtype('datetime64[ns]') if all(is_timedelta64_dtype(t) for t in types): return np.dtype('timedelta64[ns]') # don't mix bool / int or float or complex # this is different from numpy, which casts bool with float/int as int has_bools = any(is_bool_dtype(t) for t in types) if has_bools: for t in types: if is_integer_dtype(t) or is_float_dtype(t) or is_complex_dtype(t): return np.object return np.find_common_type(types, [])
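A few concrete cases matching the branches above; this assumes the internal helper is importable from pandas.core.dtypes.cast, which is not a public path.

import numpy as np
from pandas.core.dtypes.cast import find_common_type

print(find_common_type([np.dtype('int64'), np.dtype('float64')]))  # float64
print(find_common_type([np.dtype('datetime64[ns]')] * 2))          # datetime64[ns]
print(find_common_type([np.dtype('bool'), np.dtype('int64')]))     # object, bools never mix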
<SYSTEM_TASK:> create np.ndarray of specified shape and dtype, filled with values <END_TASK> <USER_TASK:> Description: def cast_scalar_to_array(shape, value, dtype=None): """ create np.ndarray of specified shape and dtype, filled with values Parameters ---------- shape : tuple value : scalar value dtype : np.dtype, optional dtype to coerce Returns ------- ndarray of shape, filled with value, of specified / inferred dtype """
if dtype is None: dtype, fill_value = infer_dtype_from_scalar(value) else: fill_value = value values = np.empty(shape, dtype=dtype) values.fill(fill_value) return values
<SYSTEM_TASK:> Transform any list-like object into a 1-dimensional numpy array of object <END_TASK> <USER_TASK:> Description: def construct_1d_object_array_from_listlike(values): """ Transform any list-like object into a 1-dimensional numpy array of object
    dtype.

    Parameters
    ----------
    values : any iterable which has a len()

    Raises
    ------
    TypeError
        * If `values` does not have a len()

    Returns
    -------
    1-dimensional numpy array of dtype object
    """
# numpy will try to interpret nested lists as further dimensions, hence
# making a 1D array that contains list-likes is a bit tricky:
result = np.empty(len(values), dtype='object')
result[:] = values
return result
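The comment above is the whole point of this helper: np.array() turns equal-length nested sequences into extra dimensions, while empty-then-assign keeps one Python object per slot. A small plain-numpy illustration; ragged contents are used so that element-wise assignment is unambiguous.

import numpy as np

# np.array() sees equal-length nested lists as a second dimension
print(np.array([[1, 2], [3, 4]]).shape)   # (2, 2)

# the empty-then-assign pattern keeps each list-like as a single element
data = [(1, 2), [3, 4, 5], "ab"]
result = np.empty(len(data), dtype=object)
result[:] = data
print(result.shape)   # (3,)
print(result[1])      # [3, 4, 5]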
<SYSTEM_TASK:>
Construct a new ndarray, coercing `values` to `dtype`, preserving NA.
<END_TASK>
<USER_TASK:>
Description:
def construct_1d_ndarray_preserving_na(values, dtype=None, copy=False):
    """
    Construct a new ndarray, coercing `values` to `dtype`, preserving NA.

    Parameters
    ----------
    values : Sequence
    dtype : numpy.dtype, optional
    copy : bool, default False
        Note that copies may still be made with ``copy=False`` if casting
        is required.

    Returns
    -------
    arr : ndarray[dtype]

    Examples
    --------
    >>> np.array([1.0, 2.0, None], dtype='str')
    array(['1.0', '2.0', 'None'], dtype='<U4')

    >>> construct_1d_ndarray_preserving_na([1.0, 2.0, None], dtype='str')
    array(['1.0', '2.0', None], dtype=object)
    """
subarr = np.array(values, dtype=dtype, copy=copy)

if dtype is not None and dtype.kind in ("U", "S"):
    # GH-21083
    # We can't just return np.array(subarr, dtype='str') since
    # NumPy will convert the non-string objects into strings,
    # including NA values. So we have to go
    # string -> object -> update NA, which requires an
    # additional pass over the data.
    na_values = isna(values)
    subarr2 = subarr.astype(object)
    subarr2[na_values] = np.asarray(values, dtype=object)[na_values]
    subarr = subarr2

return subarr
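A sketch of the NA-preserving string cast with plain numpy; a simple `is None` check stands in for pandas' isna here, which is an assumption made only to keep the example self-contained.

import numpy as np

values = [1.0, 2.0, None]
subarr = np.array(values, dtype='str')          # None becomes the string 'None'
na_mask = np.array([v is None for v in values])

fixed = subarr.astype(object)                   # second pass: put NA back
fixed[na_mask] = np.asarray(values, dtype=object)[na_mask]
print(fixed)   # ['1.0' '2.0' None]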
<SYSTEM_TASK:>
Make a scatter plot from two DataFrame columns
<END_TASK>
<USER_TASK:>
Description:
def scatter_plot(data, x, y, by=None, ax=None, figsize=None, grid=False,
                 **kwargs):
    """
    Make a scatter plot from two DataFrame columns

    Parameters
    ----------
    data : DataFrame
    x : Column name for the x-axis values
    y : Column name for the y-axis values
    by : Column in the DataFrame to group by, optional
    ax : Matplotlib axis object
    figsize : A tuple (width, height) in inches
    grid : Setting this to True will show the grid
    kwargs : other plotting keyword arguments
        To be passed to scatter function

    Returns
    -------
    matplotlib.Figure
    """
import matplotlib.pyplot as plt

kwargs.setdefault('edgecolors', 'none')

def plot_group(group, ax):
    xvals = group[x].values
    yvals = group[y].values
    ax.scatter(xvals, yvals, **kwargs)
    ax.grid(grid)

if by is not None:
    fig = _grouped_plot(plot_group, data, by=by, figsize=figsize, ax=ax)
else:
    if ax is None:
        fig = plt.figure()
        ax = fig.add_subplot(111)
    else:
        fig = ax.get_figure()
    plot_group(data, ax)
    ax.set_ylabel(pprint_thing(y))
    ax.set_xlabel(pprint_thing(x))
    ax.grid(grid)

return fig
<SYSTEM_TASK:> Make a histogram of the DataFrame's. <END_TASK> <USER_TASK:> Description: def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None, ax=None, sharex=False, sharey=False, figsize=None, layout=None, bins=10, **kwds): """ Make a histogram of the DataFrame's. A `histogram`_ is a representation of the distribution of data. This function calls :meth:`matplotlib.pyplot.hist`, on each series in the DataFrame, resulting in one histogram per column. .. _histogram: https://en.wikipedia.org/wiki/Histogram Parameters ---------- data : DataFrame The pandas object holding the data. column : string or sequence If passed, will be used to limit data to a subset of columns. by : object, optional If passed, then used to form histograms for separate groups. grid : bool, default True Whether to show axis grid lines. xlabelsize : int, default None If specified changes the x-axis label size. xrot : float, default None Rotation of x axis labels. For example, a value of 90 displays the x labels rotated 90 degrees clockwise. ylabelsize : int, default None If specified changes the y-axis label size. yrot : float, default None Rotation of y axis labels. For example, a value of 90 displays the y labels rotated 90 degrees clockwise. ax : Matplotlib axes object, default None The axes to plot the histogram on. sharex : bool, default True if ax is None else False In case subplots=True, share x axis and set some x axis labels to invisible; defaults to True if ax is None otherwise False if an ax is passed in. Note that passing in both an ax and sharex=True will alter all x axis labels for all subplots in a figure. sharey : bool, default False In case subplots=True, share y axis and set some y axis labels to invisible. figsize : tuple The size in inches of the figure to create. Uses the value in `matplotlib.rcParams` by default. layout : tuple, optional Tuple of (rows, columns) for the layout of the histograms. bins : integer or sequence, default 10 Number of histogram bins to be used. If an integer is given, bins + 1 bin edges are calculated and returned. If bins is a sequence, gives bin edges, including left edge of first bin and right edge of last bin. In this case, bins is returned unmodified. **kwds All other plotting keyword arguments to be passed to :meth:`matplotlib.pyplot.hist`. Returns ------- matplotlib.AxesSubplot or numpy.ndarray of them See Also -------- matplotlib.pyplot.hist : Plot a histogram using matplotlib. Examples -------- .. plot:: :context: close-figs This example draws a histogram based on the length and width of some animals, displayed in three bins >>> df = pd.DataFrame({ ... 'length': [1.5, 0.5, 1.2, 0.9, 3], ... 'width': [0.7, 0.2, 0.15, 0.2, 1.1] ... }, index= ['pig', 'rabbit', 'duck', 'chicken', 'horse']) >>> hist = df.hist(bins=3) """
_raise_if_no_mpl()
_converter._WARN = False
if by is not None:
    axes = grouped_hist(data, column=column, by=by, ax=ax, grid=grid,
                        figsize=figsize, sharex=sharex, sharey=sharey,
                        layout=layout, bins=bins, xlabelsize=xlabelsize,
                        xrot=xrot, ylabelsize=ylabelsize, yrot=yrot,
                        **kwds)
    return axes

if column is not None:
    if not isinstance(column, (list, np.ndarray, ABCIndexClass)):
        column = [column]
    data = data[column]
data = data._get_numeric_data()
naxes = len(data.columns)

fig, axes = _subplots(naxes=naxes, ax=ax, squeeze=False,
                      sharex=sharex, sharey=sharey, figsize=figsize,
                      layout=layout)
_axes = _flatten(axes)

for i, col in enumerate(com.try_sort(data.columns)):
    ax = _axes[i]
    ax.hist(data[col].dropna().values, bins=bins, **kwds)
    ax.set_title(col)
    ax.grid(grid)

_set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot,
                 ylabelsize=ylabelsize, yrot=yrot)
fig.subplots_adjust(wspace=0.3, hspace=0.3)

return axes
<SYSTEM_TASK:>
Draw histogram of the input series using matplotlib.
<END_TASK>
<USER_TASK:>
Description:
def hist_series(self, by=None, ax=None, grid=True, xlabelsize=None,
                xrot=None, ylabelsize=None, yrot=None, figsize=None,
                bins=10, **kwds):
    """
    Draw histogram of the input series using matplotlib.

    Parameters
    ----------
    by : object, optional
        If passed, then used to form histograms for separate groups
    ax : matplotlib axis object
        If not passed, uses gca()
    grid : bool, default True
        Whether to show axis grid lines
    xlabelsize : int, default None
        If specified changes the x-axis label size
    xrot : float, default None
        rotation of x axis labels
    ylabelsize : int, default None
        If specified changes the y-axis label size
    yrot : float, default None
        rotation of y axis labels
    figsize : tuple, default None
        figure size in inches by default
    bins : integer or sequence, default 10
        Number of histogram bins to be used. If an integer is given, bins + 1
        bin edges are calculated and returned. If bins is a sequence, gives
        bin edges, including left edge of first bin and right edge of last
        bin. In this case, bins is returned unmodified.
    `**kwds` : keywords
        To be passed to the actual plotting function

    See Also
    --------
    matplotlib.axes.Axes.hist : Plot a histogram using matplotlib.
    """
import matplotlib.pyplot as plt

if by is None:
    if kwds.get('layout', None) is not None:
        raise ValueError("The 'layout' keyword is not supported when "
                         "'by' is None")
    # hack until the plotting interface is a bit more unified
    fig = kwds.pop('figure', plt.gcf() if plt.get_fignums() else
                   plt.figure(figsize=figsize))
    if (figsize is not None and tuple(figsize) !=
            tuple(fig.get_size_inches())):
        fig.set_size_inches(*figsize, forward=True)
    if ax is None:
        ax = fig.gca()
    elif ax.get_figure() != fig:
        raise AssertionError('passed axis not bound to passed figure')
    values = self.dropna().values

    ax.hist(values, bins=bins, **kwds)
    ax.grid(grid)
    axes = np.array([ax])

    _set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot,
                     ylabelsize=ylabelsize, yrot=yrot)

else:
    if 'figure' in kwds:
        raise ValueError("Cannot pass 'figure' when using the "
                         "'by' argument, since a new 'Figure' instance "
                         "will be created")
    axes = grouped_hist(self, by=by, ax=ax, grid=grid, figsize=figsize,
                        bins=bins, xlabelsize=xlabelsize, xrot=xrot,
                        ylabelsize=ylabelsize, yrot=yrot, **kwds)

if hasattr(axes, 'ndim'):
    if axes.ndim == 1 and len(axes) == 1:
        return axes[0]
return axes
<SYSTEM_TASK:> Make box plots from DataFrameGroupBy data. <END_TASK> <USER_TASK:> Description: def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None, rot=0, grid=True, ax=None, figsize=None, layout=None, sharex=False, sharey=True, **kwds): """ Make box plots from DataFrameGroupBy data. Parameters ---------- grouped : Grouped DataFrame subplots : bool * ``False`` - no subplots will be used * ``True`` - create a subplot for each group column : column name or list of names, or vector Can be any valid input to groupby fontsize : int or string rot : label rotation angle grid : Setting this to True will show the grid ax : Matplotlib axis object, default None figsize : A tuple (width, height) in inches layout : tuple (optional) (rows, columns) for the layout of the plot sharex : bool, default False Whether x-axes will be shared among subplots .. versionadded:: 0.23.1 sharey : bool, default True Whether y-axes will be shared among subplots .. versionadded:: 0.23.1 `**kwds` : Keyword Arguments All other plotting keyword arguments to be passed to matplotlib's boxplot function Returns ------- dict of key/value = group key/DataFrame.boxplot return value or DataFrame.boxplot return value in case subplots=figures=False Examples -------- >>> import itertools >>> tuples = [t for t in itertools.product(range(1000), range(4))] >>> index = pd.MultiIndex.from_tuples(tuples, names=['lvl0', 'lvl1']) >>> data = np.random.randn(len(index),4) >>> df = pd.DataFrame(data, columns=list('ABCD'), index=index) >>> >>> grouped = df.groupby(level='lvl1') >>> boxplot_frame_groupby(grouped) >>> >>> grouped = df.unstack(level='lvl1').groupby(level=0, axis=1) >>> boxplot_frame_groupby(grouped, subplots=False) """
_raise_if_no_mpl()
_converter._WARN = False
if subplots is True:
    naxes = len(grouped)
    fig, axes = _subplots(naxes=naxes, squeeze=False,
                          ax=ax, sharex=sharex, sharey=sharey,
                          figsize=figsize, layout=layout)
    axes = _flatten(axes)

    from pandas.core.series import Series
    ret = Series()
    for (key, group), ax in zip(grouped, axes):
        d = group.boxplot(ax=ax, column=column, fontsize=fontsize,
                          rot=rot, grid=grid, **kwds)
        ax.set_title(pprint_thing(key))
        ret.loc[key] = d
    fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9,
                        wspace=0.2)
else:
    from pandas.core.reshape.concat import concat
    keys, frames = zip(*grouped)
    if grouped.axis == 0:
        df = concat(frames, keys=keys, axis=1)
    else:
        if len(frames) > 1:
            df = frames[0].join(frames[1::])
        else:
            df = frames[0]

    ret = df.boxplot(column=column, fontsize=fontsize, rot=rot,
                     grid=grid, ax=ax, figsize=figsize,
                     layout=layout, **kwds)
return ret
<SYSTEM_TASK:>
check whether ax has data
<END_TASK>
<USER_TASK:>
Description:
def _has_plotted_object(self, ax):
    """check whether ax has data"""
return (len(ax.lines) != 0 or len(ax.artists) != 0 or len(ax.containers) != 0)
<SYSTEM_TASK:>
Common post process for each axes
<END_TASK>
<USER_TASK:>
Description:
def _post_plot_logic_common(self, ax, data):
    """Common post process for each axes"""
def get_label(i):
    try:
        return pprint_thing(data.index[i])
    except Exception:
        return ''

if self.orientation == 'vertical' or self.orientation is None:
    if self._need_to_set_index:
        xticklabels = [get_label(x) for x in ax.get_xticks()]
        ax.set_xticklabels(xticklabels)
    self._apply_axis_properties(ax.xaxis, rot=self.rot,
                                fontsize=self.fontsize)
    self._apply_axis_properties(ax.yaxis, fontsize=self.fontsize)

    if hasattr(ax, 'right_ax'):
        self._apply_axis_properties(ax.right_ax.yaxis,
                                    fontsize=self.fontsize)

elif self.orientation == 'horizontal':
    if self._need_to_set_index:
        yticklabels = [get_label(y) for y in ax.get_yticks()]
        ax.set_yticklabels(yticklabels)
    self._apply_axis_properties(ax.yaxis, rot=self.rot,
                                fontsize=self.fontsize)
    self._apply_axis_properties(ax.xaxis, fontsize=self.fontsize)

    if hasattr(ax, 'right_ax'):
        self._apply_axis_properties(ax.right_ax.yaxis,
                                    fontsize=self.fontsize)
else:  # pragma no cover
    raise ValueError
<SYSTEM_TASK:>
Common post process unrelated to data
<END_TASK>
<USER_TASK:>
Description:
def _adorn_subplots(self):
    """Common post process unrelated to data"""
if len(self.axes) > 0:
    all_axes = self._get_subplots()
    nrows, ncols = self._get_axes_layout()
    _handle_shared_axes(axarr=all_axes, nplots=len(all_axes),
                        naxes=nrows * ncols, nrows=nrows,
                        ncols=ncols, sharex=self.sharex,
                        sharey=self.sharey)

for ax in self.axes:
    if self.yticks is not None:
        ax.set_yticks(self.yticks)

    if self.xticks is not None:
        ax.set_xticks(self.xticks)

    if self.ylim is not None:
        ax.set_ylim(self.ylim)

    if self.xlim is not None:
        ax.set_xlim(self.xlim)

    ax.grid(self.grid)

if self.title:
    if self.subplots:
        if is_list_like(self.title):
            if len(self.title) != self.nseries:
                msg = ('The length of `title` must equal the number '
                       'of columns if using `title` of type `list` '
                       'and `subplots=True`.\n'
                       'length of title = {}\n'
                       'number of columns = {}').format(
                    len(self.title), self.nseries)
                raise ValueError(msg)

            for (ax, title) in zip(self.axes, self.title):
                ax.set_title(title)
        else:
            self.fig.suptitle(self.title)
    else:
        if is_list_like(self.title):
            msg = ('Using `title` of type `list` is not supported '
                   'unless `subplots=True` is passed')
            raise ValueError(msg)
        self.axes[0].set_title(self.title)
<SYSTEM_TASK:>
Manage style and color based on column number and its label.
<END_TASK>
<USER_TASK:>
Description:
def _apply_style_colors(self, colors, kwds, col_num, label):
    """
    Manage style and color based on column number and its label.
    Returns tuple of appropriate style and kwds to which "color" may be added.
    """
style = None
if self.style is not None:
    if isinstance(self.style, list):
        try:
            style = self.style[col_num]
        except IndexError:
            pass
    elif isinstance(self.style, dict):
        style = self.style.get(label, style)
    else:
        style = self.style

has_color = 'color' in kwds or self.colormap is not None
nocolor_style = style is None or re.match('[a-z]+', style) is None
if (has_color or self.subplots) and nocolor_style:
    kwds['color'] = colors[col_num % len(colors)]
return style, kwds
<SYSTEM_TASK:> Plot DataFrame columns as lines. <END_TASK> <USER_TASK:> Description: def line(self, x=None, y=None, **kwds): """ Plot DataFrame columns as lines. This function is useful to plot lines using DataFrame's values as coordinates. Parameters ---------- x : int or str, optional Columns to use for the horizontal axis. Either the location or the label of the columns to be used. By default, it will use the DataFrame indices. y : int, str, or list of them, optional The values to be plotted. Either the location or the label of the columns to be used. By default, it will use the remaining DataFrame numeric columns. **kwds Keyword arguments to pass on to :meth:`DataFrame.plot`. Returns ------- :class:`matplotlib.axes.Axes` or :class:`numpy.ndarray` Return an ndarray when ``subplots=True``. See Also -------- matplotlib.pyplot.plot : Plot y versus x as lines and/or markers. Examples -------- .. plot:: :context: close-figs The following example shows the populations for some animals over the years. >>> df = pd.DataFrame({ ... 'pig': [20, 18, 489, 675, 1776], ... 'horse': [4, 25, 281, 600, 1900] ... }, index=[1990, 1997, 2003, 2009, 2014]) >>> lines = df.plot.line() .. plot:: :context: close-figs An example with subplots, so an array of axes is returned. >>> axes = df.plot.line(subplots=True) >>> type(axes) <class 'numpy.ndarray'> .. plot:: :context: close-figs The following example shows the relationship between both populations. >>> lines = df.plot.line(x='pig', y='horse') """
return self(kind='line', x=x, y=y, **kwds)
<SYSTEM_TASK:> Vertical bar plot. <END_TASK> <USER_TASK:> Description: def bar(self, x=None, y=None, **kwds): """ Vertical bar plot. A bar plot is a plot that presents categorical data with rectangular bars with lengths proportional to the values that they represent. A bar plot shows comparisons among discrete categories. One axis of the plot shows the specific categories being compared, and the other axis represents a measured value. Parameters ---------- x : label or position, optional Allows plotting of one column versus another. If not specified, the index of the DataFrame is used. y : label or position, optional Allows plotting of one column versus another. If not specified, all numerical columns are used. **kwds Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.axes.Axes or np.ndarray of them An ndarray is returned with one :class:`matplotlib.axes.Axes` per column when ``subplots=True``. See Also -------- DataFrame.plot.barh : Horizontal bar plot. DataFrame.plot : Make plots of a DataFrame. matplotlib.pyplot.bar : Make a bar plot with matplotlib. Examples -------- Basic plot. .. plot:: :context: close-figs >>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) >>> ax = df.plot.bar(x='lab', y='val', rot=0) Plot a whole dataframe to a bar plot. Each column is assigned a distinct color, and each row is nested in a group along the horizontal axis. .. plot:: :context: close-figs >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.bar(rot=0) Instead of nesting, the figure can be split by column with ``subplots=True``. In this case, a :class:`numpy.ndarray` of :class:`matplotlib.axes.Axes` are returned. .. plot:: :context: close-figs >>> axes = df.plot.bar(rot=0, subplots=True) >>> axes[1].legend(loc=2) # doctest: +SKIP Plot a single column. .. plot:: :context: close-figs >>> ax = df.plot.bar(y='speed', rot=0) Plot only selected categories for the DataFrame. .. plot:: :context: close-figs >>> ax = df.plot.bar(x='lifespan', rot=0) """
return self(kind='bar', x=x, y=y, **kwds)
<SYSTEM_TASK:> Make a horizontal bar plot. <END_TASK> <USER_TASK:> Description: def barh(self, x=None, y=None, **kwds): """ Make a horizontal bar plot. A horizontal bar plot is a plot that presents quantitative data with rectangular bars with lengths proportional to the values that they represent. A bar plot shows comparisons among discrete categories. One axis of the plot shows the specific categories being compared, and the other axis represents a measured value. Parameters ---------- x : label or position, default DataFrame.index Column to be used for categories. y : label or position, default All numeric columns in dataframe Columns to be plotted from the DataFrame. **kwds Keyword arguments to pass on to :meth:`DataFrame.plot`. Returns ------- :class:`matplotlib.axes.Axes` or numpy.ndarray of them See Also -------- DataFrame.plot.bar: Vertical bar plot. DataFrame.plot : Make plots of DataFrame using matplotlib. matplotlib.axes.Axes.bar : Plot a vertical bar plot using matplotlib. Examples -------- Basic example .. plot:: :context: close-figs >>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]}) >>> ax = df.plot.barh(x='lab', y='val') Plot a whole DataFrame to a horizontal bar plot .. plot:: :context: close-figs >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.barh() Plot a column of the DataFrame to a horizontal bar plot .. plot:: :context: close-figs >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.barh(y='speed') Plot DataFrame versus the desired column .. plot:: :context: close-figs >>> speed = [0.1, 17.5, 40, 48, 52, 69, 88] >>> lifespan = [2, 8, 70, 1.5, 25, 12, 28] >>> index = ['snail', 'pig', 'elephant', ... 'rabbit', 'giraffe', 'coyote', 'horse'] >>> df = pd.DataFrame({'speed': speed, ... 'lifespan': lifespan}, index=index) >>> ax = df.plot.barh(x='lifespan') """
return self(kind='barh', x=x, y=y, **kwds)
<SYSTEM_TASK:> Draw one histogram of the DataFrame's columns. <END_TASK> <USER_TASK:> Description: def hist(self, by=None, bins=10, **kwds): """ Draw one histogram of the DataFrame's columns. A histogram is a representation of the distribution of data. This function groups the values of all given Series in the DataFrame into bins and draws all bins in one :class:`matplotlib.axes.Axes`. This is useful when the DataFrame's Series are in a similar scale. Parameters ---------- by : str or sequence, optional Column in the DataFrame to group by. bins : int, default 10 Number of histogram bins to be used. **kwds Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- class:`matplotlib.AxesSubplot` Return a histogram plot. See Also -------- DataFrame.hist : Draw histograms per DataFrame's Series. Series.hist : Draw a histogram with Series' data. Examples -------- When we draw a dice 6000 times, we expect to get each value around 1000 times. But when we draw two dices and sum the result, the distribution is going to be quite different. A histogram illustrates those distributions. .. plot:: :context: close-figs >>> df = pd.DataFrame( ... np.random.randint(1, 7, 6000), ... columns = ['one']) >>> df['two'] = df['one'] + np.random.randint(1, 7, 6000) >>> ax = df.plot.hist(bins=12, alpha=0.5) """
return self(kind='hist', by=by, bins=bins, **kwds)
<SYSTEM_TASK:> Draw a stacked area plot. <END_TASK> <USER_TASK:> Description: def area(self, x=None, y=None, **kwds): """ Draw a stacked area plot. An area plot displays quantitative data visually. This function wraps the matplotlib area function. Parameters ---------- x : label or position, optional Coordinates for the X axis. By default uses the index. y : label or position, optional Column to plot. By default uses all columns. stacked : bool, default True Area plots are stacked by default. Set to False to create a unstacked plot. **kwds : optional Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.axes.Axes or numpy.ndarray Area plot, or array of area plots if subplots is True. See Also -------- DataFrame.plot : Make plots of DataFrame using matplotlib / pylab. Examples -------- Draw an area plot based on basic business metrics: .. plot:: :context: close-figs >>> df = pd.DataFrame({ ... 'sales': [3, 2, 3, 9, 10, 6], ... 'signups': [5, 5, 6, 12, 14, 13], ... 'visits': [20, 42, 28, 62, 81, 50], ... }, index=pd.date_range(start='2018/01/01', end='2018/07/01', ... freq='M')) >>> ax = df.plot.area() Area plots are stacked by default. To produce an unstacked plot, pass ``stacked=False``: .. plot:: :context: close-figs >>> ax = df.plot.area(stacked=False) Draw an area plot for a single column: .. plot:: :context: close-figs >>> ax = df.plot.area(y='sales') Draw with a different `x`: .. plot:: :context: close-figs >>> df = pd.DataFrame({ ... 'sales': [3, 2, 3], ... 'visits': [20, 42, 28], ... 'day': [1, 2, 3], ... }) >>> ax = df.plot.area(x='day') """
return self(kind='area', x=x, y=y, **kwds)
<SYSTEM_TASK:> Create a scatter plot with varying marker point size and color. <END_TASK> <USER_TASK:> Description: def scatter(self, x, y, s=None, c=None, **kwds): """ Create a scatter plot with varying marker point size and color. The coordinates of each point are defined by two dataframe columns and filled circles are used to represent each point. This kind of plot is useful to see complex correlations between two variables. Points could be for instance natural 2D coordinates like longitude and latitude in a map or, in general, any pair of metrics that can be plotted against each other. Parameters ---------- x : int or str The column name or column position to be used as horizontal coordinates for each point. y : int or str The column name or column position to be used as vertical coordinates for each point. s : scalar or array_like, optional The size of each point. Possible values are: - A single scalar so all points have the same size. - A sequence of scalars, which will be used for each point's size recursively. For instance, when passing [2,14] all points size will be either 2 or 14, alternatively. c : str, int or array_like, optional The color of each point. Possible values are: - A single color string referred to by name, RGB or RGBA code, for instance 'red' or '#a98d19'. - A sequence of color strings referred to by name, RGB or RGBA code, which will be used for each point's color recursively. For instance ['green','yellow'] all points will be filled in green or yellow, alternatively. - A column name or position whose values will be used to color the marker points according to a colormap. **kwds Keyword arguments to pass on to :meth:`DataFrame.plot`. Returns ------- :class:`matplotlib.axes.Axes` or numpy.ndarray of them See Also -------- matplotlib.pyplot.scatter : Scatter plot using multiple input data formats. Examples -------- Let's see how to draw a scatter plot using coordinates from the values in a DataFrame's columns. .. plot:: :context: close-figs >>> df = pd.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1], ... [6.4, 3.2, 1], [5.9, 3.0, 2]], ... columns=['length', 'width', 'species']) >>> ax1 = df.plot.scatter(x='length', ... y='width', ... c='DarkBlue') And now with the color determined by a column as well. .. plot:: :context: close-figs >>> ax2 = df.plot.scatter(x='length', ... y='width', ... c='species', ... colormap='viridis') """
return self(kind='scatter', x=x, y=y, c=c, s=s, **kwds)
<SYSTEM_TASK:> Generate a hexagonal binning plot. <END_TASK> <USER_TASK:> Description: def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwds): """ Generate a hexagonal binning plot. Generate a hexagonal binning plot of `x` versus `y`. If `C` is `None` (the default), this is a histogram of the number of occurrences of the observations at ``(x[i], y[i])``. If `C` is specified, specifies values at given coordinates ``(x[i], y[i])``. These values are accumulated for each hexagonal bin and then reduced according to `reduce_C_function`, having as default the NumPy's mean function (:meth:`numpy.mean`). (If `C` is specified, it must also be a 1-D sequence of the same length as `x` and `y`, or a column label.) Parameters ---------- x : int or str The column label or position for x points. y : int or str The column label or position for y points. C : int or str, optional The column label or position for the value of `(x, y)` point. reduce_C_function : callable, default `np.mean` Function of one argument that reduces all the values in a bin to a single number (e.g. `np.mean`, `np.max`, `np.sum`, `np.std`). gridsize : int or tuple of (int, int), default 100 The number of hexagons in the x-direction. The corresponding number of hexagons in the y-direction is chosen in a way that the hexagons are approximately regular. Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons in the x-direction and the y-direction. **kwds Additional keyword arguments are documented in :meth:`DataFrame.plot`. Returns ------- matplotlib.AxesSubplot The matplotlib ``Axes`` on which the hexbin is plotted. See Also -------- DataFrame.plot : Make plots of a DataFrame. matplotlib.pyplot.hexbin : Hexagonal binning plot using matplotlib, the matplotlib function that is used under the hood. Examples -------- The following examples are generated with random data from a normal distribution. .. plot:: :context: close-figs >>> n = 10000 >>> df = pd.DataFrame({'x': np.random.randn(n), ... 'y': np.random.randn(n)}) >>> ax = df.plot.hexbin(x='x', y='y', gridsize=20) The next example uses `C` and `np.sum` as `reduce_C_function`. Note that `'observations'` values ranges from 1 to 5 but the result plot shows values up to more than 25. This is because of the `reduce_C_function`. .. plot:: :context: close-figs >>> n = 500 >>> df = pd.DataFrame({ ... 'coord_x': np.random.uniform(-3, 3, size=n), ... 'coord_y': np.random.uniform(30, 50, size=n), ... 'observations': np.random.randint(1,5, size=n) ... }) >>> ax = df.plot.hexbin(x='coord_x', ... y='coord_y', ... C='observations', ... reduce_C_function=np.sum, ... gridsize=10, ... cmap="viridis") """
if reduce_C_function is not None:
    kwds['reduce_C_function'] = reduce_C_function
if gridsize is not None:
    kwds['gridsize'] = gridsize

return self(kind='hexbin', x=x, y=y, C=C, **kwds)
<SYSTEM_TASK:>
Return the union or intersection of indexes.
<END_TASK>
<USER_TASK:>
Description:
def _get_combined_index(indexes, intersect=False, sort=False):
    """
    Return the union or intersection of indexes.

    Parameters
    ----------
    indexes : list of Index or list objects
        When intersect=True, do not accept list of lists.
    intersect : bool, default False
        If True, calculate the intersection between indexes. Otherwise,
        calculate the union.
    sort : bool, default False
        Whether the result index should come out sorted or not.

    Returns
    -------
    Index
    """
# TODO: handle index names!
indexes = _get_distinct_objs(indexes)
if len(indexes) == 0:
    index = Index([])
elif len(indexes) == 1:
    index = indexes[0]
elif intersect:
    index = indexes[0]
    for other in indexes[1:]:
        index = index.intersection(other)
else:
    index = _union_indexes(indexes, sort=sort)
    index = ensure_index(index)

if sort:
    try:
        index = index.sort_values()
    except TypeError:
        pass
return index
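The combining behaviour is also visible through the public Index API, which this helper builds on; a short usage example:

import pandas as pd

a = pd.Index([1, 2, 3])
b = pd.Index([2, 3, 4])

print(list(a.union(b)))          # [1, 2, 3, 4]
print(list(a.intersection(b)))   # [2, 3]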
<SYSTEM_TASK:>
Return the union of indexes.
<END_TASK>
<USER_TASK:>
Description:
def _union_indexes(indexes, sort=True):
    """
    Return the union of indexes.

    The behavior of sort and names is not consistent.

    Parameters
    ----------
    indexes : list of Index or list objects
    sort : bool, default True
        Whether the result index should come out sorted or not.

    Returns
    -------
    Index
    """
if len(indexes) == 0:
    raise AssertionError('Must have at least 1 Index to union')
if len(indexes) == 1:
    result = indexes[0]
    if isinstance(result, list):
        result = Index(sorted(result))
    return result

indexes, kind = _sanitize_and_check(indexes)

def _unique_indices(inds):
    """
    Convert indexes to lists and concatenate them, removing duplicates.

    The final dtype is inferred.

    Parameters
    ----------
    inds : list of Index or list objects

    Returns
    -------
    Index
    """
    def conv(i):
        if isinstance(i, Index):
            i = i.tolist()
        return i

    return Index(
        lib.fast_unique_multiple_list([conv(i) for i in inds], sort=sort))

if kind == 'special':
    result = indexes[0]

    if hasattr(result, 'union_many'):
        return result.union_many(indexes[1:])
    else:
        for other in indexes[1:]:
            result = result.union(other)
        return result
elif kind == 'array':
    index = indexes[0]
    for other in indexes[1:]:
        if not index.equals(other):

            if sort is None:
                # TODO: remove once pd.concat sort default changes
                warnings.warn(_sort_msg, FutureWarning, stacklevel=8)
                sort = True

            return _unique_indices(indexes)

    name = _get_consensus_names(indexes)[0]
    if name != index.name:
        index = index._shallow_copy(name=name)
    return index
else:  # kind='list'
    return _unique_indices(indexes)
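For the plain 'array' path, the same behaviour can be observed through the public Index.union: differing indexes are unioned (sorted by default) and a shared name is kept when the inputs agree. A small illustration:

import pandas as pd

idx1 = pd.Index(['b', 'a'], name='letters')
idx2 = pd.Index(['a', 'c'], name='letters')

combined = idx1.union(idx2)
print(list(combined))   # ['a', 'b', 'c'] -- union is sorted
print(combined.name)    # 'letters' -- shared name is preserved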