Dataset columns (viewer summary):
  code       string, lengths 26 to 870k
  docstring  string, lengths 1 to 65.6k
  func_name  string, lengths 1 to 194
  language   string, 1 class
  repo       string, lengths 8 to 68
  path       string, lengths 5 to 194
  url        string, lengths 46 to 254
  license    string, 4 classes
def maybe_name_or_idx(idx, model):
    """
    Give a name or an integer and return the name and integer location
    of the column in a design matrix.
    """
    if idx is None:
        idx = lrange(model.exog.shape[1])
    if isinstance(idx, int):
        exog_name = model.exog_names[idx]
        exog_idx = idx
    # anticipate index as list and recurse
    elif isinstance(idx, (tuple, list)):
        exog_name = []
        exog_idx = []
        for item in idx:
            exog_name_item, exog_idx_item = maybe_name_or_idx(item, model)
            exog_name.append(exog_name_item)
            exog_idx.append(exog_idx_item)
    else:  # assume we've got a string variable
        exog_name = idx
        exog_idx = model.exog_names.index(idx)
    return exog_name, exog_idx
Give a name or an integer and return the name and integer location of the column in a design matrix.
maybe_name_or_idx
python
statsmodels/statsmodels
statsmodels/graphics/utils.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/utils.py
BSD-3-Clause
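The dispatch in `maybe_name_or_idx` (None selects all columns, int looks up a name, a sequence recurses, a string looks up an index) can be sketched against a minimal stand-in model. `FakeModel` and `resolve` below are hypothetical names for illustration, not statsmodels API:

```python
import numpy as np

class FakeModel:
    # Stand-in exposing only the two attributes the helper touches.
    def __init__(self, names):
        self.exog_names = names
        self.exog = np.zeros((5, len(names)))

def resolve(idx, model):
    # Mirrors maybe_name_or_idx: accept None, int, str, or a sequence.
    if idx is None:
        idx = list(range(model.exog.shape[1]))
    if isinstance(idx, int):
        return model.exog_names[idx], idx
    if isinstance(idx, (tuple, list)):
        pairs = [resolve(item, model) for item in idx]
        return [p[0] for p in pairs], [p[1] for p in pairs]
    return idx, model.exog_names.index(idx)

model = FakeModel(["const", "age", "income"])
print(resolve("age", model))   # ('age', 1)
print(resolve(None, model))    # (['const', 'age', 'income'], [0, 1, 2])
```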
def get_data_names(series_or_dataframe):
    """
    Input can be an array or pandas-like. Will handle 1d array-like but
    not 2d. Returns a str for 1d data or a list of strings for 2d data.
    """
    names = getattr(series_or_dataframe, 'name', None)
    if not names:
        names = getattr(series_or_dataframe, 'columns', None)
    if not names:
        shape = getattr(series_or_dataframe, 'shape', [1])
        nvars = 1 if len(shape) == 1 else series_or_dataframe.shape[1]
        # Bug fix: the original built a list of identical "X%d" format
        # strings; fill in the column index instead.
        names = ["X%d" % i for i in range(nvars)]
        if nvars == 1:
            names = names[0]
    else:
        names = names.tolist()
    return names
Input can be an array or pandas-like. Will handle 1d array-like but not 2d. Returns a str for 1d data or a list of strings for 2d data.
get_data_names
python
statsmodels/statsmodels
statsmodels/graphics/utils.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/utils.py
BSD-3-Clause
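The fallback chain in `get_data_names` (Series-like `.name`, then DataFrame-like `.columns`, then names synthesized from the shape) can be sketched with a simplified stand-in; `data_names` below is illustrative, not the statsmodels function:

```python
import numpy as np

def data_names(obj):
    # Probe the same duck-typed attributes get_data_names uses.
    names = getattr(obj, "name", None)
    if names is None:
        names = getattr(obj, "columns", None)
    if names is None:
        shape = getattr(obj, "shape", (1,))
        nvars = 1 if len(shape) == 1 else shape[1]
        names = ["X%d" % i for i in range(nvars)]
        # str for 1d data, list of strs for 2d data
        return names[0] if nvars == 1 else names
    return names if isinstance(names, str) else list(names)

print(data_names(np.zeros(4)))       # 'X0'
print(data_names(np.zeros((4, 3))))  # ['X0', 'X1', 'X2']
```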
def annotate_axes(index, labels, points, offset_points, size, ax, **kwargs):
    """
    Annotate Axes with labels, points, offset_points according to the
    given index.
    """
    for i in index:
        label = labels[i]
        point = points[i]
        offset = offset_points[i]
        ax.annotate(label, point, xytext=offset,
                    textcoords="offset points", size=size, **kwargs)
    return ax
Annotate Axes with labels, points, offset_points according to the given index.
annotate_axes
python
statsmodels/statsmodels
statsmodels/graphics/utils.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/utils.py
BSD-3-Clause
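`annotate_axes` is a thin loop over `Axes.annotate`; a minimal self-contained version of the same call pattern, using matplotlib's off-screen Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
labels = ["a", "b", "c"]
points = [(0, 0), (1, 1), (2, 0)]
offsets = [(5, 5), (5, 5), (5, 5)]

# The same Axes.annotate call annotate_axes issues for each selected
# index (here annotating only indices 0 and 2).
for i in (0, 2):
    ax.annotate(labels[i], points[i], xytext=offsets[i],
                textcoords="offset points", size="small")
```

Annotations land in `ax.texts`, which makes the effect easy to inspect.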
def dot_plot(points, intervals=None, lines=None, sections=None,
             styles=None, marker_props=None, line_props=None,
             split_names=None, section_order=None, line_order=None,
             stacked=False, styles_order=None, striped=False,
             horizontal=True, show_names="both",
             fmt_left_name=None, fmt_right_name=None,
             show_section_titles=None, ax=None):
    """
    Dot plotting (also known as forest and blobbogram).

    Produce a dotplot similar in style to those in Cleveland's
    "Visualizing Data" book ([1]_).  These are also known as "forest
    plots".

    Parameters
    ----------
    points : array_like
        The quantitative values to be plotted as markers.
    intervals : array_like
        The intervals to be plotted around the points.  The elements
        of `intervals` are either scalars or sequences of length 2.  A
        scalar indicates the half width of a symmetric interval.  A
        sequence of length 2 contains the left and right half-widths
        (respectively) of a nonsymmetric interval.  If None, no
        intervals are drawn.
    lines : array_like
        A grouping variable indicating which points/intervals are
        drawn on a common line.  If None, each point/interval appears
        on its own line.
    sections : array_like
        A grouping variable indicating which lines are grouped into
        sections.  If None, everything is drawn in a single section.
    styles : array_like
        A grouping label defining the plotting style of the markers
        and intervals.
    marker_props : dict
        A dictionary mapping style codes (the values in `styles`) to
        dictionaries defining key/value pairs to be passed as keyword
        arguments to `plot` when plotting markers.  Useful keyword
        arguments are "color", "marker", and "ms" (marker size).
    line_props : dict
        A dictionary mapping style codes (the values in `styles`) to
        dictionaries defining key/value pairs to be passed as keyword
        arguments to `plot` when plotting interval lines.  Useful
        keyword arguments are "color", "linestyle", "solid_capstyle",
        and "linewidth".
    split_names : str
        If not None, this is used to split the values of `lines` into
        substrings that are drawn in the left and right margins,
        respectively.  If None, the values of `lines` are drawn in the
        left margin.
    section_order : array_like
        The section labels in the order in which they appear in the
        dotplot.
    line_order : array_like
        The line labels in the order in which they appear in the
        dotplot.
    stacked : bool
        If True, when multiple points or intervals are drawn on the
        same line, they are offset from each other.
    styles_order : array_like
        If stacked=True, this is the order in which the point styles
        on a given line are drawn from top to bottom (if horizontal
        is True) or from left to right (if horizontal is False).  If
        None (default), the order is lexical.
    striped : bool
        If True, every other line is enclosed in a shaded box.
    horizontal : bool
        If True (default), the lines are drawn horizontally, otherwise
        they are drawn vertically.
    show_names : str
        Determines whether labels (names) are shown in the left
        and/or right margins (top/bottom margins if `horizontal` is
        False).  If `both`, labels are drawn in both margins, if
        'left', labels are drawn in the left or top margin.  If
        `right`, labels are drawn in the right or bottom margin.
    fmt_left_name : callable
        The left/top margin names are passed through this function
        before drawing on the plot.
    fmt_right_name : callable
        The right/bottom margin names are passed through this function
        before drawing on the plot.
    show_section_titles : bool or None
        If None, section titles are drawn only if there is more than
        one section.  If False/True, section titles are never/always
        drawn, respectively.
    ax : matplotlib.axes
        The axes on which the dotplot is drawn.  If None, a new axes
        is created.

    Returns
    -------
    fig : Figure
        The figure given by `ax.figure` or a new instance.

    Notes
    -----
    `points`, `intervals`, `lines`, `sections`, `styles` must all have
    the same length whenever present.

    References
    ----------
    .. [1] Cleveland, William S. (1993). "Visualizing Data". Hobart
       Press.
    .. [2] Jacoby, William G. (2006) "The Dot Plot: A Graphical
       Display for Labeled Quantitative Values." The Political
       Methodologist 14(1): 6-14.

    Examples
    --------
    This is a simple dotplot with one point per line:

    >>> dot_plot(points=point_values)

    This dotplot has labels on the lines (if elements in
    `label_values` are repeated, the corresponding points appear on
    the same line):

    >>> dot_plot(points=point_values, lines=label_values)
    """
    import matplotlib.transforms as transforms

    fig, ax = utils.create_mpl_ax(ax)

    # Convert to numpy arrays if that is not what we are given.
    points = np.asarray(points)

    def asarray_or_none(x):
        return None if x is None else np.asarray(x)

    intervals = asarray_or_none(intervals)
    lines = asarray_or_none(lines)
    sections = asarray_or_none(sections)
    styles = asarray_or_none(styles)

    # Total number of points
    npoint = len(points)

    # Set default line values if needed
    if lines is None:
        lines = np.arange(npoint)

    # Set default section values if needed
    if sections is None:
        sections = np.zeros(npoint)

    # Set default style values if needed
    if styles is None:
        styles = np.zeros(npoint)

    # The vertical space (in inches) for a section title
    section_title_space = 0.5

    # The number of sections
    nsect = len(set(sections))
    if section_order is not None:
        nsect = len(set(section_order))

    # The number of section titles
    if show_section_titles is False:
        draw_section_titles = False
        nsect_title = 0
    elif show_section_titles is True:
        draw_section_titles = True
        nsect_title = nsect
    else:
        draw_section_titles = nsect > 1
        nsect_title = nsect if nsect > 1 else 0

    # The total vertical space devoted to section titles.
    # Unused, commented out
    # section_title_space * nsect_title

    # Add a bit of room so that points that fall at the axis limits
    # are not cut in half.
    ax.set_xmargin(0.02)
    ax.set_ymargin(0.02)

    if section_order is None:
        lines0 = list(set(sections))
        lines0.sort()
    else:
        lines0 = section_order

    if line_order is None:
        lines1 = list(set(lines))
        lines1.sort()
    else:
        lines1 = line_order

    # A map from (section, line) codes to index positions.
    lines_map = {}
    for i in range(npoint):
        if section_order is not None and sections[i] not in section_order:
            continue
        if line_order is not None and lines[i] not in line_order:
            continue
        ky = (sections[i], lines[i])
        if ky not in lines_map:
            lines_map[ky] = []
        lines_map[ky].append(i)

    # Get the size of the axes on the parent figure in inches
    bbox = ax.get_window_extent().transformed(
        fig.dpi_scale_trans.inverted())
    awidth, aheight = bbox.width, bbox.height

    # The number of lines in the plot.
    nrows = len(lines_map)

    # The positions of the lowest and highest guideline in axes
    # coordinates (for horizontal dotplots), or the leftmost and
    # rightmost guidelines (for vertical dotplots).
    bottom, top = 0, 1

    if horizontal:
        # x coordinate is data, y coordinate is axes
        trans = transforms.blended_transform_factory(ax.transData,
                                                     ax.transAxes)
    else:
        # x coordinate is axes, y coordinate is data
        trans = transforms.blended_transform_factory(ax.transAxes,
                                                     ax.transData)

    # Space used for a section title, in axes coordinates
    title_space_axes = section_title_space / aheight

    # Space between lines
    if horizontal:
        dpos = (top - bottom - nsect_title*title_space_axes) /\
            float(nrows)
    else:
        dpos = (top - bottom) / float(nrows)

    # Determine the spacing for stacked points
    if styles_order is not None:
        style_codes = styles_order
    else:
        style_codes = list(set(styles))
        style_codes.sort()

    # Order is top to bottom for horizontal plots, so need to
    # flip.
    if horizontal:
        style_codes = style_codes[::-1]

    # nval is the maximum number of points on one line.
    nval = len(style_codes)
    if nval > 1:
        stackd = dpos / (2.5*(float(nval)-1))
    else:
        stackd = 0.

    # Map from style code to its integer position
    style_codes_map = {x: style_codes.index(x) for x in style_codes}

    # Setup default marker styles
    colors = ["r", "g", "b", "y", "k", "purple", "orange"]
    if marker_props is None:
        marker_props = {x: {} for x in style_codes}
    for j in range(nval):
        sc = style_codes[j]
        if "color" not in marker_props[sc]:
            marker_props[sc]["color"] = colors[j % len(colors)]
        if "marker" not in marker_props[sc]:
            marker_props[sc]["marker"] = "o"
        if "ms" not in marker_props[sc]:
            marker_props[sc]["ms"] = 10 if stackd == 0 else 6

    # Setup default line styles
    if line_props is None:
        line_props = {x: {} for x in style_codes}
    for j in range(nval):
        sc = style_codes[j]
        if "color" not in line_props[sc]:
            line_props[sc]["color"] = "grey"
        if "linewidth" not in line_props[sc]:
            line_props[sc]["linewidth"] = 2 if stackd > 0 else 8

    if horizontal:
        # The vertical position of the first line.
        pos = top - dpos/2 if nsect == 1 else top
    else:
        # The horizontal position of the first line.
        pos = bottom + dpos/2

    # Points that have already been labeled
    labeled = set()

    # Positions of the y axis grid lines
    ticks = []

    # Loop through the sections
    for k0 in lines0:

        # Draw a section title
        if draw_section_titles:
            if horizontal:
                y0 = pos + dpos/2 if k0 == lines0[0] else pos
                ax.fill_between((0, 1), (y0, y0),
                                (pos-0.7*title_space_axes,
                                 pos-0.7*title_space_axes),
                                color='darkgrey',
                                transform=ax.transAxes,
                                zorder=1)
                txt = ax.text(0.5, pos - 0.35*title_space_axes, k0,
                              horizontalalignment='center',
                              verticalalignment='center',
                              transform=ax.transAxes)
                txt.set_fontweight("bold")
                pos -= title_space_axes
            else:
                m = len([k for k in lines_map if k[0] == k0])
                ax.fill_between((pos-dpos/2+0.01,
                                 pos+(m-1)*dpos+dpos/2-0.01),
                                (1.01, 1.01), (1.06, 1.06),
                                color='darkgrey',
                                transform=ax.transAxes,
                                zorder=1, clip_on=False)
                txt = ax.text(pos + (m-1)*dpos/2, 1.02, k0,
                              horizontalalignment='center',
                              verticalalignment='bottom',
                              transform=ax.transAxes)
                txt.set_fontweight("bold")

        jrow = 0
        for k1 in lines1:

            # No data to plot
            if (k0, k1) not in lines_map:
                continue

            # Draw the guideline
            if horizontal:
                ax.axhline(pos, color='grey')
            else:
                ax.axvline(pos, color='grey')

            # Set up the labels
            if split_names is not None:
                us = k1.split(split_names)
                if len(us) >= 2:
                    left_label, right_label = us[0], us[1]
                else:
                    left_label, right_label = k1, None
            else:
                left_label, right_label = k1, None

            if fmt_left_name is not None:
                left_label = fmt_left_name(left_label)

            if fmt_right_name is not None:
                right_label = fmt_right_name(right_label)

            # Draw the stripe
            if striped and jrow % 2 == 0:
                if horizontal:
                    ax.fill_between((0, 1),
                                    (pos-dpos/2, pos-dpos/2),
                                    (pos+dpos/2, pos+dpos/2),
                                    color='lightgrey',
                                    transform=ax.transAxes,
                                    zorder=0)
                else:
                    ax.fill_between((pos-dpos/2, pos+dpos/2),
                                    (0, 0), (1, 1),
                                    color='lightgrey',
                                    transform=ax.transAxes,
                                    zorder=0)

            jrow += 1

            # Draw the left margin label
            if show_names.lower() in ("left", "both"):
                if horizontal:
                    ax.text(-0.1/awidth, pos, left_label,
                            horizontalalignment="right",
                            verticalalignment='center',
                            transform=ax.transAxes,
                            family='monospace')
                else:
                    ax.text(pos, -0.1/aheight, left_label,
                            horizontalalignment="center",
                            verticalalignment='top',
                            transform=ax.transAxes,
                            family='monospace')

            # Draw the right margin label
            if show_names.lower() in ("right", "both"):
                if right_label is not None:
                    if horizontal:
                        ax.text(1 + 0.1/awidth, pos, right_label,
                                horizontalalignment="left",
                                verticalalignment='center',
                                transform=ax.transAxes,
                                family='monospace')
                    else:
                        ax.text(pos, 1 + 0.1/aheight, right_label,
                                horizontalalignment="center",
                                verticalalignment='bottom',
                                transform=ax.transAxes,
                                family='monospace')

            # Save the vertical position so that we can place the
            # tick marks
            ticks.append(pos)

            # Loop over the points in one line
            for ji, jp in enumerate(lines_map[(k0, k1)]):

                # Calculate the vertical offset
                yo = 0
                if stacked:
                    yo = -dpos/5 + style_codes_map[styles[jp]]*stackd

                pt = points[jp]

                # Plot the interval
                if intervals is not None:

                    # Symmetric interval
                    if np.isscalar(intervals[jp]):
                        lcb, ucb = pt - intervals[jp],\
                            pt + intervals[jp]

                    # Nonsymmetric interval
                    else:
                        lcb, ucb = pt - intervals[jp][0],\
                            pt + intervals[jp][1]

                    # Draw the interval
                    if horizontal:
                        ax.plot([lcb, ucb], [pos+yo, pos+yo], '-',
                                transform=trans,
                                **line_props[styles[jp]])
                    else:
                        ax.plot([pos+yo, pos+yo], [lcb, ucb], '-',
                                transform=trans,
                                **line_props[styles[jp]])

                # Plot the point
                sl = styles[jp]
                sll = sl if sl not in labeled else None
                labeled.add(sl)
                if horizontal:
                    ax.plot([pt, ], [pos+yo, ], ls='None',
                            transform=trans, label=sll,
                            **marker_props[sl])
                else:
                    ax.plot([pos+yo, ], [pt, ], ls='None',
                            transform=trans, label=sll,
                            **marker_props[sl])

            if horizontal:
                pos -= dpos
            else:
                pos += dpos

    # Set up the axis
    if horizontal:
        ax.xaxis.set_ticks_position("bottom")
        ax.yaxis.set_ticks_position("none")
        ax.set_yticklabels([])
        ax.spines['left'].set_color('none')
        ax.spines['right'].set_color('none')
        ax.spines['top'].set_color('none')
        ax.spines['bottom'].set_position(('axes', -0.1/aheight))
        ax.set_ylim(0, 1)
        ax.yaxis.set_ticks(ticks)
        ax.autoscale_view(scaley=False, tight=True)
    else:
        ax.yaxis.set_ticks_position("left")
        ax.xaxis.set_ticks_position("none")
        ax.set_xticklabels([])
        ax.spines['bottom'].set_color('none')
        ax.spines['right'].set_color('none')
        ax.spines['top'].set_color('none')
        ax.spines['left'].set_position(('axes', -0.1/awidth))
        ax.set_xlim(0, 1)
        ax.xaxis.set_ticks(ticks)
        ax.autoscale_view(scalex=False, tight=True)

    return fig
Dot plotting (also known as forest and blobbogram). Produce a dotplot similar in style to those in Cleveland's "Visualizing Data" book ([1]_). These are also known as "forest plots". Parameters ---------- points : array_like The quantitative values to be plotted as markers. intervals : array_like The intervals to be plotted around the points. The elements of `intervals` are either scalars or sequences of length 2. A scalar indicates the half width of a symmetric interval. A sequence of length 2 contains the left and right half-widths (respectively) of a nonsymmetric interval. If None, no intervals are drawn. lines : array_like A grouping variable indicating which points/intervals are drawn on a common line. If None, each point/interval appears on its own line. sections : array_like A grouping variable indicating which lines are grouped into sections. If None, everything is drawn in a single section. styles : array_like A grouping label defining the plotting style of the markers and intervals. marker_props : dict A dictionary mapping style codes (the values in `styles`) to dictionaries defining key/value pairs to be passed as keyword arguments to `plot` when plotting markers. Useful keyword arguments are "color", "marker", and "ms" (marker size). line_props : dict A dictionary mapping style codes (the values in `styles`) to dictionaries defining key/value pairs to be passed as keyword arguments to `plot` when plotting interval lines. Useful keyword arguments are "color", "linestyle", "solid_capstyle", and "linewidth". split_names : str If not None, this is used to split the values of `lines` into substrings that are drawn in the left and right margins, respectively. If None, the values of `lines` are drawn in the left margin. section_order : array_like The section labels in the order in which they appear in the dotplot. line_order : array_like The line labels in the order in which they appear in the dotplot. 
stacked : bool If True, when multiple points or intervals are drawn on the same line, they are offset from each other. styles_order : array_like If stacked=True, this is the order in which the point styles on a given line are drawn from top to bottom (if horizontal is True) or from left to right (if horizontal is False). If None (default), the order is lexical. striped : bool If True, every other line is enclosed in a shaded box. horizontal : bool If True (default), the lines are drawn horizontally, otherwise they are drawn vertically. show_names : str Determines whether labels (names) are shown in the left and/or right margins (top/bottom margins if `horizontal` is False). If `both`, labels are drawn in both margins, if 'left', labels are drawn in the left or top margin. If `right`, labels are drawn in the right or bottom margin. fmt_left_name : callable The left/top margin names are passed through this function before drawing on the plot. fmt_right_name : callable The right/bottom margin names are passed through this function before drawing on the plot. show_section_titles : bool or None If None, section titles are drawn only if there is more than one section. If False/True, section titles are never/always drawn, respectively. ax : matplotlib.axes The axes on which the dotplot is drawn. If None, a new axes is created. Returns ------- fig : Figure The figure given by `ax.figure` or a new instance. Notes ----- `points`, `intervals`, `lines`, `sections`, `styles` must all have the same length whenever present. References ---------- .. [1] Cleveland, William S. (1993). "Visualizing Data". Hobart Press. .. [2] Jacoby, William G. (2006) "The Dot Plot: A Graphical Display for Labeled Quantitative Values." The Political Methodologist 14(1): 6-14. 
Examples -------- This is a simple dotplot with one point per line: >>> dot_plot(points=point_values) This dotplot has labels on the lines (if elements in `label_values` are repeated, the corresponding points appear on the same line): >>> dot_plot(points=point_values, lines=label_values)
dot_plot
python
statsmodels/statsmodels
statsmodels/graphics/dotplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/dotplots.py
BSD-3-Clause
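The `intervals` convention documented above (a scalar is a symmetric half-width, a length-2 sequence gives left and right half-widths) reduces to a small helper; `interval_bounds` is an illustrative name, not part of `dot_plot`:

```python
import numpy as np

def interval_bounds(point, interval):
    # Scalar: symmetric interval of half-width `interval`.
    if np.isscalar(interval):
        return point - interval, point + interval
    # Length-2 sequence: (left, right) half-widths.
    return point - interval[0], point + interval[1]

print(interval_bounds(5.0, 1.5))         # (3.5, 6.5)
print(interval_bounds(5.0, (1.0, 2.0)))  # (4.0, 7.0)
```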
def interaction_plot(x, trace, response, func="mean", ax=None, plottype='b',
                     xlabel=None, ylabel=None, colors=None, markers=None,
                     linestyles=None, legendloc='best', legendtitle=None,
                     **kwargs):
    """
    Interaction plot for factor level statistics.

    Note. If categorical factors are supplied, levels will be internally
    recoded to integers. This ensures matplotlib compatibility. Uses
    a DataFrame to calculate an `aggregate` statistic for each level of
    the factor or group given by `trace`.

    Parameters
    ----------
    x : array_like
        The `x` factor levels constitute the x-axis. If a `pandas.Series`
        is given its name will be used in `xlabel` if `xlabel` is None.
    trace : array_like
        The `trace` factor levels will be drawn as lines in the plot.
        If `trace` is a `pandas.Series` its name will be used as the
        `legendtitle` if `legendtitle` is None.
    response : array_like
        The response or dependent variable. If a `pandas.Series` is
        given its name will be used in `ylabel` if `ylabel` is None.
    func : function
        Anything accepted by `pandas.DataFrame.aggregate`. This is
        applied to the response variable grouped by the trace levels.
    ax : axes, optional
        Matplotlib axes instance
    plottype : str {'line', 'scatter', 'both'}, optional
        The type of plot to return. Can be 'l', 's', or 'b'
    xlabel : str, optional
        Label to use for `x`. Default is 'X'. If `x` is a
        `pandas.Series` it will use the series names.
    ylabel : str, optional
        Label to use for `response`. Default is 'func of response'. If
        `response` is a `pandas.Series` it will use the series names.
    colors : list, optional
        If given, must have length == number of levels in trace.
    markers : list, optional
        If given, must have length == number of levels in trace.
    linestyles : list, optional
        If given, must have length == number of levels in trace.
    legendloc : {None, str, int}
        Location passed to the legend command.
    legendtitle : {None, str}
        Title of the legend.
    **kwargs
        These will be passed to the plot command used, either plot or
        scatter. If you want to control the overall plotting options,
        use kwargs.

    Returns
    -------
    Figure
        The figure given by `ax.figure` or a new instance.

    Examples
    --------
    >>> import numpy as np
    >>> np.random.seed(12345)
    >>> weight = np.random.randint(1, 4, size=60)
    >>> duration = np.random.randint(1, 3, size=60)
    >>> days = np.log(np.random.randint(1, 30, size=60))
    >>> fig = interaction_plot(weight, duration, days,
    ...                        colors=['red', 'blue'],
    ...                        markers=['D', '^'], ms=10)
    >>> import matplotlib.pyplot as plt
    >>> plt.show()

    .. plot::

       import numpy as np
       from statsmodels.graphics.factorplots import interaction_plot
       np.random.seed(12345)
       weight = np.random.randint(1, 4, size=60)
       duration = np.random.randint(1, 3, size=60)
       days = np.log(np.random.randint(1, 30, size=60))
       fig = interaction_plot(weight, duration, days,
                              colors=['red', 'blue'],
                              markers=['D', '^'], ms=10)
    """
    from pandas import DataFrame
    fig, ax = utils.create_mpl_ax(ax)

    response_name = ylabel or getattr(response, 'name', 'response')
    func_name = getattr(func, "__name__", str(func))
    ylabel = f'{func_name} of {response_name}'
    xlabel = xlabel or getattr(x, 'name', 'X')
    legendtitle = legendtitle or getattr(trace, 'name', 'Trace')

    ax.set_ylabel(ylabel)
    ax.set_xlabel(xlabel)

    x_values = x_levels = None
    if isinstance(x[0], str):
        x_levels = [val for val in np.unique(x)]
        x_values = lrange(len(x_levels))
        x = _recode(x, dict(zip(x_levels, x_values)))

    data = DataFrame(dict(x=x, trace=trace, response=response))
    plot_data = data.groupby(['trace', 'x']).aggregate(func).reset_index()

    # check plot args
    n_trace = len(plot_data['trace'].unique())

    linestyles = ['-'] * n_trace if linestyles is None else linestyles
    markers = ['.'] * n_trace if markers is None else markers
    colors = rainbow(n_trace) if colors is None else colors

    if len(linestyles) != n_trace:
        raise ValueError("Must be a linestyle for each trace level")
    if len(markers) != n_trace:
        raise ValueError("Must be a marker for each trace level")
    if len(colors) != n_trace:
        raise ValueError("Must be a color for each trace level")

    if plottype == 'both' or plottype == 'b':
        for i, (values, group) in enumerate(plot_data.groupby('trace')):
            # trace label
            label = str(group['trace'].values[0])
            ax.plot(group['x'], group['response'], color=colors[i],
                    marker=markers[i], label=label,
                    linestyle=linestyles[i], **kwargs)
    elif plottype == 'line' or plottype == 'l':
        for i, (values, group) in enumerate(plot_data.groupby('trace')):
            # trace label
            label = str(group['trace'].values[0])
            ax.plot(group['x'], group['response'], color=colors[i],
                    label=label, linestyle=linestyles[i], **kwargs)
    elif plottype == 'scatter' or plottype == 's':
        for i, (values, group) in enumerate(plot_data.groupby('trace')):
            # trace label
            label = str(group['trace'].values[0])
            ax.scatter(group['x'], group['response'], color=colors[i],
                       label=label, marker=markers[i], **kwargs)
    else:
        raise ValueError("Plot type %s not understood" % plottype)

    ax.legend(loc=legendloc, title=legendtitle)
    ax.margins(.1)

    if all([x_levels, x_values]):
        ax.set_xticks(x_values)
        ax.set_xticklabels(x_levels)
    return fig
Interaction plot for factor level statistics. Note. If categorical factors are supplied, levels will be internally recoded to integers. This ensures matplotlib compatibility. Uses a DataFrame to calculate an `aggregate` statistic for each level of the factor or group given by `trace`. Parameters ---------- x : array_like The `x` factor levels constitute the x-axis. If a `pandas.Series` is given its name will be used in `xlabel` if `xlabel` is None. trace : array_like The `trace` factor levels will be drawn as lines in the plot. If `trace` is a `pandas.Series` its name will be used as the `legendtitle` if `legendtitle` is None. response : array_like The response or dependent variable. If a `pandas.Series` is given its name will be used in `ylabel` if `ylabel` is None. func : function Anything accepted by `pandas.DataFrame.aggregate`. This is applied to the response variable grouped by the trace levels. ax : axes, optional Matplotlib axes instance plottype : str {'line', 'scatter', 'both'}, optional The type of plot to return. Can be 'l', 's', or 'b' xlabel : str, optional Label to use for `x`. Default is 'X'. If `x` is a `pandas.Series` it will use the series names. ylabel : str, optional Label to use for `response`. Default is 'func of response'. If `response` is a `pandas.Series` it will use the series names. colors : list, optional If given, must have length == number of levels in trace. markers : list, optional If given, must have length == number of levels in trace. linestyles : list, optional If given, must have length == number of levels in trace. legendloc : {None, str, int} Location passed to the legend command. legendtitle : {None, str} Title of the legend. **kwargs These will be passed to the plot command used, either plot or scatter. If you want to control the overall plotting options, use kwargs. Returns ------- Figure The figure given by `ax.figure` or a new instance. 
Examples -------- >>> import numpy as np >>> np.random.seed(12345) >>> weight = np.random.randint(1,4,size=60) >>> duration = np.random.randint(1,3,size=60) >>> days = np.log(np.random.randint(1,30, size=60)) >>> fig = interaction_plot(weight, duration, days, ... colors=['red','blue'], markers=['D','^'], ms=10) >>> import matplotlib.pyplot as plt >>> plt.show() .. plot:: import numpy as np from statsmodels.graphics.factorplots import interaction_plot np.random.seed(12345) weight = np.random.randint(1,4,size=60) duration = np.random.randint(1,3,size=60) days = np.log(np.random.randint(1,30, size=60)) fig = interaction_plot(weight, duration, days, colors=['red','blue'], markers=['D','^'], ms=10) import matplotlib.pyplot as plt #plt.show()
interaction_plot
python
statsmodels/statsmodels
statsmodels/graphics/factorplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/factorplots.py
BSD-3-Clause
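The groupby-aggregate step at the heart of `interaction_plot` can be sketched without pandas; `aggregate` below is a hypothetical plain-Python stand-in for `DataFrame.groupby(['trace', 'x']).aggregate(func)`:

```python
from collections import defaultdict
from statistics import mean

def aggregate(x, trace, response, func=mean):
    # Collect response values per (trace, x) cell, then reduce each
    # cell with the aggregation function.
    groups = defaultdict(list)
    for xi, ti, ri in zip(x, trace, response):
        groups[(ti, xi)].append(ri)
    return {key: func(vals) for key, vals in sorted(groups.items())}

x = [1, 1, 2, 2, 1, 2]
trace = ["a", "b", "a", "b", "a", "a"]
response = [2.0, 4.0, 6.0, 8.0, 4.0, 10.0]
print(aggregate(x, trace, response))
# {('a', 1): 3.0, ('a', 2): 8.0, ('b', 1): 4.0, ('b', 2): 8.0}
```

Each key is one (trace level, x level) cell; each value becomes one plotted point on that trace's line.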
def _recode(x, levels):
    """
    Recode categorical data to int factor.

    Parameters
    ----------
    x : array_like
        Array-like object, supporting numpy array methods, of
        categorically coded data.
    levels : dict
        Mapping of labels to integer codings.

    Returns
    -------
    out : ndarray
    """
    from pandas import Series

    name = None
    index = None

    if isinstance(x, Series):
        name = x.name
        index = x.index
        x = x.values

    if x.dtype.type not in [np.str_, np.object_]:
        raise ValueError('This is not a categorical factor.'
                         ' Array of str type required.')
    elif not isinstance(levels, dict):
        raise ValueError('This is not a valid value for levels.'
                         ' Dict required.')
    elif not (np.unique(x) == np.unique(list(levels.keys()))).all():
        raise ValueError('The levels do not match the array values.')
    else:
        out = np.empty(x.shape[0], dtype=int)
        for level, coding in levels.items():
            out[x == level] = coding

    if name:
        out = Series(out, name=name, index=index)
    return out
Recode categorical data to int factor. Parameters ---------- x : array_like Array-like object, supporting numpy array methods, of categorically coded data. levels : dict Mapping of labels to integer codings. Returns ------- out : ndarray
_recode
python
statsmodels/statsmodels
statsmodels/graphics/factorplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/factorplots.py
BSD-3-Clause
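The elementwise label-to-code mapping that `_recode` performs reduces to a boolean-mask loop:

```python
import numpy as np

x = np.array(["low", "high", "low", "mid"], dtype=object)
levels = {"low": 0, "mid": 1, "high": 2}

# For each label, overwrite the matching positions with its code.
out = np.empty(x.shape[0], dtype=int)
for level, coding in levels.items():
    out[x == level] = coding

print(out.tolist())  # [0, 2, 0, 1]
```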
def plot_acf(
    x,
    ax=None,
    lags=None,
    *,
    alpha=0.05,
    use_vlines=True,
    adjusted=False,
    fft=False,
    missing="none",
    title="Autocorrelation",
    zero=True,
    auto_ylims=False,
    bartlett_confint=True,
    vlines_kwargs=None,
    **kwargs,
):
    """
    Plot the autocorrelation function

    Plots lags on the horizontal and the correlations on vertical axis.

    Parameters
    ----------
    x : array_like
        Array of time-series values
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    lags : {int, array_like}, optional
        An int or array of lag values, used on horizontal axis. Uses
        np.arange(lags) when lags is an int.  If not provided,
        ``lags=np.arange(len(corr))`` is used.
    alpha : scalar, optional
        If a number is given, the confidence intervals for the given
        level are returned. For instance if alpha=.05, 95 % confidence
        intervals are returned where the standard deviation is computed
        according to Bartlett's formula. The confidence intervals are
        centered at 0 to simplify detecting which estimated
        autocorrelations are significantly different from 0. If None,
        no confidence intervals are plotted.
    use_vlines : bool, optional
        If True, vertical lines and markers are plotted.  If False,
        only markers are plotted.  The default marker is 'o'; it can
        be overridden with a ``marker`` kwarg.
    adjusted : bool
        If True, then denominators for autocovariance are n-k,
        otherwise n
    fft : bool, optional
        If True, computes the ACF via FFT.
    missing : str, optional
        A string in ['none', 'raise', 'conservative', 'drop']
        specifying how the NaNs are to be treated.
    title : str, optional
        Title to place on plot.  Default is 'Autocorrelation'
    zero : bool, optional
        Flag indicating whether to include the 0-lag autocorrelation.
        Default is True.
    auto_ylims : bool, optional
        If True, adjusts automatically the y-axis limits to ACF values.
    bartlett_confint : bool, default True
        Confidence intervals for ACF values are generally placed at 2
        standard errors around r_k. The formula used for standard
        error depends upon the situation. If the autocorrelations are
        being used to test for randomness of residuals as part of the
        ARIMA routine, the standard errors are determined assuming the
        residuals are white noise. The approximate formula for any lag
        is that standard error of each r_k = 1/sqrt(N). See section
        9.4 of [1] for more details on the 1/sqrt(N) result. For more
        elementary discussion, see section 5.3.2 in [2]. For the ACF
        of raw data, the standard error at a lag k is found as if the
        right model was an MA(k-1). This allows the possible
        interpretation that if all autocorrelations past a certain lag
        are within the limits, the model might be an MA of order
        defined by the last significant autocorrelation. In this case,
        a moving average model is assumed for the data and the
        standard errors for the confidence intervals should be
        generated using Bartlett's formula. For more details on the
        Bartlett formula result, see section 7.2 in [1].
    vlines_kwargs : dict, optional
        Optional dictionary of keyword arguments that are passed to
        vlines.
    **kwargs : kwargs, optional
        Optional keyword arguments that are directly passed on to the
        Matplotlib ``plot`` and ``axhline`` functions.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.

    See Also
    --------
    statsmodels.tsa.stattools.acf
    matplotlib.pyplot.xcorr
    matplotlib.pyplot.acorr

    Notes
    -----
    Adapted from matplotlib's `xcorr`.

    Data are plotted as ``plot(lags, corr, **kwargs)``

    kwargs is used to pass matplotlib optional arguments to both the
    line tracing the autocorrelations and for the horizontal line at
    0. These options must be valid for a Line2D object.

    vlines_kwargs is used to pass additional optional arguments to the
    vertical lines connecting each autocorrelation to the axis.  These
    options must be valid for a LineCollection object.

    References
    ----------
    [1] Brockwell and Davis, 1987. Time Series Theory and Methods
    [2] Brockwell and Davis, 2010. Introduction to Time Series and
    Forecasting, 2nd edition.

    Examples
    --------
    >>> import pandas as pd
    >>> import matplotlib.pyplot as plt
    >>> import statsmodels.api as sm

    >>> dta = sm.datasets.sunspots.load_pandas().data
    >>> dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
    >>> del dta["YEAR"]
    >>> sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40)
    >>> plt.show()

    .. plot:: plots/graphics_tsa_plot_acf.py
    """
    fig, ax = utils.create_mpl_ax(ax)

    lags, nlags, irregular = _prepare_data_corr_plot(x, lags, zero)
    vlines_kwargs = {} if vlines_kwargs is None else vlines_kwargs

    confint = None
    # acf has different return type based on alpha
    acf_x = acf(
        x,
        nlags=nlags,
        alpha=alpha,
        fft=fft,
        bartlett_confint=bartlett_confint,
        adjusted=adjusted,
        missing=missing,
    )
    if alpha is not None:
        acf_x, confint = acf_x[:2]

    _plot_corr(
        ax,
        title,
        acf_x,
        confint,
        lags,
        irregular,
        use_vlines,
        vlines_kwargs,
        auto_ylims=auto_ylims,
        **kwargs,
    )

    return fig
Plot the autocorrelation function

Plots lags on the horizontal and the correlations on vertical axis.

Parameters
----------
x : array_like
    Array of time-series values
ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.
lags : {int, array_like}, optional
    An int or array of lag values, used on horizontal axis. Uses
    np.arange(lags) when lags is an int.  If not provided,
    ``lags=np.arange(len(corr))`` is used.
alpha : scalar, optional
    If a number is given, the confidence intervals for the given level
    are returned. For instance if alpha=.05, 95 % confidence intervals
    are returned where the standard deviation is computed according to
    Bartlett's formula. The confidence intervals are centered at 0 to
    simplify detecting which estimated autocorrelations are significantly
    different from 0. If None, no confidence intervals are plotted.
use_vlines : bool, optional
    If True, vertical lines and markers are plotted.
    If False, only markers are plotted.  The default marker is 'o'; it
    can be overridden with a ``marker`` kwarg.
adjusted : bool
    If True, then denominators for autocovariance are n-k, otherwise n
fft : bool, optional
    If True, computes the ACF via FFT.
missing : str, optional
    A string in ['none', 'raise', 'conservative', 'drop'] specifying how
    the NaNs are to be treated.
title : str, optional
    Title to place on plot.  Default is 'Autocorrelation'
zero : bool, optional
    Flag indicating whether to include the 0-lag autocorrelation.
    Default is True.
auto_ylims : bool, optional
    If True, adjusts automatically the y-axis limits to ACF values.
bartlett_confint : bool, default True
    Confidence intervals for ACF values are generally placed at 2
    standard errors around r_k. The formula used for standard error
    depends upon the situation. If the autocorrelations are being used
    to test for randomness of residuals as part of the ARIMA routine,
    the standard errors are determined assuming the residuals are white
    noise.
The approximate formula for any lag is that standard error of each r_k = 1/sqrt(N). See section 9.4 of [1] for more details on the 1/sqrt(N) result. For more elementary discussion, see section 5.3.2 in [2]. For the ACF of raw data, the standard error at a lag k is found as if the right model was an MA(k-1). This allows the possible interpretation that if all autocorrelations past a certain lag are within the limits, the model might be an MA of order defined by the last significant autocorrelation. In this case, a moving average model is assumed for the data and the standard errors for the confidence intervals should be generated using Bartlett's formula. For more details on Bartlett formula result, see section 7.2 in [1]. vlines_kwargs : dict, optional Optional dictionary of keyword arguments that are passed to vlines. **kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. See Also -------- statsmodels.tsa.stattools.acf matplotlib.pyplot.xcorr matplotlib.pyplot.acorr Notes ----- Adapted from matplotlib's `xcorr`. Data are plotted as ``plot(lags, corr, **kwargs)`` kwargs is used to pass matplotlib optional arguments to both the line tracing the autocorrelations and for the horizontal line at 0. These options must be valid for a Line2D object. vlines_kwargs is used to pass additional optional arguments to the vertical lines connecting each autocorrelation to the axis. These options must be valid for a LineCollection object. References ---------- [1] Brockwell and Davis, 1987. Time Series Theory and Methods [2] Brockwell and Davis, 2010. Introduction to Time Series and Forecasting, 2nd edition. 
Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.sunspots.load_pandas().data >>> dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) >>> del dta["YEAR"] >>> sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40) >>> plt.show() .. plot:: plots/graphics_tsa_plot_acf.py
plot_acf
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
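As a hedged illustration of the two `bartlett_confint` regimes described in the docstring above, the half-widths of bands centered at 0 can be sketched in plain numpy. The function name `acf_conf_bands` is hypothetical; the real computation lives inside `statsmodels.tsa.stattools.acf`.

```python
import numpy as np
from statistics import NormalDist


def acf_conf_bands(r, nobs, alpha=0.05, bartlett=True):
    """Half-widths of ACF confidence bands centered at 0 (a sketch).

    r holds the sample autocorrelations r_1..r_K (lag 0 excluded).
    bartlett=False uses the white-noise standard error 1/sqrt(N) at every
    lag; bartlett=True uses Bartlett's formula at lag k,
    se_k = sqrt((1 + 2 * sum_{j<k} r_j**2) / N), as if the data were MA(k-1).
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    r = np.asarray(r, dtype=float)
    if bartlett:
        # cumulative sum of squared autocorrelations below each lag
        cumsq = np.concatenate([[0.0], np.cumsum(r[:-1] ** 2)])
        se = np.sqrt((1.0 + 2.0 * cumsq) / nobs)
    else:
        se = np.full(r.shape, 1.0 / np.sqrt(nobs))
    return z * se
```

Note that the band widens with the lag under Bartlett's formula (each earlier significant r_j inflates the variance), while the white-noise band is flat.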
def plot_pacf(
    x,
    ax=None,
    lags=None,
    alpha=0.05,
    method="ywm",
    use_vlines=True,
    title="Partial Autocorrelation",
    zero=True,
    vlines_kwargs=None,
    **kwargs,
):
    """
    Plot the partial autocorrelation function

    Parameters
    ----------
    x : array_like
        Array of time-series values
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    lags : {int, array_like}, optional
        An int or array of lag values, used on horizontal axis. Uses
        np.arange(lags) when lags is an int.  If not provided,
        ``lags=np.arange(len(corr))`` is used.
    alpha : float, optional
        If a number is given, the confidence intervals for the given level
        are returned. For instance if alpha=.05, 95 % confidence intervals
        are returned where the standard deviation is computed according to
        1/sqrt(len(x))
    method : str
        Specifies which method for the calculations to use:

        - "ywm" or "ywmle" : Yule-Walker without adjustment. Default.
        - "yw" or "ywadjusted" : Yule-Walker with sample-size adjustment in
          denominator for acovf.
        - "ols" : regression of time series on lags of it and on constant.
        - "ols-inefficient" : regression of time series on lags using a
          single common sample to estimate all pacf coefficients.
        - "ols-adjusted" : regression of time series on lags with a bias
          adjustment.
        - "ld" or "ldadjusted" : Levinson-Durbin recursion with bias
          correction.
        - "ldb" or "ldbiased" : Levinson-Durbin recursion without bias
          correction.

    use_vlines : bool, optional
        If True, vertical lines and markers are plotted.
        If False, only markers are plotted.  The default marker is 'o'; it
        can be overridden with a ``marker`` kwarg.
    title : str, optional
        Title to place on plot.  Default is 'Partial Autocorrelation'
    zero : bool, optional
        Flag indicating whether to include the 0-lag autocorrelation.
        Default is True.
    vlines_kwargs : dict, optional
        Optional dictionary of keyword arguments that are passed to vlines.
**kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. See Also -------- statsmodels.tsa.stattools.pacf matplotlib.pyplot.xcorr matplotlib.pyplot.acorr Notes ----- Plots lags on the horizontal and the correlations on vertical axis. Adapted from matplotlib's `xcorr`. Data are plotted as ``plot(lags, corr, **kwargs)`` kwargs is used to pass matplotlib optional arguments to both the line tracing the autocorrelations and for the horizontal line at 0. These options must be valid for a Line2D object. vlines_kwargs is used to pass additional optional arguments to the vertical lines connecting each autocorrelation to the axis. These options must be valid for a LineCollection object. Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.sunspots.load_pandas().data >>> dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) >>> del dta["YEAR"] >>> sm.graphics.tsa.plot_pacf(dta.values.squeeze(), lags=40, method="ywm") >>> plt.show() .. plot:: plots/graphics_tsa_plot_pacf.py """ fig, ax = utils.create_mpl_ax(ax) vlines_kwargs = {} if vlines_kwargs is None else vlines_kwargs lags, nlags, irregular = _prepare_data_corr_plot(x, lags, zero) confint = None if alpha is None: acf_x = pacf(x, nlags=nlags, alpha=alpha, method=method) else: acf_x, confint = pacf(x, nlags=nlags, alpha=alpha, method=method) _plot_corr( ax, title, acf_x, confint, lags, irregular, use_vlines, vlines_kwargs, **kwargs, ) return fig
Plot the partial autocorrelation function

Parameters
----------
x : array_like
    Array of time-series values
ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.
lags : {int, array_like}, optional
    An int or array of lag values, used on horizontal axis. Uses
    np.arange(lags) when lags is an int.  If not provided,
    ``lags=np.arange(len(corr))`` is used.
alpha : float, optional
    If a number is given, the confidence intervals for the given level
    are returned. For instance if alpha=.05, 95 % confidence intervals
    are returned where the standard deviation is computed according to
    1/sqrt(len(x))
method : str
    Specifies which method for the calculations to use:

    - "ywm" or "ywmle" : Yule-Walker without adjustment. Default.
    - "yw" or "ywadjusted" : Yule-Walker with sample-size adjustment in
      denominator for acovf.
    - "ols" : regression of time series on lags of it and on constant.
    - "ols-inefficient" : regression of time series on lags using a
      single common sample to estimate all pacf coefficients.
    - "ols-adjusted" : regression of time series on lags with a bias
      adjustment.
    - "ld" or "ldadjusted" : Levinson-Durbin recursion with bias
      correction.
    - "ldb" or "ldbiased" : Levinson-Durbin recursion without bias
      correction.

use_vlines : bool, optional
    If True, vertical lines and markers are plotted.
    If False, only markers are plotted.  The default marker is 'o'; it
    can be overridden with a ``marker`` kwarg.
title : str, optional
    Title to place on plot.  Default is 'Partial Autocorrelation'
zero : bool, optional
    Flag indicating whether to include the 0-lag autocorrelation.
    Default is True.
vlines_kwargs : dict, optional
    Optional dictionary of keyword arguments that are passed to vlines.
**kwargs : kwargs, optional
    Optional keyword arguments that are directly passed on to the
    Matplotlib ``plot`` and ``axhline`` functions.

Returns
-------
Figure
    If `ax` is None, the created figure.  Otherwise the figure to which
    `ax` is connected.
See Also -------- statsmodels.tsa.stattools.pacf matplotlib.pyplot.xcorr matplotlib.pyplot.acorr Notes ----- Plots lags on the horizontal and the correlations on vertical axis. Adapted from matplotlib's `xcorr`. Data are plotted as ``plot(lags, corr, **kwargs)`` kwargs is used to pass matplotlib optional arguments to both the line tracing the autocorrelations and for the horizontal line at 0. These options must be valid for a Line2D object. vlines_kwargs is used to pass additional optional arguments to the vertical lines connecting each autocorrelation to the axis. These options must be valid for a LineCollection object. Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.sunspots.load_pandas().data >>> dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) >>> del dta["YEAR"] >>> sm.graphics.tsa.plot_pacf(dta.values.squeeze(), lags=40, method="ywm") >>> plt.show() .. plot:: plots/graphics_tsa_plot_pacf.py
plot_pacf
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
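The method list above names several PACF estimators. As an illustrative sketch of the "ols" idea only (not the statsmodels implementation, and without its bias adjustment), the lag-k partial autocorrelation can be read off as the last coefficient of a regression of x_t on its first k lags plus a constant; the helper name `pacf_ols_sketch` is hypothetical.

```python
import numpy as np


def pacf_ols_sketch(x, nlags):
    """Partial autocorrelations via one OLS regression per lag (a sketch)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    out = [1.0]  # the lag-0 partial autocorrelation is 1 by convention
    for k in range(1, nlags + 1):
        # regress x_t on (x_{t-1}, ..., x_{t-k}) and a constant; the
        # coefficient on x_{t-k} is the lag-k partial autocorrelation
        y = x[k:]
        lagmat = np.column_stack([x[k - j:n - j] for j in range(1, k + 1)])
        design = np.column_stack([np.ones(len(y)), lagmat])
        beta = np.linalg.lstsq(design, y, rcond=None)[0]
        out.append(beta[-1])
    return np.array(out)
```

For an AR(1) process with coefficient phi, this sketch recovers roughly phi at lag 1 and values near 0 at higher lags, which is exactly the cutoff pattern `plot_pacf` is used to spot.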
def plot_ccf( x, y, *, ax=None, lags=None, negative_lags=False, alpha=0.05, use_vlines=True, adjusted=False, fft=False, title="Cross-correlation", auto_ylims=False, vlines_kwargs=None, **kwargs, ): """ Plot the cross-correlation function Correlations between ``x`` and the lags of ``y`` are calculated. The lags are shown on the horizontal axis and the correlations on the vertical axis. Parameters ---------- x, y : array_like Arrays of time-series values. ax : AxesSubplot, optional If given, this subplot is used to plot in, otherwise a new figure with one subplot is created. lags : {int, array_like}, optional An int or array of lag values, used on the horizontal axis. Uses ``np.arange(lags)`` when lags is an int. If not provided, ``lags=np.arange(len(corr))`` is used. negative_lags: bool, optional If True, negative lags are shown on the horizontal axis. alpha : scalar, optional If a number is given, the confidence intervals for the given level are plotted, e.g. if alpha=.05, 95 % confidence intervals are shown. If None, confidence intervals are not shown on the plot. use_vlines : bool, optional If True, shows vertical lines and markers for the correlation values. If False, only shows markers. The default marker is 'o'; it can be overridden with a ``marker`` kwarg. adjusted : bool If True, then denominators for cross-correlations are n-k, otherwise n. fft : bool, optional If True, computes the CCF via FFT. title : str, optional Title to place on plot. Default is 'Cross-correlation'. auto_ylims : bool, optional If True, adjusts automatically the vertical axis limits to CCF values. vlines_kwargs : dict, optional Optional dictionary of keyword arguments that are passed to vlines. **kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure The figure where the plot is drawn. 
This is either an existing figure if the `ax` argument is provided, or a newly created figure if `ax` is None. See Also -------- statsmodels.graphics.tsaplots.plot_acf Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.macrodata.load_pandas().data >>> diffed = dta.diff().dropna() >>> sm.graphics.tsa.plot_ccf(diffed["unemp"], diffed["infl"]) >>> plt.show() """ fig, ax = utils.create_mpl_ax(ax) lags, nlags, irregular = _prepare_data_corr_plot(x, lags, True) vlines_kwargs = {} if vlines_kwargs is None else vlines_kwargs if negative_lags: lags = -lags ccf_res = ccf( x, y, adjusted=adjusted, fft=fft, alpha=alpha, nlags=nlags + 1 ) if alpha is not None: ccf_xy, confint = ccf_res else: ccf_xy = ccf_res confint = None _plot_corr( ax, title, ccf_xy, confint, lags, irregular, use_vlines, vlines_kwargs, auto_ylims=auto_ylims, skip_lag0_confint=False, **kwargs, ) return fig
Plot the cross-correlation function Correlations between ``x`` and the lags of ``y`` are calculated. The lags are shown on the horizontal axis and the correlations on the vertical axis. Parameters ---------- x, y : array_like Arrays of time-series values. ax : AxesSubplot, optional If given, this subplot is used to plot in, otherwise a new figure with one subplot is created. lags : {int, array_like}, optional An int or array of lag values, used on the horizontal axis. Uses ``np.arange(lags)`` when lags is an int. If not provided, ``lags=np.arange(len(corr))`` is used. negative_lags: bool, optional If True, negative lags are shown on the horizontal axis. alpha : scalar, optional If a number is given, the confidence intervals for the given level are plotted, e.g. if alpha=.05, 95 % confidence intervals are shown. If None, confidence intervals are not shown on the plot. use_vlines : bool, optional If True, shows vertical lines and markers for the correlation values. If False, only shows markers. The default marker is 'o'; it can be overridden with a ``marker`` kwarg. adjusted : bool If True, then denominators for cross-correlations are n-k, otherwise n. fft : bool, optional If True, computes the CCF via FFT. title : str, optional Title to place on plot. Default is 'Cross-correlation'. auto_ylims : bool, optional If True, adjusts automatically the vertical axis limits to CCF values. vlines_kwargs : dict, optional Optional dictionary of keyword arguments that are passed to vlines. **kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure The figure where the plot is drawn. This is either an existing figure if the `ax` argument is provided, or a newly created figure if `ax` is None. 
See Also -------- statsmodels.graphics.tsaplots.plot_acf Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.macrodata.load_pandas().data >>> diffed = dta.diff().dropna() >>> sm.graphics.tsa.plot_ccf(diffed["unemp"], diffed["infl"]) >>> plt.show()
plot_ccf
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
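A minimal sketch of the quantity `plot_ccf` displays, under one common convention (c_k correlates x shifted forward by k against y, with the unadjusted denominator n): the name `cross_corr_sketch` is hypothetical, and the real `ccf` adds the adjusted/FFT variants and confidence intervals on top of this idea.

```python
import numpy as np


def cross_corr_sketch(x, y, nlags):
    """Sample cross-correlations c_k ~ corr(x[t + k], y[t]), k = 0..nlags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(x)
    denom = n * x.std() * y.std()  # unadjusted: n for every lag
    return np.array(
        [np.sum(x[k:] * y[:n - k]) / denom for k in range(nlags + 1)]
    )
```

A quick sanity check is lead/lag detection: if x is a copy of y delayed by three steps, the cross-correlation peaks at lag 3.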
def plot_accf_grid( x, *, varnames=None, fig=None, lags=None, negative_lags=True, alpha=0.05, use_vlines=True, adjusted=False, fft=False, missing="none", zero=True, auto_ylims=False, bartlett_confint=False, vlines_kwargs=None, **kwargs, ): """ Plot auto/cross-correlation grid Plots lags on the horizontal axis and the correlations on the vertical axis of each graph. Parameters ---------- x : array_like 2D array of time-series values: rows are observations, columns are variables. varnames: sequence of str, optional Variable names to use in plot titles. If ``x`` is a pandas dataframe and ``varnames`` is provided, it overrides the column names of the dataframe. If ``varnames`` is not provided and ``x`` is not a dataframe, variable names ``x[0]``, ``x[1]``, etc. are generated. fig : Matplotlib figure instance, optional If given, this figure is used to plot in, otherwise a new figure is created. lags : {int, array_like}, optional An int or array of lag values, used on horizontal axes. Uses ``np.arange(lags)`` when lags is an int. If not provided, ``lags=np.arange(len(corr))`` is used. negative_lags: bool, optional If True, negative lags are shown on the horizontal axes of plots below the main diagonal. alpha : scalar, optional If a number is given, the confidence intervals for the given level are plotted, e.g. if alpha=.05, 95 % confidence intervals are shown. If None, confidence intervals are not shown on the plot. use_vlines : bool, optional If True, shows vertical lines and markers for the correlation values. If False, only shows markers. The default marker is 'o'; it can be overridden with a ``marker`` kwarg. adjusted : bool If True, then denominators for correlations are n-k, otherwise n. fft : bool, optional If True, computes the ACF via FFT. missing : str, optional A string in ['none', 'raise', 'conservative', 'drop'] specifying how NaNs are to be treated. 
zero : bool, optional Flag indicating whether to include the 0-lag autocorrelations (which are always equal to 1). Default is True. auto_ylims : bool, optional If True, adjusts automatically the vertical axis limits to correlation values. bartlett_confint : bool, default False If True, use Bartlett's formula to calculate confidence intervals in auto-correlation plots. See the description of ``plot_acf`` for details. This argument does not affect cross-correlation plots. vlines_kwargs : dict, optional Optional dictionary of keyword arguments that are passed to vlines. **kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure If `fig` is None, the created figure. Otherwise, `fig` is returned. Plots on the grid show the cross-correlation of the row variable with the lags of the column variable. See Also -------- statsmodels.graphics.tsaplots.plot_acf statsmodels.graphics.tsaplots.plot_ccf Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.macrodata.load_pandas().data >>> diffed = dta.diff().dropna() >>> sm.graphics.tsa.plot_accf_grid(diffed[["unemp", "infl"]]) >>> plt.show() """ from statsmodels.tools.data import _is_using_pandas array_like(x, "x", ndim=2) m = x.shape[1] fig = utils.create_mpl_fig(fig) gs = fig.add_gridspec(m, m) if _is_using_pandas(x, None): varnames = varnames or list(x.columns) def get_var(i): return x.iloc[:, i] else: varnames = varnames or [f'x[{i}]' for i in range(m)] x = np.asarray(x) def get_var(i): return x[:, i] for i in range(m): for j in range(m): ax = fig.add_subplot(gs[i, j]) if i == j: plot_acf( get_var(i), ax=ax, title=f'ACF({varnames[i]})', lags=lags, alpha=alpha, use_vlines=use_vlines, adjusted=adjusted, fft=fft, missing=missing, zero=zero, auto_ylims=auto_ylims, bartlett_confint=bartlett_confint, vlines_kwargs=vlines_kwargs, **kwargs, ) else: plot_ccf( 
get_var(i), get_var(j), ax=ax, title=f'CCF({varnames[i]}, {varnames[j]})', lags=lags, negative_lags=negative_lags and i > j, alpha=alpha, use_vlines=use_vlines, adjusted=adjusted, fft=fft, auto_ylims=auto_ylims, vlines_kwargs=vlines_kwargs, **kwargs, ) return fig
Plot auto/cross-correlation grid Plots lags on the horizontal axis and the correlations on the vertical axis of each graph. Parameters ---------- x : array_like 2D array of time-series values: rows are observations, columns are variables. varnames: sequence of str, optional Variable names to use in plot titles. If ``x`` is a pandas dataframe and ``varnames`` is provided, it overrides the column names of the dataframe. If ``varnames`` is not provided and ``x`` is not a dataframe, variable names ``x[0]``, ``x[1]``, etc. are generated. fig : Matplotlib figure instance, optional If given, this figure is used to plot in, otherwise a new figure is created. lags : {int, array_like}, optional An int or array of lag values, used on horizontal axes. Uses ``np.arange(lags)`` when lags is an int. If not provided, ``lags=np.arange(len(corr))`` is used. negative_lags: bool, optional If True, negative lags are shown on the horizontal axes of plots below the main diagonal. alpha : scalar, optional If a number is given, the confidence intervals for the given level are plotted, e.g. if alpha=.05, 95 % confidence intervals are shown. If None, confidence intervals are not shown on the plot. use_vlines : bool, optional If True, shows vertical lines and markers for the correlation values. If False, only shows markers. The default marker is 'o'; it can be overridden with a ``marker`` kwarg. adjusted : bool If True, then denominators for correlations are n-k, otherwise n. fft : bool, optional If True, computes the ACF via FFT. missing : str, optional A string in ['none', 'raise', 'conservative', 'drop'] specifying how NaNs are to be treated. zero : bool, optional Flag indicating whether to include the 0-lag autocorrelations (which are always equal to 1). Default is True. auto_ylims : bool, optional If True, adjusts automatically the vertical axis limits to correlation values. 
bartlett_confint : bool, default False If True, use Bartlett's formula to calculate confidence intervals in auto-correlation plots. See the description of ``plot_acf`` for details. This argument does not affect cross-correlation plots. vlines_kwargs : dict, optional Optional dictionary of keyword arguments that are passed to vlines. **kwargs : kwargs, optional Optional keyword arguments that are directly passed on to the Matplotlib ``plot`` and ``axhline`` functions. Returns ------- Figure If `fig` is None, the created figure. Otherwise, `fig` is returned. Plots on the grid show the cross-correlation of the row variable with the lags of the column variable. See Also -------- statsmodels.graphics.tsaplots.plot_acf statsmodels.graphics.tsaplots.plot_ccf Examples -------- >>> import pandas as pd >>> import matplotlib.pyplot as plt >>> import statsmodels.api as sm >>> dta = sm.datasets.macrodata.load_pandas().data >>> diffed = dta.diff().dropna() >>> sm.graphics.tsa.plot_accf_grid(diffed[["unemp", "infl"]]) >>> plt.show()
plot_accf_grid
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
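The cell assignment in `plot_accf_grid` can be made explicit with a small sketch (the helper name `accf_grid_layout` is hypothetical): diagonal cells get the ACF of one variable, off-diagonal cells get the CCF of the row variable with the column variable, and cells below the diagonal may flip to negative lags so the grid reads symmetrically.

```python
def accf_grid_layout(varnames, negative_lags=True):
    """Return the m-by-m grid of titles plot_accf_grid would draw."""
    m = len(varnames)
    grid = []
    for i in range(m):
        row = []
        for j in range(m):
            if i == j:
                row.append(f"ACF({varnames[i]})")
            else:
                # below the diagonal (i > j) the lag axis is flipped
                flip = negative_lags and i > j
                title = f"CCF({varnames[i]}, {varnames[j]})"
                row.append(title + (" [neg lags]" if flip else ""))
        grid.append(row)
    return grid
```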
def seasonal_plot(grouped_x, xticklabels, ylabel=None, ax=None):
    """
    Consider using one of month_plot or quarter_plot unless you need
    irregular plotting.

    Parameters
    ----------
    grouped_x : iterable of DataFrames
        Should be a GroupBy object (or similar pair of group_names and
        groups as DataFrames) with a DatetimeIndex or PeriodIndex
    xticklabels : list of str
        List of season labels, one for each group.
    ylabel : str
        Label for y axis
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    """
    fig, ax = utils.create_mpl_ax(ax)
    start = 0
    ticks = []
    for season, df in grouped_x:
        df = df.copy()  # or sort balks for series. may be better way
        df = df.sort_index()
        nobs = len(df)
        x_plot = np.arange(start, start + nobs)
        ticks.append(x_plot.mean())
        ax.plot(x_plot, df.values, "k")
        ax.hlines(
            df.values.mean(), x_plot[0], x_plot[-1], colors="r", linewidth=3
        )
        start += nobs

    ax.set_xticks(ticks)
    ax.set_xticklabels(xticklabels)
    ax.set_ylabel(ylabel)
    ax.margins(0.1, 0.05)
    return fig
Consider using one of month_plot or quarter_plot unless you need
irregular plotting.

Parameters
----------
grouped_x : iterable of DataFrames
    Should be a GroupBy object (or similar pair of group_names and
    groups as DataFrames) with a DatetimeIndex or PeriodIndex
xticklabels : list of str
    List of season labels, one for each group.
ylabel : str
    Label for y axis
ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.
seasonal_plot
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
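The bookkeeping in seasonal_plot's loop can be sketched without matplotlib: each seasonal group of nobs observations is drawn over a contiguous run of x positions, and the season's tick is placed at the center of that run. The helper name `seasonal_positions` is hypothetical.

```python
def seasonal_positions(group_sizes):
    """x-position runs and tick centers, mirroring seasonal_plot's loop."""
    start, runs, ticks = 0, [], []
    for nobs in group_sizes:
        x_plot = list(range(start, start + nobs))
        runs.append(x_plot)
        ticks.append(sum(x_plot) / nobs)  # center of the run (x_plot.mean())
        start += nobs
    return runs, ticks
```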
def month_plot(x, dates=None, ylabel=None, ax=None): """ Seasonal plot of monthly data. Parameters ---------- x : array_like Seasonal data to plot. If dates is None, x must be a pandas object with a PeriodIndex or DatetimeIndex with a monthly frequency. dates : array_like, optional If `x` is not a pandas object, then dates must be supplied. ylabel : str, optional The label for the y-axis. Will attempt to use the `name` attribute of the Series. ax : Axes, optional Existing axes instance. Returns ------- Figure If `ax` is provided, the Figure instance attached to `ax`. Otherwise a new Figure instance. Examples -------- >>> import statsmodels.api as sm >>> import pandas as pd >>> dta = sm.datasets.elnino.load_pandas().data >>> dta['YEAR'] = dta.YEAR.astype(int).astype(str) >>> dta = dta.set_index('YEAR').T.unstack() >>> dates = pd.to_datetime(list(map(lambda x: '-'.join(x) + '-1', ... dta.index.values))) >>> dta.index = pd.DatetimeIndex(dates, freq='MS') >>> fig = sm.graphics.tsa.month_plot(dta) .. plot:: plots/graphics_tsa_month_plot.py """ if dates is None: from statsmodels.tools.data import _check_period_index _check_period_index(x, freq="M") else: x = pd.Series(x, index=pd.PeriodIndex(dates, freq="M")) # there's no zero month xticklabels = list(calendar.month_abbr)[1:] return seasonal_plot( x.groupby(lambda y: y.month), xticklabels, ylabel=ylabel, ax=ax )
Seasonal plot of monthly data. Parameters ---------- x : array_like Seasonal data to plot. If dates is None, x must be a pandas object with a PeriodIndex or DatetimeIndex with a monthly frequency. dates : array_like, optional If `x` is not a pandas object, then dates must be supplied. ylabel : str, optional The label for the y-axis. Will attempt to use the `name` attribute of the Series. ax : Axes, optional Existing axes instance. Returns ------- Figure If `ax` is provided, the Figure instance attached to `ax`. Otherwise a new Figure instance. Examples -------- >>> import statsmodels.api as sm >>> import pandas as pd >>> dta = sm.datasets.elnino.load_pandas().data >>> dta['YEAR'] = dta.YEAR.astype(int).astype(str) >>> dta = dta.set_index('YEAR').T.unstack() >>> dates = pd.to_datetime(list(map(lambda x: '-'.join(x) + '-1', ... dta.index.values))) >>> dta.index = pd.DatetimeIndex(dates, freq='MS') >>> fig = sm.graphics.tsa.month_plot(dta) .. plot:: plots/graphics_tsa_month_plot.py
month_plot
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
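The data preparation that month_plot performs before delegating to seasonal_plot can be reproduced directly, assuming a monthly PeriodIndex series: group by the month of each period and take the month abbreviations, skipping `calendar.month_abbr`'s empty zeroth entry.

```python
import calendar

import pandas as pd

# toy monthly series spanning two full years
idx = pd.period_range("2000-01", periods=24, freq="M")
x = pd.Series(range(24), index=idx)

# group observations by calendar month, as month_plot does
groups = x.groupby(lambda period: period.month)

# "there's no zero month": month_abbr[0] is the empty string
xticklabels = list(calendar.month_abbr)[1:]
```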
def quarter_plot(x, dates=None, ylabel=None, ax=None):
    """
    Seasonal plot of quarterly data

    Parameters
    ----------
    x : array_like
        Seasonal data to plot.  If dates is None, x must be a pandas object
        with a PeriodIndex or DatetimeIndex with a quarterly frequency.
    dates : array_like, optional
        If `x` is not a pandas object, then dates must be supplied.
    ylabel : str, optional
        The label for the y-axis.  Will attempt to use the `name` attribute
        of the Series.
    ax : matplotlib.axes, optional
        Existing axes instance.

    Returns
    -------
    Figure
        If `ax` is provided, the Figure instance attached to `ax`. Otherwise
        a new Figure instance.

    Examples
    --------
    >>> import statsmodels.api as sm
    >>> import pandas as pd

    >>> dta = sm.datasets.elnino.load_pandas().data
    >>> dta['YEAR'] = dta.YEAR.astype(int).astype(str)
    >>> dta = dta.set_index('YEAR').T.unstack()
    >>> dates = pd.to_datetime(list(map(lambda x: '-'.join(x) + '-1',
    ...                                 dta.index.values)))
    >>> dta.index = dates.to_period('Q')
    >>> fig = sm.graphics.tsa.quarter_plot(dta)

    .. plot:: plots/graphics_tsa_quarter_plot.py
    """
    if dates is None:
        from statsmodels.tools.data import _check_period_index

        _check_period_index(x, freq="Q")
    else:
        x = pd.Series(x, index=pd.PeriodIndex(dates, freq="Q"))

    xticklabels = ["q1", "q2", "q3", "q4"]
    return seasonal_plot(
        x.groupby(lambda y: y.quarter), xticklabels, ylabel=ylabel, ax=ax
    )
Seasonal plot of quarterly data

Parameters
----------
x : array_like
    Seasonal data to plot.  If dates is None, x must be a pandas object
    with a PeriodIndex or DatetimeIndex with a quarterly frequency.
dates : array_like, optional
    If `x` is not a pandas object, then dates must be supplied.
ylabel : str, optional
    The label for the y-axis.  Will attempt to use the `name` attribute
    of the Series.
ax : matplotlib.axes, optional
    Existing axes instance.

Returns
-------
Figure
    If `ax` is provided, the Figure instance attached to `ax`. Otherwise
    a new Figure instance.

Examples
--------
>>> import statsmodels.api as sm
>>> import pandas as pd

>>> dta = sm.datasets.elnino.load_pandas().data
>>> dta['YEAR'] = dta.YEAR.astype(int).astype(str)
>>> dta = dta.set_index('YEAR').T.unstack()
>>> dates = pd.to_datetime(list(map(lambda x: '-'.join(x) + '-1',
...                                 dta.index.values)))
>>> dta.index = dates.to_period('Q')
>>> fig = sm.graphics.tsa.quarter_plot(dta)

.. plot:: plots/graphics_tsa_quarter_plot.py
quarter_plot
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
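quarter_plot groups by `y.quarter` and labels the ticks "q1".."q4"; the month-to-quarter mapping behind those labels is just integer arithmetic. The helper name `month_to_quarter` is hypothetical.

```python
def month_to_quarter(month):
    """Tick label for a 1-based month number, matching quarter_plot's
    'q1'..'q4' xticklabels (months 1-3 -> q1, 4-6 -> q2, and so on)."""
    return f"q{(month - 1) // 3 + 1}"
```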
def plot_predict(
    result,
    start=None,
    end=None,
    dynamic=False,
    alpha=0.05,
    ax=None,
    **predict_kwargs,
):
    """
    Plot in- and out-of-sample predictions

    Parameters
    ----------
    result : Result
        Any model result supporting ``get_prediction``.
    start : int, str, or datetime, optional
        Zero-indexed observation number at which to start forecasting,
        i.e., the first forecast is start. Can also be a date string to
        parse or a datetime type. Default is the zeroth observation.
    end : int, str, or datetime, optional
        Zero-indexed observation number at which to end forecasting, i.e.,
        the last forecast is end. Can also be a date string to parse or a
        datetime type. However, if the dates index does not have a fixed
        frequency, end must be an integer index if you want out of sample
        prediction. Default is the last observation in the sample.
    dynamic : bool, int, str, or datetime, optional
        Integer offset relative to `start` at which to begin dynamic
        prediction. Can also be an absolute date string to parse or a
        datetime type (these are not interpreted as offsets).
        Prior to this observation, true endogenous values will be used for
        prediction; starting with this observation and continuing through
        the end of prediction, forecasted endogenous values will be used
        instead.
    alpha : {float, None}
        The tail probability not covered by the confidence interval. Must
        be in (0, 1). Confidence interval is constructed assuming normally
        distributed shocks. If None, figure will not show the confidence
        interval.
    ax : AxesSubplot
        matplotlib Axes instance to use
    **predict_kwargs
        Any additional keyword arguments to pass to
        ``result.get_prediction``.
Returns ------- Figure matplotlib Figure containing the prediction plot """ from statsmodels.graphics.utils import _import_mpl, create_mpl_ax _ = _import_mpl() fig, ax = create_mpl_ax(ax) from statsmodels.tsa.base.prediction import PredictionResults # use predict so you set dates pred: PredictionResults = result.get_prediction( start=start, end=end, dynamic=dynamic, **predict_kwargs ) mean = pred.predicted_mean if isinstance(mean, (pd.Series, pd.DataFrame)): x = mean.index mean.plot(ax=ax, label="forecast") else: x = np.arange(mean.shape[0]) ax.plot(x, mean, label="forecast") if alpha is not None: label = f"{1-alpha:.0%} confidence interval" ci = pred.conf_int(alpha) conf_int = np.asarray(ci) ax.fill_between( x, conf_int[:, 0], conf_int[:, 1], color="gray", alpha=0.5, label=label, ) ax.legend(loc="best") return fig
Parameters ---------- result : Result Any model result supporting ``get_prediction``. start : int, str, or datetime, optional Zero-indexed observation number at which to start forecasting, i.e., the first forecast is start. Can also be a date string to parse or a datetime type. Default is the zeroth observation. end : int, str, or datetime, optional Zero-indexed observation number at which to end forecasting, i.e., the last forecast is end. Can also be a date string to parse or a datetime type. However, if the dates index does not have a fixed frequency, end must be an integer index if you want out of sample prediction. Default is the last observation in the sample. dynamic : bool, int, str, or datetime, optional Integer offset relative to `start` at which to begin dynamic prediction. Can also be an absolute date string to parse or a datetime type (these are not interpreted as offsets). Prior to this observation, true endogenous values will be used for prediction; starting with this observation and continuing through the end of prediction, forecasted endogenous values will be used instead. alpha : {float, None} The tail probability not covered by the confidence interval. Must be in (0, 1). Confidence interval is constructed assuming normally distributed shocks. If None, figure will not show the confidence interval. ax : AxesSubplot matplotlib Axes instance to use **predict_kwargs Any additional keyword arguments to pass to ``result.get_prediction``. Returns ------- Figure matplotlib Figure containing the prediction plot
plot_predict
python
statsmodels/statsmodels
statsmodels/graphics/tsaplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tsaplots.py
BSD-3-Clause
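The band that ``plot_predict`` fills between follows from the normality assumption stated for ``alpha``: mean forecast plus/minus a normal quantile times the standard error. A minimal numpy sketch of that construction, with ``z = 1.96`` hardcoded for ``alpha = 0.05`` and made-up forecast numbers (a real call would take both arrays from ``get_prediction``):

```python
import numpy as np


def normal_conf_band(mean, se, z=1.96):
    """Lower/upper band assuming normally distributed forecast errors.

    z = 1.96 is hardcoded for roughly alpha = 0.05; a faithful version
    would compute z from alpha via the normal quantile function.
    """
    mean = np.asarray(mean, dtype=float)
    se = np.asarray(se, dtype=float)
    return mean - z * se, mean + z * se


# made-up forecast mean and standard errors, for illustration only
mean = np.array([10.0, 10.5, 11.0])
se = np.array([1.0, 1.2, 1.5])
lo, hi = normal_conf_band(mean, se)
```

These two arrays are exactly what the function above passes to ``ax.fill_between(x, lo, hi, ...)``.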
def add_lowess(ax, lines_idx=0, frac=.2, **lowess_kwargs): """ Add Lowess line to a plot. Parameters ---------- ax : AxesSubplot The Axes to which to add the plot lines_idx : int This is the line on the existing plot to which you want to add a smoothed lowess line. frac : float The fraction of the points to use when doing the lowess fit. lowess_kwargs Additional keyword arguments are passed to lowess. Returns ------- Figure The figure that holds the instance. """ y0 = ax.get_lines()[lines_idx]._y x0 = ax.get_lines()[lines_idx]._x lres = lowess(y0, x0, frac=frac, **lowess_kwargs) ax.plot(lres[:, 0], lres[:, 1], 'r', lw=1.5) return ax.figure
Add Lowess line to a plot. Parameters ---------- ax : AxesSubplot The Axes to which to add the plot lines_idx : int This is the line on the existing plot to which you want to add a smoothed lowess line. frac : float The fraction of the points to use when doing the lowess fit. lowess_kwargs Additional keyword arguments are passed to lowess. Returns ------- Figure The figure that holds the instance.
add_lowess
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
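What ``frac`` buys you in ``add_lowess`` can be illustrated without statsmodels. The sketch below is deliberately not lowess (real lowess fits tricube-weighted local linear regressions); it is a crude nearest-neighbor mean that shares only the idea that each smoothed value is computed from a ``frac`` share of the data:

```python
import numpy as np


def crude_local_mean(x, y, frac=0.2):
    """Average y over the k = max(2, int(frac * n)) nearest x-neighbors.

    This is NOT statsmodels' lowess (which fits tricube-weighted local
    linear regressions); it only illustrates what `frac` controls: the
    share of the data contributing to each smoothed value.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    k = max(2, int(frac * n))
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    smoothed = np.empty(n)
    for i in range(n):
        nearest = np.argsort(np.abs(xs - xs[i]))[:k]
        smoothed[i] = ys[nearest].mean()
    return xs, smoothed


xs, smoothed_y = crude_local_mean(np.linspace(0, 1, 50),
                                  np.linspace(0, 1, 50) ** 2, frac=0.3)
```

A larger ``frac`` averages over more neighbors and yields a flatter, smoother curve, exactly the trade-off the ``frac`` parameter controls in the real lowess call.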
def plot_fit(results, exog_idx, y_true=None, ax=None, vlines=True, **kwargs): """ Plot fit against one regressor. This creates one graph with the scatterplot of observed values compared to fitted values. Parameters ---------- results : Results A result instance with resid, model.endog and model.exog as attributes. exog_idx : {int, str} Name or index of regressor in exog matrix. y_true : array_like, optional If this is not None, then the array is added to the plot. ax : AxesSubplot, optional If given, this subplot is used to plot in instead of a new figure being created. vlines : bool, optional If this is not True, then the uncertainty (pointwise prediction intervals) of the fit is not plotted. **kwargs The keyword arguments are passed to the plot command for the fitted values points. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. Examples -------- Load the Statewide Crime data set and perform linear regression with `poverty` and `hs_grad` as variables and `murder` as the response >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> data = sm.datasets.statecrime.load_pandas().data >>> murder = data['murder'] >>> X = data[['poverty', 'hs_grad']] >>> X["constant"] = 1 >>> y = murder >>> model = sm.OLS(y, X) >>> results = model.fit() Create a plot just for the variable 'Poverty.' Note that vertical bars representing uncertainty are plotted since vlines is true >>> fig, ax = plt.subplots() >>> fig = sm.graphics.plot_fit(results, 0, ax=ax) >>> ax.set_ylabel("Murder Rate") >>> ax.set_xlabel("Poverty Level") >>> ax.set_title("Linear Regression") >>> plt.show() .. 
plot:: plots/graphics_plot_fit_ex.py """ fig, ax = utils.create_mpl_ax(ax) exog_name, exog_idx = utils.maybe_name_or_idx(exog_idx, results.model) results = maybe_unwrap_results(results) #maybe add option for wendog, wexog y = results.model.endog x1 = results.model.exog[:, exog_idx] x1_argsort = np.argsort(x1) y = y[x1_argsort] x1 = x1[x1_argsort] ax.plot(x1, y, 'bo', label=results.model.endog_names) if y_true is not None: ax.plot(x1, y_true[x1_argsort], 'b-', label='True values') title = 'Fitted values versus %s' % exog_name ax.plot(x1, results.fittedvalues[x1_argsort], 'D', color='r', label='fitted', **kwargs) if vlines is True: _, iv_l, iv_u = wls_prediction_std(results) ax.vlines(x1, iv_l[x1_argsort], iv_u[x1_argsort], linewidth=1, color='k', alpha=.7) #ax.fill_between(x1, iv_l[x1_argsort], iv_u[x1_argsort], alpha=0.1, # color='k') ax.set_title(title) ax.set_xlabel(exog_name) ax.set_ylabel(results.model.endog_names) ax.legend(loc='best', numpoints=1) return fig
Plot fit against one regressor. This creates one graph with the scatterplot of observed values compared to fitted values. Parameters ---------- results : Results A result instance with resid, model.endog and model.exog as attributes. exog_idx : {int, str} Name or index of regressor in exog matrix. y_true : array_like, optional If this is not None, then the array is added to the plot. ax : AxesSubplot, optional If given, this subplot is used to plot in instead of a new figure being created. vlines : bool, optional If this is not True, then the uncertainty (pointwise prediction intervals) of the fit is not plotted. **kwargs The keyword arguments are passed to the plot command for the fitted values points. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. Examples -------- Load the Statewide Crime data set and perform linear regression with `poverty` and `hs_grad` as variables and `murder` as the response >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> data = sm.datasets.statecrime.load_pandas().data >>> murder = data['murder'] >>> X = data[['poverty', 'hs_grad']] >>> X["constant"] = 1 >>> y = murder >>> model = sm.OLS(y, X) >>> results = model.fit() Create a plot just for the variable 'Poverty.' Note that vertical bars representing uncertainty are plotted since vlines is true >>> fig, ax = plt.subplots() >>> fig = sm.graphics.plot_fit(results, 0, ax=ax) >>> ax.set_ylabel("Murder Rate") >>> ax.set_xlabel("Poverty Level") >>> ax.set_title("Linear Regression") >>> plt.show() .. plot:: plots/graphics_plot_fit_ex.py
plot_fit
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
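``plot_fit`` reorders everything by the chosen regressor (the ``x1_argsort`` step) so that the true-value line and the vertical interval bars are drawn left to right. The alignment step in isolation, with made-up arrays:

```python
import numpy as np

# made-up observations: regressor, response, and fitted values
x1 = np.array([3.0, 1.0, 2.0])
y = np.array([30.0, 10.0, 20.0])
fitted = np.array([29.0, 11.0, 19.0])

# the alignment step: one argsort of the regressor reorders every array,
# so each (x, y, fitted) triple stays matched after sorting
order = np.argsort(x1)
x1_s, y_s, fitted_s = x1[order], y[order], fitted[order]
```

Applying the same index array everywhere is what keeps the scatter points, the `y_true` line, and the prediction intervals attached to the right observations.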
def plot_regress_exog(results, exog_idx, fig=None): """Plot regression results against one regressor. This plots four graphs in a 2 by 2 figure: 'endog and fitted versus exog', 'residuals versus exog', a partial regression plot, and a CCPR plot Parameters ---------- results : result instance A result instance with resid, model.endog and model.exog as attributes. exog_idx : int or str Name or index of regressor in exog matrix. fig : Figure, optional If given, this figure is simply returned. Otherwise a new figure is created. Returns ------- Figure The value of `fig` if provided. Otherwise a new instance. Examples -------- Load the Statewide Crime data set and build a model with regressors including the rate of high school graduation (hs_grad), population in urban areas (urban), households below poverty line (poverty), and single person households (single). Outcome variable is the murder rate (murder). Build a 2 by 2 figure based on poverty showing fitted versus actual murder rate, residuals versus the poverty rate, partial regression plot of poverty, and CCPR plot for poverty rate. >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> fig = plt.figure(figsize=(8, 6)) >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> sm.graphics.plot_regress_exog(results, 'poverty', fig=fig) >>> plt.show() .. 
plot:: plots/graphics_regression_regress_exog.py """ fig = utils.create_mpl_fig(fig) exog_name, exog_idx = utils.maybe_name_or_idx(exog_idx, results.model) results = maybe_unwrap_results(results) #maybe add option for wendog, wexog y_name = results.model.endog_names x1 = results.model.exog[:, exog_idx] prstd, iv_l, iv_u = wls_prediction_std(results) ax = fig.add_subplot(2, 2, 1) ax.plot(x1, results.model.endog, 'o', color='b', alpha=0.9, label=y_name) ax.plot(x1, results.fittedvalues, 'D', color='r', label='fitted', alpha=.5) ax.vlines(x1, iv_l, iv_u, linewidth=1, color='k', alpha=.7) ax.set_title('Y and Fitted vs. X', fontsize='large') ax.set_xlabel(exog_name) ax.set_ylabel(y_name) ax.legend(loc='best') ax = fig.add_subplot(2, 2, 2) ax.plot(x1, results.resid, 'o') ax.axhline(y=0, color='black') ax.set_title('Residuals versus %s' % exog_name, fontsize='large') ax.set_xlabel(exog_name) ax.set_ylabel("resid") ax = fig.add_subplot(2, 2, 3) exog_noti = np.ones(results.model.exog.shape[1], bool) exog_noti[exog_idx] = False exog_others = results.model.exog[:, exog_noti] from pandas import Series fig = plot_partregress(results.model.data.orig_endog, Series(x1, name=exog_name, index=results.model.data.row_labels), exog_others, obs_labels=False, ax=ax) ax.set_title('Partial regression plot', fontsize='large') #ax.set_ylabel("Fitted values") #ax.set_xlabel(exog_name) ax = fig.add_subplot(2, 2, 4) fig = plot_ccpr(results, exog_idx, ax=ax) ax.set_title('CCPR Plot', fontsize='large') #ax.set_xlabel(exog_name) #ax.set_ylabel("Fitted values + resids") fig.suptitle('Regression Plots for %s' % exog_name, fontsize="large") fig.tight_layout() fig.subplots_adjust(top=.90) return fig
Plot regression results against one regressor. This plots four graphs in a 2 by 2 figure: 'endog and fitted versus exog', 'residuals versus exog', a partial regression plot, and a CCPR plot Parameters ---------- results : result instance A result instance with resid, model.endog and model.exog as attributes. exog_idx : int or str Name or index of regressor in exog matrix. fig : Figure, optional If given, this figure is simply returned. Otherwise a new figure is created. Returns ------- Figure The value of `fig` if provided. Otherwise a new instance. Examples -------- Load the Statewide Crime data set and build a model with regressors including the rate of high school graduation (hs_grad), population in urban areas (urban), households below poverty line (poverty), and single person households (single). Outcome variable is the murder rate (murder). Build a 2 by 2 figure based on poverty showing fitted versus actual murder rate, residuals versus the poverty rate, partial regression plot of poverty, and CCPR plot for poverty rate. >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> fig = plt.figure(figsize=(8, 6)) >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> sm.graphics.plot_regress_exog(results, 'poverty', fig=fig) >>> plt.show() .. plot:: plots/graphics_regression_regress_exog.py
plot_regress_exog
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def _partial_regression(endog, exog_i, exog_others): """Partial regression. Regress endog on exog_i conditional on exog_others; uses OLS. Parameters ---------- endog : array_like exog_i : array_like exog_others : array_like Returns ------- res1c : OLS results instance (res1a, res1b) : tuple of OLS results instances results from regression of endog on exog_others and of exog_i on exog_others """ #FIXME: This function does not appear to be used. res1a = OLS(endog, exog_others).fit() res1b = OLS(exog_i, exog_others).fit() res1c = OLS(res1a.resid, res1b.resid).fit() return res1c, (res1a, res1b)
Partial regression. Regress endog on exog_i conditional on exog_others; uses OLS. Parameters ---------- endog : array_like exog_i : array_like exog_others : array_like Returns ------- res1c : OLS results instance (res1a, res1b) : tuple of OLS results instances results from regression of endog on exog_others and of exog_i on exog_others
_partial_regression
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
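``_partial_regression`` relies on the Frisch-Waugh-Lovell result: regressing the residuals of ``endog`` on the residuals of ``exog_i`` (both purged of ``exog_others``) yields the same coefficient as ``exog_i`` gets in the full multiple regression. A numpy check of that identity, using ``np.linalg.lstsq`` as a stand-in for OLS and simulated data:

```python
import numpy as np

# simulated data with a known coefficient of 2.0 on exog_i
rng = np.random.default_rng(0)
n = 200
exog_others = np.column_stack([np.ones(n), rng.normal(size=n)])
exog_i = rng.normal(size=n) + exog_others[:, 1]
endog = 1.0 + 2.0 * exog_i + 0.5 * exog_others[:, 1] + rng.normal(size=n)


def ols_fit(y, X):
    """Least-squares coefficients and residuals (numpy stand-in for OLS)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta


# full regression: endog on [exog_others, exog_i]
beta_full, _ = ols_fit(endog, np.column_stack([exog_others, exog_i]))

# the helper's three regressions:
_, resid_y = ols_fit(endog, exog_others)    # endog purged of exog_others
_, resid_x = ols_fit(exog_i, exog_others)   # exog_i purged of exog_others
beta_partial, _ = ols_fit(resid_y, resid_x[:, None])
```

The coefficient from the residual-on-residual fit matches the last entry of ``beta_full`` up to floating-point error, which is why a partial regression plot's fitted slope equals the multiple-regression coefficient.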
def plot_partregress(endog, exog_i, exog_others, data=None, title_kwargs={}, obs_labels=True, label_kwargs={}, ax=None, ret_coords=False, eval_env=1, **kwargs): """Plot partial regression for a single regressor. Parameters ---------- endog : {ndarray, str} The endogenous or response variable. If string is given, you can use arbitrary transformations as with a formula. exog_i : {ndarray, str} The exogenous, explanatory variable. If string is given, you can use arbitrary transformations as with a formula. exog_others : {ndarray, list[str]} Any other exogenous, explanatory variables. If a list of strings is given, each item is a term in formula. You can use arbitrary transformations as with a formula. The effect of these variables will be removed by OLS regression. data : {DataFrame, dict} Some kind of data structure with names if the other variables are given as strings. title_kwargs : dict Keyword arguments to pass on for the title. The key to control the fonts is fontdict. obs_labels : {bool, array_like} Whether or not to annotate the plot points with their observation labels. If obs_labels is a boolean, the point labels will try to do the right thing. First it will try to use the index of data, then fall back to the index of exog_i. Alternatively, you may give an array-like object corresponding to the observation numbers. label_kwargs : dict Keyword arguments that control annotate for the observation labels. ax : AxesSubplot, optional If given, this subplot is used to plot in instead of a new figure being created. ret_coords : bool If True will return the coordinates of the points in the plot. You can use this to add your own annotations. eval_env : int Patsy eval environment if user functions and formulas are used in defining endog or exog. **kwargs The keyword arguments passed to plot for the points. Returns ------- fig : Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. 
coords : list, optional If ret_coords is True, return a tuple of arrays (x_coords, y_coords). See Also -------- plot_partregress_grid : Plot partial regression for a set of regressors. Notes ----- The slope of the fitted line is that of `exog_i` in the full multiple regression. The individual points can be used to assess the influence of points on the estimated coefficient. Examples -------- Load the Statewide Crime data set and plot partial regression of the rate of high school graduation (hs_grad) on the murder rate (murder). The effects of the percent of the population living in urban areas (urban), below the poverty line (poverty), and in a single person household (single) are removed by OLS regression. >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> crime_data = sm.datasets.statecrime.load_pandas() >>> sm.graphics.plot_partregress(endog='murder', exog_i='hs_grad', ... exog_others=['urban', 'poverty', 'single'], ... data=crime_data.data, obs_labels=False) >>> plt.show() .. plot:: plots/graphics_regression_partregress.py More detailed examples can be found in the Regression Plots notebook on the examples page. 
""" #NOTE: there is no interaction between possible missing data and #obs_labels yet, so this will need to be tweaked a bit for this case fig, ax = utils.create_mpl_ax(ax) # strings, use patsy to transform to data if isinstance(endog, str): endog = FormulaManager().get_matrices(endog + "-1", data, eval_env=eval_env, pandas=False) mgr = FormulaManager() if isinstance(exog_others, str): RHS = mgr.get_matrices(exog_others, data, eval_env=eval_env, pandas=False) elif isinstance(exog_others, list): RHS = "+".join(exog_others) RHS = mgr.get_matrices(RHS, data, eval_env=eval_env, pandas=False) else: RHS = exog_others RHS_isemtpy = False if isinstance(RHS, np.ndarray) and RHS.size==0: RHS_isemtpy = True elif isinstance(RHS, pd.DataFrame) and RHS.empty: RHS_isemtpy = True if isinstance(exog_i, str): exog_i = mgr.get_matrices(exog_i + "-1", data, eval_env=eval_env, pandas=False) # all arrays or pandas-like if RHS_isemtpy: endog = np.asarray(endog) exog_i = np.asarray(exog_i) ax.plot(endog, exog_i, 'o', **kwargs) fitted_line = OLS(endog, exog_i).fit() x_axis_endog_name = 'x' if isinstance(exog_i, np.ndarray) else exog_i.name y_axis_endog_name = 'y' if isinstance(endog, np.ndarray) else endog.model_spec.column_names[0] else: res_yaxis = OLS(endog, RHS).fit() res_xaxis = OLS(exog_i, RHS).fit() xaxis_resid = res_xaxis.resid yaxis_resid = res_yaxis.resid x_axis_endog_name = res_xaxis.model.endog_names y_axis_endog_name = res_yaxis.model.endog_names ax.plot(xaxis_resid, yaxis_resid, 'o', **kwargs) fitted_line = OLS(yaxis_resid, xaxis_resid).fit() fig = abline_plot(0, np.asarray(fitted_line.params)[0], color='k', ax=ax) if x_axis_endog_name == 'y': # for no names regression will just get a y x_axis_endog_name = 'x' # this is misleading, so use x ax.set_xlabel("e(%s | X)" % x_axis_endog_name) ax.set_ylabel("e(%s | X)" % y_axis_endog_name) ax.set_title('Partial Regression Plot', **title_kwargs) # NOTE: if we want to get super fancy, we could annotate if a point is # clicked using 
this widget # http://stackoverflow.com/questions/4652439/ # is-there-a-matplotlib-equivalent-of-matlabs-datacursormode/ # 4674445#4674445 if obs_labels is True: if data is not None: obs_labels = data.index elif hasattr(exog_i, "index"): obs_labels = exog_i.index else: obs_labels = res_xaxis.model.data.row_labels #NOTE: row_labels can be None. #Maybe we should fix this to never be the case. if obs_labels is None: obs_labels = lrange(len(exog_i)) if obs_labels is not False: # could be array_like if len(obs_labels) != len(exog_i): raise ValueError("obs_labels does not match length of exog_i") label_kwargs.update(dict(ha="center", va="bottom")) ax = utils.annotate_axes(lrange(len(obs_labels)), obs_labels, lzip(res_xaxis.resid, res_yaxis.resid), [(0, 5)] * len(obs_labels), "x-large", ax=ax, **label_kwargs) if ret_coords: return fig, (res_xaxis.resid, res_yaxis.resid) else: return fig
Plot partial regression for a single regressor. Parameters ---------- endog : {ndarray, str} The endogenous or response variable. If string is given, you can use arbitrary transformations as with a formula. exog_i : {ndarray, str} The exogenous, explanatory variable. If string is given, you can use arbitrary transformations as with a formula. exog_others : {ndarray, list[str]} Any other exogenous, explanatory variables. If a list of strings is given, each item is a term in formula. You can use arbitrary transformations as with a formula. The effect of these variables will be removed by OLS regression. data : {DataFrame, dict} Some kind of data structure with names if the other variables are given as strings. title_kwargs : dict Keyword arguments to pass on for the title. The key to control the fonts is fontdict. obs_labels : {bool, array_like} Whether or not to annotate the plot points with their observation labels. If obs_labels is a boolean, the point labels will try to do the right thing. First it will try to use the index of data, then fall back to the index of exog_i. Alternatively, you may give an array-like object corresponding to the observation numbers. label_kwargs : dict Keyword arguments that control annotate for the observation labels. ax : AxesSubplot, optional If given, this subplot is used to plot in instead of a new figure being created. ret_coords : bool If True will return the coordinates of the points in the plot. You can use this to add your own annotations. eval_env : int Patsy eval environment if user functions and formulas are used in defining endog or exog. **kwargs The keyword arguments passed to plot for the points. Returns ------- fig : Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. coords : list, optional If ret_coords is True, return a tuple of arrays (x_coords, y_coords). See Also -------- plot_partregress_grid : Plot partial regression for a set of regressors. 
Notes ----- The slope of the fitted line is that of `exog_i` in the full multiple regression. The individual points can be used to assess the influence of points on the estimated coefficient. Examples -------- Load the Statewide Crime data set and plot partial regression of the rate of high school graduation (hs_grad) on the murder rate (murder). The effects of the percent of the population living in urban areas (urban), below the poverty line (poverty), and in a single person household (single) are removed by OLS regression. >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> crime_data = sm.datasets.statecrime.load_pandas() >>> sm.graphics.plot_partregress(endog='murder', exog_i='hs_grad', ... exog_others=['urban', 'poverty', 'single'], ... data=crime_data.data, obs_labels=False) >>> plt.show() .. plot:: plots/graphics_regression_partregress.py More detailed examples can be found in the Regression Plots notebook on the examples page.
plot_partregress
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def plot_partregress_grid(results, exog_idx=None, grid=None, fig=None): """ Plot partial regression for a set of regressors. Parameters ---------- results : Results instance A regression model results instance. exog_idx : {None, list[int], list[str]} The indices or column names of the exog used in the plot, default is all. grid : {None, tuple[int]} If grid is given, then it is used for the arrangement of the subplots. The format of grid is (nrows, ncols). If grid is None, then ncols is one if there is only one subplot, and two otherwise. fig : Figure, optional If given, this figure is simply returned. Otherwise a new figure is created. Returns ------- Figure If `fig` is None, the created figure. Otherwise `fig` itself. See Also -------- plot_partregress : Plot partial regression for a single regressor. plot_ccpr : Plot CCPR against one regressor Notes ----- A subplot is created for each explanatory variable given by exog_idx. The partial regression plot shows the relationship between the response and the given explanatory variable after removing the effect of all other explanatory variables in exog. References ---------- See http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/partregr.htm Examples -------- Using the state crime dataset, separately plot the effect of each variable on the outcome, the murder rate, while accounting for the effect of all other variables in the model, visualized with a grid of partial regression plots. >>> from statsmodels.graphics.regressionplots import plot_partregress_grid >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> fig = plt.figure(figsize=(8, 6)) >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> plot_partregress_grid(results, fig=fig) >>> plt.show() .. 
plot:: plots/graphics_regression_partregress_grid.py """ import pandas fig = utils.create_mpl_fig(fig) exog_name, exog_idx = utils.maybe_name_or_idx(exog_idx, results.model) # TODO: maybe add option for using wendog, wexog instead y = pandas.Series(results.model.endog, name=results.model.endog_names) exog = results.model.exog k_vars = exog.shape[1] # this function does not make sense if k_vars=1 nrows = (len(exog_idx) + 1) // 2 ncols = 1 if nrows == len(exog_idx) else 2 if grid is not None: nrows, ncols = grid if ncols > 1: title_kwargs = {"fontdict": {"fontsize": 'small'}} # for indexing purposes other_names = np.array(results.model.exog_names) for i, idx in enumerate(exog_idx): others = lrange(k_vars) others.pop(idx) exog_others = pandas.DataFrame(exog[:, others], columns=other_names[others]) ax = fig.add_subplot(nrows, ncols, i + 1) plot_partregress(y, pandas.Series(exog[:, idx], name=other_names[idx]), exog_others, ax=ax, title_kwargs=title_kwargs, obs_labels=False) ax.set_title("") fig.suptitle("Partial Regression Plot", fontsize="large") fig.tight_layout() fig.subplots_adjust(top=.95) return fig
Plot partial regression for a set of regressors. Parameters ---------- results : Results instance A regression model results instance. exog_idx : {None, list[int], list[str]} The indices or column names of the exog used in the plot, default is all. grid : {None, tuple[int]} If grid is given, then it is used for the arrangement of the subplots. The format of grid is (nrows, ncols). If grid is None, then ncols is one if there is only one subplot, and two otherwise. fig : Figure, optional If given, this figure is simply returned. Otherwise a new figure is created. Returns ------- Figure If `fig` is None, the created figure. Otherwise `fig` itself. See Also -------- plot_partregress : Plot partial regression for a single regressor. plot_ccpr : Plot CCPR against one regressor Notes ----- A subplot is created for each explanatory variable given by exog_idx. The partial regression plot shows the relationship between the response and the given explanatory variable after removing the effect of all other explanatory variables in exog. References ---------- See http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/partregr.htm Examples -------- Using the state crime dataset, separately plot the effect of each variable on the outcome, the murder rate, while accounting for the effect of all other variables in the model, visualized with a grid of partial regression plots. >>> from statsmodels.graphics.regressionplots import plot_partregress_grid >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> fig = plt.figure(figsize=(8, 6)) >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> plot_partregress_grid(results, fig=fig) >>> plt.show() .. plot:: plots/graphics_regression_partregress_grid.py
plot_partregress_grid
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
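The default subplot layout in ``plot_partregress_grid`` follows a small arithmetic rule; isolated as a helper (hypothetical name ``default_grid``), it reads:

```python
def default_grid(n_plots, grid=None):
    """Layout rule used by plot_partregress_grid (hypothetical helper
    name): two columns with enough rows for all subplots, collapsing to
    a single column only when there is exactly one subplot; an explicit
    (nrows, ncols) tuple overrides the rule."""
    nrows = (n_plots + 1) // 2
    ncols = 1 if nrows == n_plots else 2
    if grid is not None:
        nrows, ncols = grid
    return nrows, ncols
```

Per this rule a single column is only used when there is exactly one subplot; two subplots land in one row of two columns.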
def plot_ccpr(results, exog_idx, ax=None): """ Plot CCPR against one regressor. Generates a component and component-plus-residual (CCPR) plot. Parameters ---------- results : result instance A regression results instance. exog_idx : {int, str} Exogenous, explanatory variable. If string is given, it should be the variable name that you want to use, and you can use arbitrary translations as with a formula. ax : AxesSubplot, optional If given, it is used to plot in instead of a new figure being created. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. See Also -------- plot_ccpr_grid : Creates CCPR plot for multiple regressors in a plot grid. Notes ----- The CCPR plot provides a way to judge the effect of one regressor on the response variable by taking into account the effects of the other independent variables. The partial residuals plot is defined as Residuals + B_i*X_i versus X_i. The component adds the B_i*X_i versus X_i to show where the fitted line would lie. Care should be taken if X_i is highly correlated with any of the other independent variables. If this is the case, the variance evident in the plot will be an underestimate of the true variance. References ---------- http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ccpr.htm Examples -------- Using the state crime dataset plot the effect of the rate of single households ('single') on the murder rate while accounting for high school graduation rate ('hs_grad'), percentage of people in an urban area, and rate of poverty ('poverty'). >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> sm.graphics.plot_ccpr(results, 'single') >>> plt.show() .. 
plot:: plots/graphics_regression_ccpr.py """ fig, ax = utils.create_mpl_ax(ax) exog_name, exog_idx = utils.maybe_name_or_idx(exog_idx, results.model) results = maybe_unwrap_results(results) x1 = results.model.exog[:, exog_idx] #namestr = ' for %s' % self.name if self.name else '' x1beta = x1*results.params[exog_idx] ax.plot(x1, x1beta + results.resid, 'o') from statsmodels.tools.tools import add_constant mod = OLS(x1beta, add_constant(x1)).fit() params = mod.params fig = abline_plot(*params, **dict(ax=ax)) #ax.plot(x1, x1beta, '-') ax.set_title('Component and component plus residual plot') ax.set_ylabel("Residual + %s*beta_%d" % (exog_name, exog_idx)) ax.set_xlabel("%s" % exog_name) return fig
Plot CCPR against one regressor. Generates a component and component-plus-residual (CCPR) plot. Parameters ---------- results : result instance A regression results instance. exog_idx : {int, str} Exogenous, explanatory variable. If string is given, it should be the variable name that you want to use, and you can use arbitrary translations as with a formula. ax : AxesSubplot, optional If given, it is used to plot in instead of a new figure being created. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. See Also -------- plot_ccpr_grid : Creates CCPR plot for multiple regressors in a plot grid. Notes ----- The CCPR plot provides a way to judge the effect of one regressor on the response variable by taking into account the effects of the other independent variables. The partial residuals plot is defined as Residuals + B_i*X_i versus X_i. The component adds the B_i*X_i versus X_i to show where the fitted line would lie. Care should be taken if X_i is highly correlated with any of the other independent variables. If this is the case, the variance evident in the plot will be an underestimate of the true variance. References ---------- http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ccpr.htm Examples -------- Using the state crime dataset plot the effect of the rate of single households ('single') on the murder rate while accounting for high school graduation rate ('hs_grad'), percentage of people in an urban area, and rate of poverty ('poverty'). >>> import statsmodels.api as sm >>> import matplotlib.pyplot as plt >>> import statsmodels.formula.api as smf >>> crime_data = sm.datasets.statecrime.load_pandas() >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single', ... data=crime_data.data).fit() >>> sm.graphics.plot_ccpr(results, 'single') >>> plt.show() .. plot:: plots/graphics_regression_ccpr.py
plot_ccpr
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def plot_ccpr_grid(results, exog_idx=None, grid=None, fig=None):
    """
    Generate CCPR plots against a set of regressors, plot in a grid.

    Generates a grid of component and component-plus-residual (CCPR)
    plots.

    Parameters
    ----------
    results : result instance
        A results instance with exog and params.
    exog_idx : None or list of int
        The indices or column names of the exog used in the plot.
    grid : None or tuple of int (nrows, ncols)
        If grid is given, then it is used for the arrangement of the
        subplots.  If grid is None, then ncols is one if there are at most
        2 subplots, and two otherwise.
    fig : Figure, optional
        If given, this figure is simply returned.  Otherwise a new figure
        is created.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    plot_ccpr : Creates CCPR plot for a single regressor.

    Notes
    -----
    Partial residual plots are formed as::

        Res + Betahat(i)*Xi versus Xi

    and CCPR adds::

        Betahat(i)*Xi versus Xi

    References
    ----------
    See http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ccpr.htm

    Examples
    --------
    Using the state crime dataset separately plot the effect of each
    variable on the outcome, the murder rate, while accounting for the
    effect of all other variables in the model.

    >>> import statsmodels.api as sm
    >>> import matplotlib.pyplot as plt
    >>> import statsmodels.formula.api as smf

    >>> fig = plt.figure(figsize=(8, 8))
    >>> crime_data = sm.datasets.statecrime.load_pandas()
    >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single',
    ...                   data=crime_data.data).fit()
    >>> sm.graphics.plot_ccpr_grid(results, fig=fig)
    >>> plt.show()

    ..
plot:: plots/graphics_regression_ccpr_grid.py """ fig = utils.create_mpl_fig(fig) exog_name, exog_idx = utils.maybe_name_or_idx(exog_idx, results.model) if grid is not None: nrows, ncols = grid else: if len(exog_idx) > 2: nrows = int(np.ceil(len(exog_idx)/2.)) ncols = 2 else: nrows = len(exog_idx) ncols = 1 seen_constant = 0 for i, idx in enumerate(exog_idx): if results.model.exog[:, idx].var() == 0: seen_constant = 1 continue ax = fig.add_subplot(nrows, ncols, i+1-seen_constant) fig = plot_ccpr(results, exog_idx=idx, ax=ax) ax.set_title("") fig.suptitle("Component-Component Plus Residual Plot", fontsize="large") fig.tight_layout() fig.subplots_adjust(top=.95) return fig
Generate CCPR plots against a set of regressors, plot in a grid.

    Generates a grid of component and component-plus-residual (CCPR)
    plots.

    Parameters
    ----------
    results : result instance
        A results instance with exog and params.
    exog_idx : None or list of int
        The indices or column names of the exog used in the plot.
    grid : None or tuple of int (nrows, ncols)
        If grid is given, then it is used for the arrangement of the
        subplots.  If grid is None, then ncols is one if there are at most
        2 subplots, and two otherwise.
    fig : Figure, optional
        If given, this figure is simply returned.  Otherwise a new figure
        is created.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    plot_ccpr : Creates CCPR plot for a single regressor.

    Notes
    -----
    Partial residual plots are formed as::

        Res + Betahat(i)*Xi versus Xi

    and CCPR adds::

        Betahat(i)*Xi versus Xi

    References
    ----------
    See http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ccpr.htm

    Examples
    --------
    Using the state crime dataset separately plot the effect of each
    variable on the outcome, the murder rate, while accounting for the
    effect of all other variables in the model.

    >>> import statsmodels.api as sm
    >>> import matplotlib.pyplot as plt
    >>> import statsmodels.formula.api as smf

    >>> fig = plt.figure(figsize=(8, 8))
    >>> crime_data = sm.datasets.statecrime.load_pandas()
    >>> results = smf.ols('murder ~ hs_grad + urban + poverty + single',
    ...                   data=crime_data.data).fit()
    >>> sm.graphics.plot_ccpr_grid(results, fig=fig)
    >>> plt.show()

    .. plot:: plots/graphics_regression_ccpr_grid.py
plot_ccpr_grid
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
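The subplot-arrangement rule used in the body above (one column for up to two subplots, otherwise two columns with as many rows as needed) can be isolated as a small helper; a sketch, where `ccpr_grid_shape` is an illustrative name and not a statsmodels function:

```python
import numpy as np

def ccpr_grid_shape(nplots):
    # Mirrors the rule plot_ccpr_grid applies when `grid` is None:
    # one column for at most two subplots, otherwise two columns.
    if nplots > 2:
        return int(np.ceil(nplots / 2.0)), 2
    return nplots, 1
```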
def abline_plot(intercept=None, slope=None, horiz=None, vert=None,
                model_results=None, ax=None, **kwargs):
    """
    Plot a line given an intercept and slope.

    Parameters
    ----------
    intercept : float
        The intercept of the line.
    slope : float
        The slope of the line.
    horiz : float or array_like
        Data for horizontal lines on the y-axis.
    vert : array_like
        Data for vertical lines on the x-axis.
    model_results : statsmodels results instance
        Any object that has a two-value `params` attribute.  Assumed that it
        is (intercept, slope).
    ax : axes, optional
        Matplotlib axes instance.
    **kwargs
        Options passed to matplotlib.pyplot.plot.

    Returns
    -------
    Figure
        The figure given by `ax.figure` or a new instance.

    Examples
    --------
    >>> import numpy as np
    >>> import statsmodels.api as sm

    >>> np.random.seed(12345)
    >>> X = sm.add_constant(np.random.normal(0, 20, size=30))
    >>> y = np.dot(X, [25, 3.5]) + np.random.normal(0, 30, size=30)
    >>> mod = sm.OLS(y, X).fit()
    >>> fig = sm.graphics.abline_plot(model_results=mod)
    >>> ax = fig.axes[0]
    >>> ax.scatter(X[:, 1], y)
    >>> ax.margins(.1)
    >>> import matplotlib.pyplot as plt
    >>> plt.show()

    ..
plot:: plots/graphics_regression_abline.py
    """
    if ax is not None:  # get axis limits first thing, do not change these
        x = ax.get_xlim()
    else:
        x = None

    fig, ax = utils.create_mpl_ax(ax)

    if model_results:
        intercept, slope = model_results.params
        if x is None:
            x = [model_results.model.exog[:, 1].min(),
                 model_results.model.exog[:, 1].max()]
    else:
        if intercept is None or slope is None:
            raise ValueError("specify both slope and intercept, or "
                             "model_results")
        if x is None:
            x = ax.get_xlim()

    data_y = [x[0]*slope+intercept, x[1]*slope+intercept]
    ax.set_xlim(x)
    #ax.set_ylim(y)

    from matplotlib.lines import Line2D

    class ABLine2D(Line2D):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.id_xlim_callback = None
            self.id_ylim_callback = None

        def remove(self):
            ax = self.axes
            if self.id_xlim_callback:
                ax.callbacks.disconnect(self.id_xlim_callback)
            if self.id_ylim_callback:
                ax.callbacks.disconnect(self.id_ylim_callback)
            super().remove()

        def update_datalim(self, ax):
            ax.set_autoscale_on(False)
            children = ax.get_children()
            ablines = [child for child in children if child is self]
            abline = ablines[0]
            x = ax.get_xlim()
            y = [x[0] * slope + intercept, x[1] * slope + intercept]
            abline.set_data(x, y)
            ax.figure.canvas.draw()

    # TODO: how to intercept something like a margins call and adjust?
    line = ABLine2D(x, data_y, **kwargs)
    ax.add_line(line)
    line.id_xlim_callback = ax.callbacks.connect('xlim_changed',
                                                 line.update_datalim)
    line.id_ylim_callback = ax.callbacks.connect('ylim_changed',
                                                 line.update_datalim)

    if horiz:
        ax.hlines(horiz, *x)
    if vert:
        ax.vlines(vert, *ax.get_ylim())
    return fig
Plot a line given an intercept and slope.

    Parameters
    ----------
    intercept : float
        The intercept of the line.
    slope : float
        The slope of the line.
    horiz : float or array_like
        Data for horizontal lines on the y-axis.
    vert : array_like
        Data for vertical lines on the x-axis.
    model_results : statsmodels results instance
        Any object that has a two-value `params` attribute.  Assumed that it
        is (intercept, slope).
    ax : axes, optional
        Matplotlib axes instance.
    **kwargs
        Options passed to matplotlib.pyplot.plot.

    Returns
    -------
    Figure
        The figure given by `ax.figure` or a new instance.

    Examples
    --------
    >>> import numpy as np
    >>> import statsmodels.api as sm

    >>> np.random.seed(12345)
    >>> X = sm.add_constant(np.random.normal(0, 20, size=30))
    >>> y = np.dot(X, [25, 3.5]) + np.random.normal(0, 30, size=30)
    >>> mod = sm.OLS(y, X).fit()
    >>> fig = sm.graphics.abline_plot(model_results=mod)
    >>> ax = fig.axes[0]
    >>> ax.scatter(X[:, 1], y)
    >>> ax.margins(.1)
    >>> import matplotlib.pyplot as plt
    >>> plt.show()

    .. plot:: plots/graphics_regression_abline.py
abline_plot
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def ceres_resids(results, focus_exog, frac=0.66, cond_means=None): """ Calculate the CERES residuals (Conditional Expectation Partial Residuals) for a fitted model. Parameters ---------- results : model results instance The fitted model for which the CERES residuals are calculated. focus_exog : int The column of results.model.exog used as the 'focus variable'. frac : float, optional Lowess smoothing parameter for estimating the conditional means. Not used if `cond_means` is provided. cond_means : array_like, optional If provided, the columns of this array are the conditional means E[exog | focus exog], where exog ranges over some or all of the columns of exog other than focus exog. If this is an empty nx0 array, the conditional means are treated as being zero. If None, the conditional means are estimated. Returns ------- An array containing the CERES residuals. Notes ----- If `cond_means` is not provided, it is obtained by smoothing each column of exog (except the focus column) against the focus column. Currently only supports GLM, GEE, and OLS models. """ model = results.model if not isinstance(model, (GLM, GEE, OLS)): raise ValueError("ceres residuals not available for %s" % model.__class__.__name__) focus_exog, focus_col = utils.maybe_name_or_idx(focus_exog, model) # Indices of non-focus columns ix_nf = range(len(results.params)) ix_nf = list(ix_nf) ix_nf.pop(focus_col) nnf = len(ix_nf) # Estimate the conditional means if not provided. if cond_means is None: # Below we calculate E[x | focus] where x is each column other # than the focus column. We do not want the intercept when we do # this so we remove it here. pexog = model.exog[:, ix_nf] pexog -= pexog.mean(0) u, s, vt = np.linalg.svd(pexog, 0) ii = np.flatnonzero(s > 1e-6) pexog = u[:, ii] fcol = model.exog[:, focus_col] cond_means = np.empty((len(fcol), pexog.shape[1])) for j in range(pexog.shape[1]): # Get the fitted values for column i given the other # columns (skip the intercept). 
y0 = pexog[:, j] cf = lowess(y0, fcol, frac=frac, return_sorted=False) cond_means[:, j] = cf new_exog = np.concatenate((model.exog[:, ix_nf], cond_means), axis=1) # Refit the model using the adjusted exog values klass = model.__class__ init_kwargs = model._get_init_kwds() new_model = klass(model.endog, new_exog, **init_kwargs) new_result = new_model.fit() # The partial residual, with respect to l(x2) (notation of Cook 1998) presid = model.endog - new_result.fittedvalues if isinstance(model, (GLM, GEE)): presid *= model.family.link.deriv(new_result.fittedvalues) if new_exog.shape[1] > nnf: presid += np.dot(new_exog[:, nnf:], new_result.params[nnf:]) return presid
Calculate the CERES residuals (Conditional Expectation Partial Residuals) for a fitted model. Parameters ---------- results : model results instance The fitted model for which the CERES residuals are calculated. focus_exog : int The column of results.model.exog used as the 'focus variable'. frac : float, optional Lowess smoothing parameter for estimating the conditional means. Not used if `cond_means` is provided. cond_means : array_like, optional If provided, the columns of this array are the conditional means E[exog | focus exog], where exog ranges over some or all of the columns of exog other than focus exog. If this is an empty nx0 array, the conditional means are treated as being zero. If None, the conditional means are estimated. Returns ------- An array containing the CERES residuals. Notes ----- If `cond_means` is not provided, it is obtained by smoothing each column of exog (except the focus column) against the focus column. Currently only supports GLM, GEE, and OLS models.
ceres_resids
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def partial_resids(results, focus_exog):
    """
    Returns partial residuals for a fitted model with respect to a
    'focus predictor'.

    Parameters
    ----------
    results : results instance
        A fitted regression model.
    focus_exog : {int, str}
        The column index of model.exog, or a variable name, with respect
        to which the partial residuals are calculated.

    Returns
    -------
    An array of partial residuals.

    References
    ----------
    RD Cook and R Croos-Dabrera (1998).  Partial residual plots in
    generalized linear models.  Journal of the American Statistical
    Association, 93:442.
    """

    # TODO: could be a method of results
    # TODO: see Cook et al (1998) for a more general definition

    # The calculation follows equation (8) from Cook's paper.
    model = results.model
    resid = model.endog - results.predict()

    if isinstance(model, (GLM, GEE)):
        resid *= model.family.link.deriv(results.fittedvalues)
    elif isinstance(model, (OLS, GLS, WLS)):
        pass  # No need to do anything
    else:
        raise ValueError("Partial residuals for '%s' not implemented."
                         % type(model))

    if isinstance(focus_exog, str):
        focus_col = model.exog_names.index(focus_exog)
    else:
        focus_col = focus_exog

    focus_val = results.params[focus_col] * model.exog[:, focus_col]

    return focus_val + resid
Returns partial residuals for a fitted model with respect to a
    'focus predictor'.

    Parameters
    ----------
    results : results instance
        A fitted regression model.
    focus_exog : {int, str}
        The column index of model.exog, or a variable name, with respect
        to which the partial residuals are calculated.

    Returns
    -------
    An array of partial residuals.

    References
    ----------
    RD Cook and R Croos-Dabrera (1998).  Partial residual plots in
    generalized linear models.  Journal of the American Statistical
    Association, 93:442.
partial_resids
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def added_variable_resids(results, focus_exog, resid_type=None,
                          use_glm_weights=True, fit_kwargs=None):
    """
    Residualize the endog variable and a 'focus' exog variable in a
    regression model with respect to the other exog variables.

    Parameters
    ----------
    results : regression results instance
        A fitted model including the focus exog and all other
        predictors of interest.
    focus_exog : {int, str}
        The column of results.model.exog or a variable name that is
        to be residualized against the other predictors.
    resid_type : str
        The type of residuals to use for the dependent variable.  If
        None, uses `resid_deviance` for GLM/GEE and `resid` otherwise.
    use_glm_weights : bool
        Only used if the model is a GLM or GEE.  If True, the
        residuals for the focus predictor are computed using WLS, with
        the weights obtained from the IRLS calculations for fitting
        the GLM.  If False, unweighted regression is used.
    fit_kwargs : dict, optional
        Keyword arguments to be passed to fit when refitting the
        model.

    Returns
    -------
    endog_resid : array_like
        The residuals for the original endog
    focus_exog_resid : array_like
        The residuals for the focus predictor

    Notes
    -----
    The 'focus variable' residuals are always obtained using linear
    regression.

    Currently only GLM, GEE, and OLS models are supported.
""" model = results.model if not isinstance(model, (GEE, GLM, OLS)): raise ValueError("model type %s not supported for added variable residuals" % model.__class__.__name__) exog = model.exog endog = model.endog focus_exog, focus_col = utils.maybe_name_or_idx(focus_exog, model) focus_exog_vals = exog[:, focus_col] # Default residuals if resid_type is None: if isinstance(model, (GEE, GLM)): resid_type = "resid_deviance" else: resid_type = "resid" ii = range(exog.shape[1]) ii = list(ii) ii.pop(focus_col) reduced_exog = exog[:, ii] start_params = results.params[ii] klass = model.__class__ kwargs = model._get_init_kwds() new_model = klass(endog, reduced_exog, **kwargs) args = {"start_params": start_params} if fit_kwargs is not None: args.update(fit_kwargs) new_result = new_model.fit(**args) if not getattr(new_result, "converged", True): raise ValueError("fit did not converge when calculating added variable residuals") try: endog_resid = getattr(new_result, resid_type) except AttributeError: raise ValueError("'%s' residual type not available" % resid_type) import statsmodels.regression.linear_model as lm if isinstance(model, (GLM, GEE)) and use_glm_weights: weights = model.family.weights(results.fittedvalues) if hasattr(model, "data_weights"): weights = weights * model.data_weights lm_results = lm.WLS(focus_exog_vals, reduced_exog, weights).fit() else: lm_results = lm.OLS(focus_exog_vals, reduced_exog).fit() focus_exog_resid = lm_results.resid return endog_resid, focus_exog_resid
Residualize the endog variable and a 'focus' exog variable in a
    regression model with respect to the other exog variables.

    Parameters
    ----------
    results : regression results instance
        A fitted model including the focus exog and all other
        predictors of interest.
    focus_exog : {int, str}
        The column of results.model.exog or a variable name that is
        to be residualized against the other predictors.
    resid_type : str
        The type of residuals to use for the dependent variable.  If
        None, uses `resid_deviance` for GLM/GEE and `resid` otherwise.
    use_glm_weights : bool
        Only used if the model is a GLM or GEE.  If True, the
        residuals for the focus predictor are computed using WLS, with
        the weights obtained from the IRLS calculations for fitting
        the GLM.  If False, unweighted regression is used.
    fit_kwargs : dict, optional
        Keyword arguments to be passed to fit when refitting the
        model.

    Returns
    -------
    endog_resid : array_like
        The residuals for the original endog
    focus_exog_resid : array_like
        The residuals for the focus predictor

    Notes
    -----
    The 'focus variable' residuals are always obtained using linear
    regression.

    Currently only GLM, GEE, and OLS models are supported.
added_variable_resids
python
statsmodels/statsmodels
statsmodels/graphics/regressionplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/regressionplots.py
BSD-3-Clause
def violinplot(data, ax=None, labels=None, positions=None, side='both',
               show_boxplot=True, plot_opts=None):
    """
    Make a violin plot of each dataset in the `data` sequence.

    A violin plot is a boxplot combined with a kernel density estimate of
    the probability density function per point.

    Parameters
    ----------
    data : sequence[array_like]
        Data arrays, one array per value in `positions`.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    labels : list[str], optional
        Tick labels for the horizontal axis.  If not given, integers
        ``1..len(data)`` are used.
    positions : array_like, optional
        Position array, used as the horizontal axis of the plot.  If not
        given, spacing of the violins will be equidistant.
    side : {'both', 'left', 'right'}, optional
        How to plot the violin.  Default is 'both'.  The 'left', 'right'
        options can be used to create asymmetric violin plots.
    show_boxplot : bool, optional
        Whether or not to show normal box plots on top of the violins.
        Default is True.
    plot_opts : dict, optional
        A dictionary with plotting options.  Any of the following can be
        provided, if not present in `plot_opts` the defaults will be used::

          - 'violin_fc', MPL color.  Fill color for violins.  Default is 'y'.
          - 'violin_ec', MPL color.  Edge color for violins.  Default is 'k'.
          - 'violin_lw', scalar.  Edge linewidth for violins.  Default is 1.
          - 'violin_alpha', float.  Transparency of violins.  Default is 0.5.
          - 'cutoff', bool.  If True, limit violin range to data range.
            Default is False.
          - 'cutoff_val', scalar.  Where to cut off violins if `cutoff` is
            True.  Default is 1.5 standard deviations.
          - 'cutoff_type', {'std', 'abs'}.  Whether cutoff value is absolute,
            or in standard deviations.  Default is 'std'.
          - 'violin_width' : float.  Relative width of violins.  Max
            available space is 1, default is 0.8.
          - 'label_fontsize', MPL fontsize.  Adjusts fontsize only if given.
          - 'label_rotation', scalar.  Adjusts label rotation only if given.
            Specify in degrees.
          - 'bw_factor', scalar or callable, optional.  Bandwidth factor
            passed to scipy's `gaussian_kde` as `bw_method`.  Default is
            None.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    beanplot : Bean plot, builds on `violinplot`.
    matplotlib.pyplot.boxplot : Standard boxplot.

    Notes
    -----
    The appearance of violins can be customized with `plot_opts`.  If
    customization of boxplot elements is required, set `show_boxplot` to
    False and plot it on top of the violins by calling the Matplotlib
    `boxplot` function directly.  For example::

        violinplot(data, ax=ax, show_boxplot=False)
        ax.boxplot(data, sym='cv', whis=2.5)

    It can happen that the axis labels or tick labels fall outside the plot
    area, especially with rotated labels on the horizontal axis.  With
    Matplotlib 1.1 or higher, this can easily be fixed by calling
    ``ax.tight_layout()``.  With older Matplotlib one has to use ``plt.rc``
    or ``plt.rcParams`` to fix this, for example::

        plt.rc('figure.subplot', bottom=0.25)
        violinplot(data, ax=ax)

    References
    ----------
    J.L. Hintze and R.D. Nelson, "Violin Plots: A Box Plot-Density Trace
    Synergism", The American Statistician, Vol. 52, pp.181-84, 1998.

    Examples
    --------
    We use the American National Election Survey 1996 dataset, which has
    Party Identification of respondents as independent variable and (among
    other data) age as dependent variable.

    >>> data = sm.datasets.anes96.load_pandas()
    >>> party_ID = np.arange(7)
    >>> labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
    ...           "Independent-Independent", "Independent-Republican",
    ...           "Weak Republican", "Strong Republican"]

    Group age by party ID, and create a violin plot with it:

    >>> plt.rcParams['figure.subplot.bottom'] = 0.23  # keep labels visible
    >>> age = [data.exog['age'][data.endog == id] for id in party_ID]
    >>> fig = plt.figure()
    >>> ax = fig.add_subplot(111)
    >>> sm.graphics.violinplot(age, ax=ax, labels=labels,
    ...                        plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
    ...
'label_fontsize':'small', ... 'label_rotation':30}) >>> ax.set_xlabel("Party identification of respondent.") >>> ax.set_ylabel("Age") >>> plt.show() .. plot:: plots/graphics_boxplot_violinplot.py """ plot_opts = {} if plot_opts is None else plot_opts if max([np.size(arr) for arr in data]) == 0: msg = "No Data to make Violin: Try again!" raise ValueError(msg) fig, ax = utils.create_mpl_ax(ax) data = list(map(np.asarray, data)) if positions is None: positions = np.arange(len(data)) + 1 # Determine available horizontal space for each individual violin. pos_span = np.max(positions) - np.min(positions) width = np.min([0.15 * np.max([pos_span, 1.]), plot_opts.get('violin_width', 0.8) / 2.]) # Plot violins. for pos_data, pos in zip(data, positions): _single_violin(ax, pos, pos_data, width, side, plot_opts) if show_boxplot: try: ax.boxplot( data, notch=1, positions=positions, orientation='vertical' ) except TypeError: # Remove after Matplotlib 3.10 is the minimum ax.boxplot(data, notch=1, positions=positions, vert=1) # Set ticks and tick labels of horizontal axis. _set_ticks_labels(ax, data, labels, positions, plot_opts) return fig
Make a violin plot of each dataset in the `data` sequence.

    A violin plot is a boxplot combined with a kernel density estimate of
    the probability density function per point.

    Parameters
    ----------
    data : sequence[array_like]
        Data arrays, one array per value in `positions`.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    labels : list[str], optional
        Tick labels for the horizontal axis.  If not given, integers
        ``1..len(data)`` are used.
    positions : array_like, optional
        Position array, used as the horizontal axis of the plot.  If not
        given, spacing of the violins will be equidistant.
    side : {'both', 'left', 'right'}, optional
        How to plot the violin.  Default is 'both'.  The 'left', 'right'
        options can be used to create asymmetric violin plots.
    show_boxplot : bool, optional
        Whether or not to show normal box plots on top of the violins.
        Default is True.
    plot_opts : dict, optional
        A dictionary with plotting options.  Any of the following can be
        provided, if not present in `plot_opts` the defaults will be used::

          - 'violin_fc', MPL color.  Fill color for violins.  Default is 'y'.
          - 'violin_ec', MPL color.  Edge color for violins.  Default is 'k'.
          - 'violin_lw', scalar.  Edge linewidth for violins.  Default is 1.
          - 'violin_alpha', float.  Transparency of violins.  Default is 0.5.
          - 'cutoff', bool.  If True, limit violin range to data range.
            Default is False.
          - 'cutoff_val', scalar.  Where to cut off violins if `cutoff` is
            True.  Default is 1.5 standard deviations.
          - 'cutoff_type', {'std', 'abs'}.  Whether cutoff value is absolute,
            or in standard deviations.  Default is 'std'.
          - 'violin_width' : float.  Relative width of violins.  Max
            available space is 1, default is 0.8.
          - 'label_fontsize', MPL fontsize.  Adjusts fontsize only if given.
          - 'label_rotation', scalar.  Adjusts label rotation only if given.
            Specify in degrees.
          - 'bw_factor', scalar or callable, optional.  Bandwidth factor
            passed to scipy's `gaussian_kde` as `bw_method`.  Default is
            None.
    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    beanplot : Bean plot, builds on `violinplot`.
    matplotlib.pyplot.boxplot : Standard boxplot.

    Notes
    -----
    The appearance of violins can be customized with `plot_opts`.  If
    customization of boxplot elements is required, set `show_boxplot` to
    False and plot it on top of the violins by calling the Matplotlib
    `boxplot` function directly.  For example::

        violinplot(data, ax=ax, show_boxplot=False)
        ax.boxplot(data, sym='cv', whis=2.5)

    It can happen that the axis labels or tick labels fall outside the plot
    area, especially with rotated labels on the horizontal axis.  With
    Matplotlib 1.1 or higher, this can easily be fixed by calling
    ``ax.tight_layout()``.  With older Matplotlib one has to use ``plt.rc``
    or ``plt.rcParams`` to fix this, for example::

        plt.rc('figure.subplot', bottom=0.25)
        violinplot(data, ax=ax)

    References
    ----------
    J.L. Hintze and R.D. Nelson, "Violin Plots: A Box Plot-Density Trace
    Synergism", The American Statistician, Vol. 52, pp.181-84, 1998.

    Examples
    --------
    We use the American National Election Survey 1996 dataset, which has
    Party Identification of respondents as independent variable and (among
    other data) age as dependent variable.

    >>> data = sm.datasets.anes96.load_pandas()
    >>> party_ID = np.arange(7)
    >>> labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
    ...           "Independent-Independent", "Independent-Republican",
    ...           "Weak Republican", "Strong Republican"]

    Group age by party ID, and create a violin plot with it:

    >>> plt.rcParams['figure.subplot.bottom'] = 0.23  # keep labels visible
    >>> age = [data.exog['age'][data.endog == id] for id in party_ID]
    >>> fig = plt.figure()
    >>> ax = fig.add_subplot(111)
    >>> sm.graphics.violinplot(age, ax=ax, labels=labels,
    ...                        plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
    ...                        'label_fontsize':'small',
    ...
'label_rotation':30}) >>> ax.set_xlabel("Party identification of respondent.") >>> ax.set_ylabel("Age") >>> plt.show() .. plot:: plots/graphics_boxplot_violinplot.py
violinplot
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
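The violin envelope itself (built inside the `_single_violin` helper) is just a scaled Gaussian KDE evaluated on a grid extended beyond the data range. A standalone sketch of that computation, assuming the default cutoff behaviour described in `plot_opts` (1.5 standard deviations on each side):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
pos_data = rng.normal(size=500)

# Density estimate and the evaluation grid, extended by 1.5 standard
# deviations on each side, as _violin_range does when cutoff is False.
kde = gaussian_kde(pos_data)
s = 1.5 * np.std(pos_data)
xvals = np.linspace(pos_data.min() - s, pos_data.max() + s, 100)

# Scale the density so the widest point of the violin fills `width`.
width = 0.4
violin = kde.evaluate(xvals)
violin = width * violin / violin.max()
```

Mirroring `violin` around a position (`pos - violin`, `pos + violin`) and filling between the two curves produces one violin of the plot.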
def _violin_range(pos_data, plot_opts): """Return array with correct range, with which violins can be plotted.""" cutoff = plot_opts.get('cutoff', False) cutoff_type = plot_opts.get('cutoff_type', 'std') cutoff_val = plot_opts.get('cutoff_val', 1.5) s = 0.0 if not cutoff: if cutoff_type == 'std': s = cutoff_val * np.std(pos_data) else: s = cutoff_val x_lower = kde.dataset.min() - s x_upper = kde.dataset.max() + s return np.linspace(x_lower, x_upper, 100)
Return array with correct range, with which violins can be plotted.
_single_violin._violin_range
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
def _single_violin(ax, pos, pos_data, width, side, plot_opts): """""" bw_factor = plot_opts.get('bw_factor', None) def _violin_range(pos_data, plot_opts): """Return array with correct range, with which violins can be plotted.""" cutoff = plot_opts.get('cutoff', False) cutoff_type = plot_opts.get('cutoff_type', 'std') cutoff_val = plot_opts.get('cutoff_val', 1.5) s = 0.0 if not cutoff: if cutoff_type == 'std': s = cutoff_val * np.std(pos_data) else: s = cutoff_val x_lower = kde.dataset.min() - s x_upper = kde.dataset.max() + s return np.linspace(x_lower, x_upper, 100) pos_data = np.asarray(pos_data) # Kernel density estimate for data at this position. kde = gaussian_kde(pos_data, bw_method=bw_factor) # Create violin for pos, scaled to the available space. xvals = _violin_range(pos_data, plot_opts) violin = kde.evaluate(xvals) violin = width * violin / violin.max() if side == 'both': envelope_l, envelope_r = (-violin + pos, violin + pos) elif side == 'right': envelope_l, envelope_r = (pos, violin + pos) elif side == 'left': envelope_l, envelope_r = (-violin + pos, pos) else: msg = "`side` parameter should be one of {'left', 'right', 'both'}." raise ValueError(msg) # Draw the violin. ax.fill_betweenx(xvals, envelope_l, envelope_r, facecolor=plot_opts.get('violin_fc', '#66c2a5'), edgecolor=plot_opts.get('violin_ec', 'k'), lw=plot_opts.get('violin_lw', 1), alpha=plot_opts.get('violin_alpha', 0.5)) return xvals, violin
Return array with correct range, with which violins can be plotted.
_single_violin
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
def _set_ticks_labels(ax, data, labels, positions, plot_opts):
    """Set ticks and labels on horizontal axis."""

    # Set xticks and limits.
    ax.set_xlim([np.min(positions) - 0.5, np.max(positions) + 0.5])
    ax.set_xticks(positions)

    label_fontsize = plot_opts.get('label_fontsize')
    label_rotation = plot_opts.get('label_rotation')
    if label_fontsize or label_rotation:
        from matplotlib.artist import setp

    if labels is not None:
        if len(labels) != len(data):
            msg = "Length of `labels` should equal length of `data`."
            raise ValueError(msg)

        xticknames = ax.set_xticklabels(labels)
        if label_fontsize:
            setp(xticknames, fontsize=label_fontsize)

        if label_rotation:
            setp(xticknames, rotation=label_rotation)

    return
Set ticks and labels on horizontal axis.
_set_ticks_labels
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
def beanplot(data, ax=None, labels=None, positions=None, side='both',
             jitter=False, plot_opts={}):
    """
    Bean plot of each dataset in a sequence.

    A bean plot is a combination of a `violinplot` (kernel density estimate
    of the probability density function per point) with a line-scatter plot
    of all individual data points.

    Parameters
    ----------
    data : sequence[array_like]
        Data arrays, one array per value in `positions`.
    ax : AxesSubplot
        If given, this subplot is used to plot in instead of a new figure
        being created.
    labels : list[str], optional
        Tick labels for the horizontal axis.  If not given, integers
        ``1..len(data)`` are used.
    positions : array_like, optional
        Position array, used as the horizontal axis of the plot.  If not
        given, spacing of the violins will be equidistant.
    side : {'both', 'left', 'right'}, optional
        How to plot the violin.  Default is 'both'.  The 'left', 'right'
        options can be used to create asymmetric violin plots.
    jitter : bool, optional
        If True, jitter markers within violin instead of plotting regular
        lines around the center.  This can be useful if the data is very
        dense.
    plot_opts : dict, optional
        A dictionary with plotting options.  All the options for `violinplot`
        can be specified, they will simply be passed to `violinplot`.  Options
        specific to `beanplot` are:

          - 'violin_width' : float.  Relative width of violins.  Max
            available space is 1, default is 0.8.
          - 'bean_color', MPL color.  Color of bean plot lines.  Default is
            'k'.  Also used for jitter marker edge color if `jitter` is True.
          - 'bean_size', scalar.  Line length as a fraction of maximum length.
            Default is 0.5.
          - 'bean_lw', scalar.  Linewidth, default is 0.5.
          - 'bean_show_mean', bool.  If True (default), show mean as a line.
          - 'bean_show_median', bool.  If True (default), show median as a
            marker.
          - 'bean_mean_color', MPL color.  Color of mean line.  Default is
            'b'.
          - 'bean_mean_lw', scalar.  Linewidth of mean line, default is 2.
          - 'bean_mean_size', scalar.  Line length as a fraction of maximum
            length.  Default is 0.5.
          - 'bean_median_color', MPL color.  Color of median marker.
            Default is 'r'.
          - 'bean_median_marker', MPL marker.  Marker type, default is '+'.
          - 'jitter_marker', MPL marker.  Marker type for ``jitter=True``.
            Default is 'o'.
          - 'jitter_marker_size', int.  Marker size.  Default is 4.
          - 'jitter_fc', MPL color.  Jitter marker face color.  Default is
            None.
          - 'bean_legend_text', str.  If given, add a legend with given text.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    violinplot : Violin plot, also used internally in `beanplot`.
    matplotlib.pyplot.boxplot : Standard boxplot.

    References
    ----------
    P. Kampstra, "Beanplot: A Boxplot Alternative for Visual Comparison of
    Distributions", J. Stat. Soft., Vol. 28, pp. 1-9, 2008.

    Examples
    --------
    We use the American National Election Survey 1996 dataset, which has
    Party Identification of respondents as independent variable and (among
    other data) age as dependent variable.

    >>> data = sm.datasets.anes96.load_pandas()
    >>> party_ID = np.arange(7)
    >>> labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
    ...           "Independent-Independent", "Independent-Republican",
    ...           "Weak Republican", "Strong Republican"]

    Group age by party ID, and create a violin plot with it:

    >>> plt.rcParams['figure.subplot.bottom'] = 0.23  # keep labels visible
    >>> age = [data.exog['age'][data.endog == id] for id in party_ID]
    >>> fig = plt.figure()
    >>> ax = fig.add_subplot(111)
    >>> sm.graphics.beanplot(age, ax=ax, labels=labels,
    ...                      plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
    ...                                 'label_fontsize':'small',
    ...                                 'label_rotation':30})
    >>> ax.set_xlabel("Party identification of respondent.")
    >>> ax.set_ylabel("Age")
    >>> plt.show()

    .. plot:: plots/graphics_boxplot_beanplot.py
    """
    fig, ax = utils.create_mpl_ax(ax)

    data = list(map(np.asarray, data))
    if positions is None:
        positions = np.arange(len(data)) + 1

    # Determine available horizontal space for each individual violin.
    pos_span = np.max(positions) - np.min(positions)
    violin_width = np.min([0.15 * np.max([pos_span, 1.]),
                           plot_opts.get('violin_width', 0.8) / 2.])
    bean_width = np.min([0.15 * np.max([pos_span, 1.]),
                         plot_opts.get('bean_size', 0.5) / 2.])
    bean_mean_width = np.min([0.15 * np.max([pos_span, 1.]),
                              plot_opts.get('bean_mean_size', 0.5) / 2.])

    legend_txt = plot_opts.get('bean_legend_text', None)
    for pos_data, pos in zip(data, positions):
        # Draw violins.
        xvals, violin = _single_violin(ax, pos, pos_data, violin_width, side,
                                       plot_opts)

        if jitter:
            # Draw data points at random coordinates within violin envelope.
            jitter_coord = pos + _jitter_envelope(pos_data, xvals, violin,
                                                  side)
            ax.plot(jitter_coord, pos_data, ls='',
                    marker=plot_opts.get('jitter_marker', 'o'),
                    ms=plot_opts.get('jitter_marker_size', 4),
                    mec=plot_opts.get('bean_color', 'k'),
                    mew=1, mfc=plot_opts.get('jitter_fc', 'none'),
                    label=legend_txt)
        else:
            # Draw bean lines.
            ax.hlines(pos_data, pos - bean_width, pos + bean_width,
                      lw=plot_opts.get('bean_lw', 0.5),
                      color=plot_opts.get('bean_color', 'k'),
                      label=legend_txt)

        # Show legend if required.
        if legend_txt is not None:
            _show_legend(ax)
            legend_txt = None  # ensure we get one entry per call to beanplot

        # Draw mean line.
        if plot_opts.get('bean_show_mean', True):
            ax.hlines(np.mean(pos_data), pos - bean_mean_width,
                      pos + bean_mean_width,
                      lw=plot_opts.get('bean_mean_lw', 2.),
                      color=plot_opts.get('bean_mean_color', 'b'))

        # Draw median marker.
        if plot_opts.get('bean_show_median', True):
            ax.plot(pos, np.median(pos_data),
                    marker=plot_opts.get('bean_median_marker', '+'),
                    color=plot_opts.get('bean_median_color', 'r'))

    # Set ticks and tick labels of horizontal axis.
    _set_ticks_labels(ax, data, labels, positions, plot_opts)

    return fig
Bean plot of each dataset in a sequence. A bean plot is a combination of a `violinplot` (kernel density estimate of the probability density function per point) with a line-scatter plot of all individual data points. Parameters ---------- data : sequence[array_like] Data arrays, one array per value in `positions`. ax : AxesSubplot If given, this subplot is used to plot in instead of a new figure being created. labels : list[str], optional Tick labels for the horizontal axis. If not given, integers ``1..len(data)`` are used. positions : array_like, optional Position array, used as the horizontal axis of the plot. If not given, spacing of the violins will be equidistant. side : {'both', 'left', 'right'}, optional How to plot the violin. Default is 'both'. The 'left', 'right' options can be used to create asymmetric violin plots. jitter : bool, optional If True, jitter markers within violin instead of plotting regular lines around the center. This can be useful if the data is very dense. plot_opts : dict, optional A dictionary with plotting options. All the options for `violinplot` can be specified, they will simply be passed to `violinplot`. Options specific to `beanplot` are: - 'violin_width' : float. Relative width of violins. Max available space is 1, default is 0.8. - 'bean_color', MPL color. Color of bean plot lines. Default is 'k'. Also used for jitter marker edge color if `jitter` is True. - 'bean_size', scalar. Line length as a fraction of maximum length. Default is 0.5. - 'bean_lw', scalar. Linewidth, default is 0.5. - 'bean_show_mean', bool. If True (default), show mean as a line. - 'bean_show_median', bool. If True (default), show median as a marker. - 'bean_mean_color', MPL color. Color of mean line. Default is 'b'. - 'bean_mean_lw', scalar. Linewidth of mean line, default is 2. - 'bean_mean_size', scalar. Line length as a fraction of maximum length. Default is 0.5. - 'bean_median_color', MPL color. Color of median marker. Default is 'r'. 
- 'bean_median_marker', MPL marker. Marker type, default is '+'. - 'jitter_marker', MPL marker. Marker type for ``jitter=True``. Default is 'o'. - 'jitter_marker_size', int. Marker size. Default is 4. - 'jitter_fc', MPL color. Jitter marker face color. Default is None. - 'bean_legend_text', str. If given, add a legend with given text. Returns ------- Figure If `ax` is None, the created figure. Otherwise the figure to which `ax` is connected. See Also -------- violinplot : Violin plot, also used internally in `beanplot`. matplotlib.pyplot.boxplot : Standard boxplot. References ---------- P. Kampstra, "Beanplot: A Boxplot Alternative for Visual Comparison of Distributions", J. Stat. Soft., Vol. 28, pp. 1-9, 2008. Examples -------- We use the American National Election Survey 1996 dataset, which has Party Identification of respondents as independent variable and (among other data) age as dependent variable. >>> data = sm.datasets.anes96.load_pandas() >>> party_ID = np.arange(7) >>> labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat", ... "Independent-Indpendent", "Independent-Republican", ... "Weak Republican", "Strong Republican"] Group age by party ID, and create a violin plot with it: >>> plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible >>> age = [data.exog['age'][data.endog == id] for id in party_ID] >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> sm.graphics.beanplot(age, ax=ax, labels=labels, ... plot_opts={'cutoff_val':5, 'cutoff_type':'abs', ... 'label_fontsize':'small', ... 'label_rotation':30}) >>> ax.set_xlabel("Party identification of respondent.") >>> ax.set_ylabel("Age") >>> plt.show() .. plot:: plots/graphics_boxplot_beanplot.py
beanplot
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
def _jitter_envelope(pos_data, xvals, violin, side):
    """Determine envelope for jitter markers."""
    if side == 'both':
        low, high = (-1., 1.)
    elif side == 'right':
        low, high = (0, 1.)
    elif side == 'left':
        low, high = (-1., 0)
    else:
        raise ValueError("`side` input incorrect: %s" % side)

    jitter_envelope = np.interp(pos_data, xvals, violin)
    jitter_coord = jitter_envelope * np.random.uniform(low=low, high=high,
                                                       size=pos_data.size)

    return jitter_coord
Determine envelope for jitter markers.
_jitter_envelope
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
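The envelope logic in `_jitter_envelope` can be sketched on its own: `np.interp` gives the violin's half-width at each data point, and a uniform draw in `[-1, 1]` (the `side='both'` case) places each marker inside that envelope. The Gaussian-shaped `violin` array and the variable names below are illustrative, not part of the module.

```python
import numpy as np

rng = np.random.default_rng(0)
xvals = np.linspace(-3, 3, 51)         # support of the density estimate
violin = np.exp(-xvals ** 2 / 2)       # half-width of the violin at each x
pos_data = np.array([-1.0, 0.0, 1.5])  # observed data points

# half-width of the envelope at each data point, then a random
# horizontal offset inside it ('both' sides -> uniform in [-1, 1])
envelope = np.interp(pos_data, xvals, violin)
offsets = envelope * rng.uniform(-1.0, 1.0, size=pos_data.size)
assert np.all(np.abs(offsets) <= envelope)
```

Because the offsets are scaled by the local envelope, dense regions of the distribution get proportionally wider jitter, which is what makes the markers fill the violin's outline.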
def _show_legend(ax):
    """Utility function to show legend."""
    leg = ax.legend(loc=1, shadow=True, fancybox=True, labelspacing=0.2,
                    borderpad=0.15)
    ltext = leg.get_texts()
    llines = leg.get_lines()
    leg.get_frame()

    from matplotlib.artist import setp
    setp(ltext, fontsize='small')
    setp(llines, linewidth=1)
Utility function to show legend.
_show_legend
python
statsmodels/statsmodels
statsmodels/graphics/boxplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/boxplots.py
BSD-3-Clause
def _normalize_split(proportion):
    """
    return a list of proportions of the available space given the division
    if only a number is given, it will assume a split in two pieces
    """
    if not iterable(proportion):
        if proportion == 0:
            proportion = array([0.0, 1.0])
        elif proportion >= 1:
            proportion = array([1.0, 0.0])
        elif proportion < 0:
            raise ValueError("proportions should be positive, "
                             "given value: {}".format(proportion))
        else:
            proportion = array([proportion, 1.0 - proportion])
    proportion = np.asarray(proportion, dtype=float)
    if np.any(proportion < 0):
        raise ValueError("proportions should be positive, "
                         "given value: {}".format(proportion))
    if np.allclose(proportion, 0):
        raise ValueError("at least one proportion should be greater than "
                         "zero; given value: {}".format(proportion))
    # ok, data are meaningful, so go on
    if len(proportion) < 2:
        return array([0.0, 1.0])
    left = r_[0, cumsum(proportion)]
    left /= left[-1] * 1.0
    return left
Return a list of proportions of the available space given the division. If only a number is given, it will assume a split in two pieces.
_normalize_split
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
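The cumulative-edge idea behind `_normalize_split` can be illustrated with a simplified sketch (the original's handling of zero, negative, and degenerate inputs is deliberately skipped, and `normalize_split` is a hypothetical name):

```python
import numpy as np

def normalize_split(proportion):
    """Left edges of the unit interval for a given division (simplified)."""
    if np.isscalar(proportion):
        # a single number p is treated as a split into two pieces [p, 1 - p]
        proportion = [proportion, 1.0 - proportion]
    prop = np.asarray(proportion, dtype=float)
    left = np.r_[0, np.cumsum(prop)]   # cumulative sums, 0 prepended
    return left / left[-1]             # normalize so the last edge is 1

edges = normalize_split([1, 1, 2])  # relative widths 1:1:2
```

Here `edges` comes out as `[0, 0.25, 0.5, 1]`: each consecutive pair of edges delimits one segment of the available space.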
def _split_rect(x, y, width, height, proportion, horizontal=True, gap=0.05):
    """
    Split the given rectangle into n segments whose proportions are
    specified along the given axis.  If a gap is inserted, the segments
    will be separated by a certain amount of space, retaining the relative
    proportion between them.  A gap of 1 corresponds to a plot that is half
    void, and the remaining half space is proportionally divided among the
    pieces.
    """
    x, y, w, h = float(x), float(y), float(width), float(height)
    if (w < 0) or (h < 0):
        raise ValueError("dimension of the square less than "
                         "zero w={} h={}".format(w, h))
    proportions = _normalize_split(proportion)

    # extract the starting point and the dimension of each subdivision
    # in respect to the unit square
    starting = proportions[:-1]
    amplitude = proportions[1:] - starting

    # how much each extrema is going to be displaced due to gaps
    starting += gap * np.arange(len(proportions) - 1)

    # how much the squares plus the gaps are extended
    extension = starting[-1] + amplitude[-1] - starting[0]

    # normalize everything for fit again in the original dimension
    starting /= extension
    amplitude /= extension

    # bring everything to the original square
    starting = (x if horizontal else y) + starting * (w if horizontal else h)
    amplitude = amplitude * (w if horizontal else h)

    # create each 4-tuple for each new block
    results = [(s, y, a, h) if horizontal else (x, s, w, a)
               for s, a in zip(starting, amplitude)]
    return results
Split the given rectangle into n segments whose proportions are specified along the given axis. If a gap is inserted, the segments will be separated by a certain amount of space, retaining the relative proportion between them. A gap of 1 corresponds to a plot that is half void, and the remaining half space is proportionally divided among the pieces.
_split_rect
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
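Ignoring the gap machinery, the geometric core of `_split_rect` reduces to mapping normalized edges onto the rectangle's extent. A gap-free sketch (with a hypothetical `split_rect` name, not the module's function):

```python
import numpy as np

def split_rect(x, y, w, h, proportions, horizontal=True):
    """Split a rectangle into slices with the given relative sizes (no gaps)."""
    prop = np.asarray(proportions, dtype=float)
    edges = np.r_[0, np.cumsum(prop)] / prop.sum()   # normalized left edges
    starts, sizes = edges[:-1], np.diff(edges)
    if horizontal:
        return [(x + s * w, y, a * w, h) for s, a in zip(starts, sizes)]
    return [(x, y + s * h, w, a * h) for s, a in zip(starts, sizes)]

parts = split_rect(0, 0, 2, 1, [1, 3])  # widths in ratio 1:3 over width 2
```

For a rectangle of width 2 split 1:3, the first slice is 0.5 wide and the second starts at x = 0.5 with width 1.5; the real function additionally shifts each start by the accumulated gaps and renormalizes.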
def _reduce_dict(count_dict, partial_key):
    """
    Make a partial sum on a counter dict.  Given a match for the beginning
    of the category key, it will sum the corresponding values.
    """
    L = len(partial_key)
    count = sum(v for k, v in count_dict.items() if k[:L] == partial_key)
    return count
Make a partial sum on a counter dict. Given a match for the beginning of the category key, it will sum the corresponding values.
_reduce_dict
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
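The prefix-summing pattern used by `_reduce_dict` is easy to demonstrate standalone (the `partial_sum` name below is illustrative): every value whose tuple key starts with the given prefix contributes to the total.

```python
def partial_sum(count_dict, partial_key):
    """Sum every value whose tuple key starts with ``partial_key``."""
    L = len(partial_key)
    return sum(v for k, v in count_dict.items() if k[:L] == partial_key)

counts = {('a', 'x'): 1, ('a', 'y'): 2, ('b', 'x'): 3}
print(partial_sum(counts, ('a',)))  # → 3
```

An empty prefix matches every key, so `partial_sum(counts, ())` returns the grand total; this is what lets the mosaic tiling ask for marginal counts at any level of the key hierarchy.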
def _key_splitting(rect_dict, keys, values, key_subset, horizontal, gap):
    """
    Given a dictionary where each entry is a rectangle, and a list of keys
    and values (counts of elements in each category), split each rect
    accordingly, as long as the key starts with the tuple key_subset.
    The other keys are returned without modification.
    """
    result = {}
    L = len(key_subset)
    for name, (x, y, w, h) in rect_dict.items():
        if key_subset == name[:L]:
            # split based on the values given
            divisions = _split_rect(x, y, w, h, values, horizontal, gap)
            for key, rect in zip(keys, divisions):
                result[name + (key,)] = rect
        else:
            result[name] = (x, y, w, h)
    return result
Given a dictionary where each entry is a rectangle, and a list of keys and values (counts of elements in each category), split each rect accordingly, as long as the key starts with the tuple key_subset. The other keys are returned without modification.
_key_splitting
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def _tuplify(obj):
    """Convert an object into a tuple of strings (even if it is not
    iterable, like a single integer; a plain string is kept whole rather
    than split into characters)
    """
    if np.iterable(obj) and not isinstance(obj, str):
        res = tuple(str(o) for o in obj)
    else:
        res = (str(obj),)
    return res
Convert an object into a tuple of strings (even if it is not iterable, like a single integer; a plain string is kept whole rather than split into characters).
_tuplify
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def _categories_level(keys):
    """Use an ordered dict to implement a simple ordered set.

    Return each level of each category:
    [[key_1_level_1, key_2_level_1], [key_1_level_2, key_2_level_2]]
    """
    res = []
    for i in zip(*(keys)):
        tuplefied = _tuplify(i)
        res.append(list({j: None for j in tuplefied}))
    return res
Use an ordered dict to implement a simple ordered set. Return each level of each category: [[key_1_level_1, key_2_level_1], [key_1_level_2, key_2_level_2]]
_categories_level
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
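The ordered-set trick in `_categories_level` relies on Python dicts preserving insertion order, so `dict.fromkeys` de-duplicates while keeping first-seen order. A minimal sketch (hypothetical `levels` helper, assuming string keys so the `_tuplify` step can be skipped):

```python
def levels(keys):
    # one ordered, de-duplicated list of values per tuple position;
    # dict.fromkeys keeps first-seen order, acting as an ordered set
    return [list(dict.fromkeys(level)) for level in zip(*keys)]

keys = [('a', 'x'), ('a', 'y'), ('b', 'x')]
print(levels(keys))  # → [['a', 'b'], ['x', 'y']]
```

`zip(*keys)` transposes the key tuples, so each output list collects the distinct values seen at one position of the key hierarchy, in the order they first appear.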
def _hierarchical_split(count_dict, horizontal=True, gap=0.05):
    """
    Split a square in a hierarchical way given a contingency table.

    Hierarchically split the unit square in alternate directions
    in proportion to the subdivision contained in the contingency table
    count_dict.  This is the function that actually performs the tiling
    for the creation of the mosaic plot.  If the gap array has been
    specified it will insert a corresponding amount of space (proportional
    to the unit length), while retaining the proportionality of the tiles.

    Parameters
    ----------
    count_dict : dict
        Dictionary containing the contingency table.
        Each category should contain a non-negative number
        with a tuple as index.  It expects all the combinations
        of keys to be represented; if that is not true, it will
        automatically consider the missing values as 0
    horizontal : bool
        The starting direction of the split (by default along
        the horizontal axis)
    gap : float or array of floats
        The list of gaps to be applied on each subdivision.
        If the length of the given array is less than the number of
        subcategories (or if it's a single number) it will extend
        it with exponentially decreasing gaps

    Returns
    -------
    base_rect : dict
        A dictionary containing the result of the split.
        To each key is associated a 4-tuple of coordinates
        that are required to create the corresponding rectangle:

            0 - x position of the lower left corner
            1 - y position of the lower left corner
            2 - width of the rectangle
            3 - height of the rectangle
    """
    # this is the unit square that we are going to divide
    base_rect = dict([(tuple(), (0, 0, 1, 1))])
    # get the list of each possible value for each level
    categories_levels = _categories_level(list(count_dict.keys()))
    L = len(categories_levels)

    # recreate the gaps vector starting from an int
    if not np.iterable(gap):
        gap = [gap / 1.5 ** idx for idx in range(L)]
    # extend if it's too short
    if len(gap) < L:
        last = gap[-1]
        gap = list(gap) + [last / 1.5 ** idx for idx in range(L)]
    # trim if it's too long
    gap = gap[:L]
    # put the count dictionary in order for the keys
    # this will allow some code simplification
    count_ordered = {k: count_dict[k]
                     for k in list(product(*categories_levels))}
    for cat_idx, cat_enum in enumerate(categories_levels):
        # get the partial key up to the actual level
        base_keys = list(product(*categories_levels[:cat_idx]))
        for key in base_keys:
            # for each partial key and each value calculate how many
            # observations we have in the counting dictionary
            part_count = [_reduce_dict(count_ordered, key + (partial,))
                          for partial in cat_enum]
            # reduce the gap for subsequent levels
            new_gap = gap[cat_idx]
            # split the given subkeys in the rectangle dictionary
            base_rect = _key_splitting(base_rect, cat_enum, part_count, key,
                                       horizontal, new_gap)
        horizontal = not horizontal
    return base_rect
Split a square in a hierarchical way given a contingency table. Hierarchically split the unit square in alternate directions in proportion to the subdivision contained in the contingency table count_dict. This is the function that actually performs the tiling for the creation of the mosaic plot. If the gap array has been specified it will insert a corresponding amount of space (proportional to the unit length), while retaining the proportionality of the tiles. Parameters ---------- count_dict : dict Dictionary containing the contingency table. Each category should contain a non-negative number with a tuple as index. It expects all the combinations of keys to be represented; if that is not true, it will automatically consider the missing values as 0 horizontal : bool The starting direction of the split (by default along the horizontal axis) gap : float or array of floats The list of gaps to be applied on each subdivision. If the length of the given array is less than the number of subcategories (or if it's a single number) it will extend it with exponentially decreasing gaps Returns ------- base_rect : dict A dictionary containing the result of the split. To each key is associated a 4-tuple of coordinates that are required to create the corresponding rectangle: 0 - x position of the lower left corner 1 - y position of the lower left corner 2 - width of the rectangle 3 - height of the rectangle
_hierarchical_split
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
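The alternate-direction recursion in `_hierarchical_split` can be illustrated with a minimal gap-free sketch (the `tile` helper below is hypothetical, not the module's API): each level of the key tuple splits its rectangle in proportion to the partial counts, flipping the split axis at every level.

```python
def tile(counts, rect=(0, 0, 1, 1), horizontal=True):
    """Recursively split ``rect`` by the first key level of ``counts``."""
    if not next(iter(counts)):  # keys exhausted: this rect is a finished tile
        return {(): rect}
    x, y, w, h = rect
    levels = list(dict.fromkeys(k[0] for k in counts))
    totals = {lv: sum(v for k, v in counts.items() if k[0] == lv)
              for lv in levels}
    grand = sum(totals.values())
    out, start = {}, 0.0
    for lv in levels:
        frac = totals[lv] / grand
        # slice off this level's share of the rect, alternating direction
        sub = ((x + start * w, y, frac * w, h) if horizontal
               else (x, y + start * h, w, frac * h))
        rest = {k[1:]: v for k, v in counts.items() if k[0] == lv}
        for tail, r in tile(rest, sub, not horizontal).items():
            out[(lv,) + tail] = r
        start += frac
    return out

rects = tile({('a', 'x'): 1, ('a', 'y'): 1, ('b', 'x'): 2, ('b', 'y'): 2})
```

For this 2x2 table, 'a' gets the left third of the unit square (2 of 6 observations) and each column is then halved vertically; the tile areas therefore sum to the whole unit square, which is the invariant the gap handling in the real function works hard to preserve.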
def _single_hsv_to_rgb(hsv):
    """Transform a color from the hsv space to the rgb."""
    from matplotlib.colors import hsv_to_rgb
    return hsv_to_rgb(array(hsv).reshape(1, 1, 3)).reshape(3)
Transform a color from the hsv space to the rgb.
_single_hsv_to_rgb
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def _create_default_properties(data):
    """Create the default properties of the mosaic given the data.

    First it varies the color hue (first category), then the color
    saturation (second category) and then the color value (third
    category).  If a fourth category is found, it will put decoration
    on the rectangle.  Does not manage more than four levels of
    categories
    """
    categories_levels = _categories_level(list(data.keys()))
    Nlevels = len(categories_levels)
    # first level, the hue
    L = len(categories_levels[0])
    # hue = np.linspace(1.0, 0.0, L+1)[:-1]
    hue = np.linspace(0.0, 1.0, L + 2)[:-2]
    # second level, the saturation
    L = len(categories_levels[1]) if Nlevels > 1 else 1
    saturation = np.linspace(0.5, 1.0, L + 1)[:-1]
    # third level, the value
    L = len(categories_levels[2]) if Nlevels > 2 else 1
    value = np.linspace(0.5, 1.0, L + 1)[:-1]
    # fourth level, the hatch
    L = len(categories_levels[3]) if Nlevels > 3 else 1
    hatch = ['', '/', '-', '|', '+'][:L + 1]
    # convert to list and merge with the levels
    hue = lzip(list(hue), categories_levels[0])
    saturation = lzip(list(saturation),
                      categories_levels[1] if Nlevels > 1 else [''])
    value = lzip(list(value),
                 categories_levels[2] if Nlevels > 2 else [''])
    hatch = lzip(list(hatch),
                 categories_levels[3] if Nlevels > 3 else [''])
    # create the properties dictionary
    properties = {}
    for h, s, v, t in product(hue, saturation, value, hatch):
        hv, hn = h
        sv, sn = s
        vv, vn = v
        tv, tn = t
        level = (hn,) + ((sn,) if sn else tuple())
        level = level + ((vn,) if vn else tuple())
        level = level + ((tn,) if tn else tuple())
        hsv = array([hv, sv, vv])
        prop = {'color': _single_hsv_to_rgb(hsv), 'hatch': tv, 'lw': 0}
        properties[level] = prop
    return properties
Create the default properties of the mosaic given the data. First it varies the color hue (first category), then the color saturation (second category) and then the color value (third category). If a fourth category is found, it will put decoration on the rectangle. Does not manage more than four levels of categories
_create_default_properties
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def _normalize_data(data, index):
    """normalize the data to a dict with tuples of strings as keys

    right now it works with:

        0 - dictionary (or equivalent mappable)
        1 - pandas.Series with simple or hierarchical indexes
        2 - numpy.ndarrays
        3 - everything that can be converted to a numpy array
        4 - pandas.DataFrame (via the _normalize_dataframe function)
    """
    # if data is a dataframe we need to take a completely new road
    # before coming back here.  Use the hasattr to avoid importing
    # pandas explicitly
    if hasattr(data, 'pivot') and hasattr(data, 'groupby'):
        data = _normalize_dataframe(data, index)
        index = None
    # can it be used as a dictionary?
    try:
        items = list(data.items())
    except AttributeError:
        # ok, I cannot use the data as a dictionary
        # Try to convert it to a numpy array, or die trying
        data = np.asarray(data)
        temp = {}
        for idx in np.ndindex(data.shape):
            name = tuple(i for i in idx)
            temp[name] = data[idx]
        data = temp
        items = list(data.items())
    # make all the keys a tuple, even if simple numbers
    data = {_tuplify(k): v for k, v in items}
    categories_levels = _categories_level(list(data.keys()))
    # fill the void in the counting dictionary
    indexes = product(*categories_levels)
    contingency = {k: data.get(k, 0) for k in indexes}
    data = contingency
    # reorder the keys according to the order specified by the user
    # or if the index is None convert it into a simple list
    # right now it does not do any check, but can be modified in the future
    index = lrange(len(categories_levels)) if index is None else index
    contingency = {}
    for key, value in data.items():
        new_key = tuple(key[i] for i in index)
        contingency[new_key] = value
    data = contingency
    return data
normalize the data to a dict with tuples of strings as keys right now it works with: 0 - dictionary (or equivalent mappable) 1 - pandas.Series with simple or hierarchical indexes 2 - numpy.ndarrays 3 - everything that can be converted to a numpy array 4 - pandas.DataFrame (via the _normalize_dataframe function)
_normalize_data
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def _normalize_dataframe(dataframe, index):
    """Take a pandas DataFrame and count the elements present in the
    given columns; return a hierarchical index on those columns
    """
    # groupby the given keys, extract the same columns and count the
    # elements, then collapse them with a mean
    data = dataframe[index].dropna()
    grouped = data.groupby(index, sort=False, observed=False)
    counted = grouped[index].count()
    averaged = counted.mean(axis=1)
    # Fill empty missing with 0, see GH5639
    averaged = averaged.fillna(0.0)
    return averaged
Take a pandas DataFrame and count the elements present in the given columns; return a hierarchical index on those columns
_normalize_dataframe
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
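For rows without missing values, what `_normalize_dataframe` computes amounts to counting each combination of the chosen columns. The original counts every column within each group and averages the counts; `.size()` is assumed below as the simpler equivalent for that complete-rows case, with illustrative data.

```python
import pandas as pd

df = pd.DataFrame({'gender': ['m', 'm', 'f', 'f', 'f'],
                   'smokes': ['y', 'n', 'n', 'n', 'y']})
# count occurrences of each (gender, smokes) combination;
# the result is a Series with a hierarchical (MultiIndex) index
counts = df.groupby(['gender', 'smokes']).size()
print(counts[('f', 'n')])  # → 2
```

The hierarchical index on `counts` is exactly the tuple-keyed shape that `_normalize_data` then turns into the contingency dictionary driving the tiling.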
def _statistical_coloring(data):
    """evaluate colors from the independence properties of the matrix
    It will encounter problems if one category has all zeros
    """
    data = _normalize_data(data, None)
    categories_levels = _categories_level(list(data.keys()))
    Nlevels = len(categories_levels)
    total = 1.0 * sum(v for v in data.values())
    # count the proportion of observations
    # for each level that has the given name
    # at each level
    levels_count = []
    for level_idx in range(Nlevels):
        proportion = {}
        for level in categories_levels[level_idx]:
            proportion[level] = 0.0
            for key, value in data.items():
                if level == key[level_idx]:
                    proportion[level] += value
            proportion[level] /= total
        levels_count.append(proportion)
    # for each key I obtain the expected value
    # and its standard deviation from a binomial distribution
    # under the hypothesis of independence
    expected = {}
    for key, value in data.items():
        base = 1.0
        for i, k in enumerate(key):
            base *= levels_count[i][k]
        expected[key] = base * total, np.sqrt(total * base * (1.0 - base))
    # now we have the standard deviation of distance from the
    # expected value for each tile.  We create the colors from this
    sigmas = {k: (data[k] - m) / s for k, (m, s) in expected.items()}
    props = {}
    for key, dev in sigmas.items():
        red = 0.0 if dev < 0 else (dev / (1 + dev))
        blue = 0.0 if dev > 0 else (dev / (-1 + dev))
        green = (1.0 - red - blue) / 2.0
        hatch = 'x' if dev > 2 else 'o' if dev < -2 else ''
        props[key] = {'color': [red, green, blue], 'hatch': hatch}
    return props
Evaluate colors from the independence properties of the matrix. It will encounter problems if one category has all zeros
_statistical_coloring
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
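The binomial deviation behind `_statistical_coloring` can be worked through numerically for a hypothetical 2x2 table: under independence, a cell's expected proportion is the product of its marginal proportions, and the tile color is driven by the standardized deviation from that expectation.

```python
import math

counts = {('a', 'x'): 20, ('a', 'y'): 10, ('b', 'x'): 10, ('b', 'y'): 20}
total = sum(counts.values())  # 60 observations

# marginal proportions per level (rows a/b and columns x/y each hold 30)
p_row = {'a': 30 / 60, 'b': 30 / 60}
p_col = {'x': 30 / 60, 'y': 30 / 60}

sigmas = {}
for (r, c), n in counts.items():
    p = p_row[r] * p_col[c]              # expected cell proportion: 0.25
    mean = p * total                     # expected count: 15
    sd = math.sqrt(total * p * (1 - p))  # binomial standard deviation
    sigmas[(r, c)] = (n - mean) / sd
```

Each diagonal cell sits about 1.5 standard deviations above its expectation and each off-diagonal cell the same amount below, so the coloring stays in the "unremarkable" band; hatching only appears once a cell crosses the 2-sigma threshold.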
def _create_labels(rects, horizontal, ax, rotation):
    """find the position of the label for each value of each category

    right now it supports only up to four categories

    ax: the axis on which the label should be applied
    rotation: the rotation list for each side
    """
    categories = _categories_level(list(rects.keys()))
    if len(categories) > 4:
        msg = ("maximum of 4 levels supported for axes labeling... and 4 "
               "is already a lot of levels, are you sure you need them all?")
        raise ValueError(msg)
    labels = {}
    # keep it fixed as will be used a lot of times
    items = list(rects.items())
    vertical = not horizontal

    # get the axis ticks and labels locator to put the correct values!
    ax2 = ax.twinx()
    ax3 = ax.twiny()
    # this is the order of execution for horizontal disposition
    ticks_pos = [ax.set_xticks, ax.set_yticks, ax3.set_xticks,
                 ax2.set_yticks]
    ticks_lab = [ax.set_xticklabels, ax.set_yticklabels,
                 ax3.set_xticklabels, ax2.set_yticklabels]
    # for the vertical one, rotate it by one
    if vertical:
        ticks_pos = ticks_pos[1:] + ticks_pos[:1]
        ticks_lab = ticks_lab[1:] + ticks_lab[:1]
    # clean them
    for pos, lab in zip(ticks_pos, ticks_lab):
        pos([])
        lab([])
    # for each level, for each value in the level, take the mean of all
    # the sublevels that correspond to that partial key
    for level_idx, level in enumerate(categories):
        # this dictionary keeps the labels only for this level
        level_ticks = dict()
        for value in level:
            # to which level should it refer to get the preceding
            # values of labels? it's rather a tricky question...
            # this is dependent on the side.  It's a very crude management
            # but I couldn't think of a more general way...
            if horizontal:
                if level_idx == 3:
                    index_select = [-1, -1, -1]
                else:
                    index_select = [+0, -1, -1]
            else:
                if level_idx == 3:
                    index_select = [+0, -1, +0]
                else:
                    index_select = [-1, -1, -1]
            # now I create the base key name and append the current value
            # it will search on all the rects to find the corresponding one
            # and use them to evaluate the mean position
            basekey = tuple(categories[i][index_select[i]]
                            for i in range(level_idx))
            basekey = basekey + (value,)
            subset = {k: v for k, v in items
                      if basekey == k[:level_idx + 1]}
            # now I extract the center of all the tiles and make a weighted
            # mean of all these centers on the area of the tile
            # this should give me the (more or less) correct position
            # of the center of the category
            vals = list(subset.values())
            W = sum(w * h for (x, y, w, h) in vals)
            x_lab = sum(_get_position(x, w, h, W) for (x, y, w, h) in vals)
            y_lab = sum(_get_position(y, h, w, W) for (x, y, w, h) in vals)
            # now based on the ordering, select which position to keep
            # needs to be written in a more general form; are 4 levels
            # enough?  should give also the horizontal and vertical alignment
            side = (level_idx + vertical) % 4
            level_ticks[value] = y_lab if side % 2 else x_lab
        # now we add the labels of this level to the correct axis
        ticks_pos[level_idx](list(level_ticks.values()))
        ticks_lab[level_idx](list(level_ticks.keys()),
                             rotation=rotation[level_idx])
    return labels
Find the position of the label for each value of each category. Right now it supports only up to four categories. ax: the axis on which the label should be applied rotation: the rotation list for each side
_create_labels
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
def mosaic(data, index=None, ax=None, horizontal=True, gap=0.005,
           properties=lambda key: None, labelizer=None, title='',
           statistic=False, axes_label=True, label_rotation=0.0):
    """Create a mosaic plot from a contingency table.

    It allows one to visualize multivariate categorical data in a
    rigorous and informative way.

    Parameters
    ----------
    data : {dict, Series, ndarray, DataFrame}
        The contingency table that contains the data.  Each category
        should contain a non-negative number with a tuple as index.  It
        expects all combinations of keys to be represented; if that is
        not true, the missing values will automatically be considered 0.
        The order of the keys will be the same as the one of insertion.
        If a dict of a Series (or any other dict-like object) is used,
        it will take the keys as labels.  If a np.ndarray is provided,
        it will generate simple numerical labels.
    index : list, optional
        Gives the preferred order for the category ordering.  If not
        specified will default to the given order.  It does not support
        named indexes for hierarchical Series.  If a DataFrame is
        provided, it expects a list with the names of the columns.
    ax : Axes, optional
        The graph where to display the mosaic.  If not given, will
        create a new figure
    horizontal : bool, optional
        The starting direction of the split (by default along the
        horizontal axis)
    gap : {float, sequence[float]}
        The list of gaps to be applied on each subdivision.  If the
        length of the given array is less than the number of
        subcategories (or if it's a single number) it will extend it
        with exponentially decreasing gaps
    properties : dict[str, callable], optional
        A function that, for each tile in the mosaic, takes the key of
        the tile and returns the dictionary of properties of the
        generated Rectangle, like color, hatch or similar.  A default
        properties set will be provided for the keys whose color has
        not been defined, and will use color variation to help visually
        separate the various categories.  It should return None to
        indicate that it should use the default property for the tile.
        A dictionary of the properties for each key can be passed, and
        it will be internally converted to the correct function
    labelizer : dict[str, callable], optional
        A function that generates the text to display at the center of
        each tile based on the key of that tile
    title : str, optional
        The title of the axis
    statistic : bool, optional
        If True will use a crude statistical model to give colors to
        the plot.  If the tile has a constraint that is more than 2
        standard deviations from the expected value under the
        independence hypothesis, it will go from green to red (for
        positive deviations, blue otherwise) and will acquire hatching
        when it crosses the 3 sigma.
    axes_label : bool, optional
        Show the name of each value of each category on the axis
        (default) or hide them.
    label_rotation : {float, list[float]}
        The rotation of the axis label (if present).  If a list is
        given, each axis can have a different rotation

    Returns
    -------
    fig : Figure
        The figure containing the plot.
    rects : dict
        A dictionary that has the same keys as the original dataset and
        holds a reference to the coordinates of the tile and the
        Rectangle that represents it.

    References
    ----------
    A Brief History of the Mosaic Display
        Michael Friendly, York University, Psychology Department
        Journal of Computational and Graphical Statistics, 2001

    Mosaic Displays for Loglinear Models.
        Michael Friendly, York University, Psychology Department
        Proceedings of the Statistical Graphics Section, 1992, 61-68.

    Mosaic displays for multi-way contingency tables.
        Michael Friendly, York University, Psychology Department
        Journal of the American Statistical Association
        March 1994, Vol. 89, No. 425, Theory and Methods

    Examples
    --------
    >>> import numpy as np
    >>> import pandas as pd
    >>> import matplotlib.pyplot as plt
    >>> from statsmodels.graphics.mosaicplot import mosaic

    The most simple use case is to take a dictionary and plot the result

    >>> data = {'a': 10, 'b': 15, 'c': 16}
    >>> mosaic(data, title='basic dictionary')
    >>> plt.show()

    A more useful example is given by a dictionary with multiple indices.
    In this case we use a wider gap for a better visual separation of
    the resulting plot

    >>> data = {('a', 'b'): 1, ('a', 'c'): 2, ('d', 'b'): 3, ('d', 'c'): 4}
    >>> mosaic(data, gap=0.05, title='complete dictionary')
    >>> plt.show()

    The same data can be given as a simple or hierarchical indexed Series

    >>> rand = np.random.random
    >>> from itertools import product
    >>> tuples = list(product(['bar', 'baz', 'foo', 'qux'], ['one', 'two']))
    >>> index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
    >>> data = pd.Series(rand(8), index=index)
    >>> mosaic(data, title='hierarchical index series')
    >>> plt.show()

    The third accepted data structure is the np array, for which a very
    simple index will be created.

    >>> rand = np.random.random
    >>> data = 1+rand((2,2))
    >>> mosaic(data, title='random non-labeled array')
    >>> plt.show()

    If you need to modify the labeling and the coloring you can give a
    function to create the labels and one with the graphical properties
    starting from the key tuple

    >>> data = {'a': 10, 'b': 15, 'c': 16}
    >>> props = lambda key: {'color': 'r' if 'a' in key else 'gray'}
    >>> labelizer = lambda k: {('a',): 'first', ('b',): 'second',
    ...                        ('c',): 'third'}[k]
    >>> mosaic(data, title='colored dictionary', properties=props,
    ...        labelizer=labelizer)
    >>> plt.show()

    Using a DataFrame as source, specifying the name of the columns of
    interest

    >>> gender = ['male', 'male', 'male', 'female', 'female', 'female']
    >>> pet = ['cat', 'dog', 'dog', 'cat', 'dog', 'cat']
    >>> data = pd.DataFrame({'gender': gender, 'pet': pet})
    >>> mosaic(data, ['pet', 'gender'], title='DataFrame as Source')
    >>> plt.show()

    .. plot :: plots/graphics_mosaicplot_mosaic.py
    """
    if isinstance(data, DataFrame) and index is None:
        raise ValueError("You must pass an index if data is a DataFrame."
                         " See examples.")

    from matplotlib.patches import Rectangle

    fig, ax = utils.create_mpl_ax(ax)
    # normalize the data to a dict with tuple of strings as keys
    data = _normalize_data(data, index)
    # split the graph into different areas
    rects = _hierarchical_split(data, horizontal=horizontal, gap=gap)
    # if there is no specified way to create the labels
    # create a default one
    if labelizer is None:
        def labelizer(k):
            return "\n".join(k)
    if statistic:
        default_props = _statistical_coloring(data)
    else:
        default_props = _create_default_properties(data)
    if isinstance(properties, dict):
        color_dict = properties

        def properties(key):
            return color_dict.get(key, None)
    for k, v in rects.items():
        # create each rectangle and put a label on it
        x, y, w, h = v
        conf = properties(k)
        props = conf if conf else default_props[k]
        text = labelizer(k)
        Rect = Rectangle((x, y), w, h, label=text, **props)
        ax.add_patch(Rect)
        ax.text(x + w / 2, y + h / 2, text, ha='center',
                va='center', size='smaller')
    # creating the labels on the axis, or clearing them
    if axes_label:
        if np.iterable(label_rotation):
            rotation = label_rotation
        else:
            rotation = [label_rotation] * 4
        _create_labels(rects, horizontal, ax, rotation)
    else:
        ax.set_xticks([])
        ax.set_xticklabels([])
        ax.set_yticks([])
        ax.set_yticklabels([])
    ax.set_title(title)
    return fig, rects
Create a mosaic plot from a contingency table.

    It allows one to visualize multivariate categorical data in a
    rigorous and informative way.

    Parameters
    ----------
    data : {dict, Series, ndarray, DataFrame}
        The contingency table that contains the data.  Each category
        should contain a non-negative number with a tuple as index.  It
        expects all combinations of keys to be represented; if that is
        not true, the missing values will automatically be considered 0.
        The order of the keys will be the same as the one of insertion.
        If a dict of a Series (or any other dict-like object) is used,
        it will take the keys as labels.  If a np.ndarray is provided,
        it will generate simple numerical labels.
    index : list, optional
        Gives the preferred order for the category ordering.  If not
        specified will default to the given order.  It does not support
        named indexes for hierarchical Series.  If a DataFrame is
        provided, it expects a list with the names of the columns.
    ax : Axes, optional
        The graph where to display the mosaic.  If not given, will
        create a new figure
    horizontal : bool, optional
        The starting direction of the split (by default along the
        horizontal axis)
    gap : {float, sequence[float]}
        The list of gaps to be applied on each subdivision.  If the
        length of the given array is less than the number of
        subcategories (or if it's a single number) it will extend it
        with exponentially decreasing gaps
    properties : dict[str, callable], optional
        A function that, for each tile in the mosaic, takes the key of
        the tile and returns the dictionary of properties of the
        generated Rectangle, like color, hatch or similar.  A default
        properties set will be provided for the keys whose color has
        not been defined, and will use color variation to help visually
        separate the various categories.  It should return None to
        indicate that it should use the default property for the tile.
        A dictionary of the properties for each key can be passed, and
        it will be internally converted to the correct function
    labelizer : dict[str, callable], optional
        A function that generates the text to display at the center of
        each tile based on the key of that tile
    title : str, optional
        The title of the axis
    statistic : bool, optional
        If True will use a crude statistical model to give colors to
        the plot.  If the tile has a constraint that is more than 2
        standard deviations from the expected value under the
        independence hypothesis, it will go from green to red (for
        positive deviations, blue otherwise) and will acquire hatching
        when it crosses the 3 sigma.
    axes_label : bool, optional
        Show the name of each value of each category on the axis
        (default) or hide them.
    label_rotation : {float, list[float]}
        The rotation of the axis label (if present).  If a list is
        given, each axis can have a different rotation

    Returns
    -------
    fig : Figure
        The figure containing the plot.
    rects : dict
        A dictionary that has the same keys as the original dataset and
        holds a reference to the coordinates of the tile and the
        Rectangle that represents it.

    References
    ----------
    A Brief History of the Mosaic Display
        Michael Friendly, York University, Psychology Department
        Journal of Computational and Graphical Statistics, 2001

    Mosaic Displays for Loglinear Models.
        Michael Friendly, York University, Psychology Department
        Proceedings of the Statistical Graphics Section, 1992, 61-68.

    Mosaic displays for multi-way contingency tables.
        Michael Friendly, York University, Psychology Department
        Journal of the American Statistical Association
        March 1994, Vol. 89, No. 425, Theory and Methods

    Examples
    --------
    >>> import numpy as np
    >>> import pandas as pd
    >>> import matplotlib.pyplot as plt
    >>> from statsmodels.graphics.mosaicplot import mosaic

    The most simple use case is to take a dictionary and plot the result

    >>> data = {'a': 10, 'b': 15, 'c': 16}
    >>> mosaic(data, title='basic dictionary')
    >>> plt.show()

    A more useful example is given by a dictionary with multiple indices.
    In this case we use a wider gap for a better visual separation of
    the resulting plot

    >>> data = {('a', 'b'): 1, ('a', 'c'): 2, ('d', 'b'): 3, ('d', 'c'): 4}
    >>> mosaic(data, gap=0.05, title='complete dictionary')
    >>> plt.show()

    The same data can be given as a simple or hierarchical indexed Series

    >>> rand = np.random.random
    >>> from itertools import product
    >>> tuples = list(product(['bar', 'baz', 'foo', 'qux'], ['one', 'two']))
    >>> index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
    >>> data = pd.Series(rand(8), index=index)
    >>> mosaic(data, title='hierarchical index series')
    >>> plt.show()

    The third accepted data structure is the np array, for which a very
    simple index will be created.

    >>> rand = np.random.random
    >>> data = 1+rand((2,2))
    >>> mosaic(data, title='random non-labeled array')
    >>> plt.show()

    If you need to modify the labeling and the coloring you can give a
    function to create the labels and one with the graphical properties
    starting from the key tuple

    >>> data = {'a': 10, 'b': 15, 'c': 16}
    >>> props = lambda key: {'color': 'r' if 'a' in key else 'gray'}
    >>> labelizer = lambda k: {('a',): 'first', ('b',): 'second',
    ...                        ('c',): 'third'}[k]
    >>> mosaic(data, title='colored dictionary', properties=props,
    ...        labelizer=labelizer)
    >>> plt.show()

    Using a DataFrame as source, specifying the name of the columns of
    interest

    >>> gender = ['male', 'male', 'male', 'female', 'female', 'female']
    >>> pet = ['cat', 'dog', 'dog', 'cat', 'dog', 'cat']
    >>> data = pd.DataFrame({'gender': gender, 'pet': pet})
    >>> mosaic(data, ['pet', 'gender'], title='DataFrame as Source')
    >>> plt.show()

    .. plot :: plots/graphics_mosaicplot_mosaic.py
mosaic
python
statsmodels/statsmodels
statsmodels/graphics/mosaicplot.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/mosaicplot.py
BSD-3-Clause
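The core geometry behind `mosaic` is simple: at each split, tiles get a width proportional to their marginal count, with small gaps between them. A pure-Python sketch of the first (horizontal) split — an illustration of the idea only, not the statsmodels `_hierarchical_split` internals (which also extend the gap sequence recursively):

```python
# Sketch of the first horizontal split in a mosaic plot: each
# top-level category gets a width proportional to its total count,
# with a fixed gap between adjacent tiles.

def first_split(counts, gap=0.0):
    """Return {key: (x, width)} for the top-level split of a unit axis."""
    total = sum(counts.values())
    n = len(counts)
    usable = 1.0 - gap * (n - 1)  # width left after inserting the gaps
    tiles, x = {}, 0.0
    for key, value in counts.items():
        width = usable * value / total
        tiles[key] = (x, width)
        x += width + gap
    return tiles

tiles = first_split({'a': 10, 'b': 15, 'c': 16}, gap=0.05)
```

With `gap=0` the widths sum to exactly 1; each nested category would then be split the same way along the other axis.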
def mean_diff_plot(
    m1,
    m2,
    sd_limit=1.96,
    ax=None,
    scatter_kwds=None,
    mean_line_kwds=None,
    limit_lines_kwds=None,
):
    """
    Construct a Tukey/Bland-Altman Mean Difference Plot.

    Tukey's Mean Difference Plot (also known as a Bland-Altman plot) is
    a graphical method to analyze the differences between two methods
    of measurement.  The mean of the measures is plotted against their
    difference.

    For more information see
    https://en.wikipedia.org/wiki/Bland-Altman_plot

    Parameters
    ----------
    m1 : array_like
        A 1-d array.
    m2 : array_like
        A 1-d array.
    sd_limit : float
        The limit of agreements expressed in terms of the standard
        deviation of the differences.  If `md` is the mean of the
        differences, and `sd` is the standard deviation of those
        differences, then the limits of agreement that will be plotted
        are md +/- sd_limit * sd.
        The default of 1.96 will produce 95% confidence intervals for
        the means of the differences.  If sd_limit = 0, no limits will
        be plotted, and the ylimit of the plot defaults to 3 standard
        deviations on either side of the mean.
    ax : AxesSubplot
        If `ax` is None, then a figure is created.  If an axis instance
        is given, the mean difference plot is drawn on the axis.
    scatter_kwds : dict
        Options to style the scatter plot.  Accepts any keywords for
        the matplotlib Axes.scatter plotting method
    mean_line_kwds : dict
        Options to style the mean line.  Accepts any keywords for the
        matplotlib Axes.axhline plotting method
    limit_lines_kwds : dict
        Options to style the limit lines.  Accepts any keywords for the
        matplotlib Axes.axhline plotting method

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.

    References
    ----------
    Bland JM, Altman DG (1986). "Statistical methods for assessing
    agreement between two methods of clinical measurement"

    Examples
    --------

    Load relevant libraries.

    >>> import statsmodels.api as sm
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt

    Making a mean difference plot.

    >>> # Seed the random number generator.
    >>> # This ensures that the results below are reproducible.
    >>> np.random.seed(9999)
    >>> m1 = np.random.random(20)
    >>> m2 = np.random.random(20)
    >>> f, ax = plt.subplots(1, figsize = (8,5))
    >>> sm.graphics.mean_diff_plot(m1, m2, ax = ax)
    >>> plt.show()

    .. plot:: plots/graphics-mean_diff_plot.py
    """
    fig, ax = utils.create_mpl_ax(ax)
    if len(m1) != len(m2):
        raise ValueError("m1 does not have the same length as m2.")
    if sd_limit < 0:
        raise ValueError(f"sd_limit ({sd_limit}) is less than 0.")

    means = np.mean([m1, m2], axis=0)
    diffs = m1 - m2
    mean_diff = np.mean(diffs)
    std_diff = np.std(diffs, axis=0)

    scatter_kwds = scatter_kwds or {}
    if "s" not in scatter_kwds:
        scatter_kwds["s"] = 20
    mean_line_kwds = mean_line_kwds or {}
    limit_lines_kwds = limit_lines_kwds or {}
    for kwds in [mean_line_kwds, limit_lines_kwds]:
        if "color" not in kwds:
            kwds["color"] = "gray"
        if "linewidth" not in kwds:
            kwds["linewidth"] = 1
    if "linestyle" not in mean_line_kwds:
        mean_line_kwds["linestyle"] = "--"
    if "linestyle" not in limit_lines_kwds:
        limit_lines_kwds["linestyle"] = ":"

    ax.scatter(means, diffs, **scatter_kwds)  # Plot the means against the diffs.
    ax.axhline(mean_diff, **mean_line_kwds)  # draw mean line.

    # Annotate mean line with mean difference.
    ax.annotate(
        f"mean diff:\n{mean_diff:0.3g}",
        xy=(0.99, 0.5),
        horizontalalignment="right",
        verticalalignment="center",
        fontsize=14,
        xycoords="axes fraction",
    )

    if sd_limit > 0:
        half_ylim = (1.5 * sd_limit) * std_diff
        ax.set_ylim(mean_diff - half_ylim, mean_diff + half_ylim)
        limit_of_agreement = sd_limit * std_diff
        lower = mean_diff - limit_of_agreement
        upper = mean_diff + limit_of_agreement
        for lim in [lower, upper]:
            ax.axhline(lim, **limit_lines_kwds)
        ax.annotate(
            f"-{sd_limit} SD: {lower:0.2g}",
            xy=(0.99, 0.07),
            horizontalalignment="right",
            verticalalignment="bottom",
            fontsize=14,
            xycoords="axes fraction",
        )
        ax.annotate(
            f"+{sd_limit} SD: {upper:0.2g}",
            xy=(0.99, 0.92),
            horizontalalignment="right",
            fontsize=14,
            xycoords="axes fraction",
        )
    elif sd_limit == 0:
        half_ylim = 3 * std_diff
        ax.set_ylim(mean_diff - half_ylim, mean_diff + half_ylim)

    ax.set_ylabel("Difference", fontsize=15)
    ax.set_xlabel("Means", fontsize=15)
    ax.tick_params(labelsize=13)
    fig.tight_layout()
    return fig
Construct a Tukey/Bland-Altman Mean Difference Plot.

    Tukey's Mean Difference Plot (also known as a Bland-Altman plot) is
    a graphical method to analyze the differences between two methods
    of measurement.  The mean of the measures is plotted against their
    difference.

    For more information see
    https://en.wikipedia.org/wiki/Bland-Altman_plot

    Parameters
    ----------
    m1 : array_like
        A 1-d array.
    m2 : array_like
        A 1-d array.
    sd_limit : float
        The limit of agreements expressed in terms of the standard
        deviation of the differences.  If `md` is the mean of the
        differences, and `sd` is the standard deviation of those
        differences, then the limits of agreement that will be plotted
        are md +/- sd_limit * sd.
        The default of 1.96 will produce 95% confidence intervals for
        the means of the differences.  If sd_limit = 0, no limits will
        be plotted, and the ylimit of the plot defaults to 3 standard
        deviations on either side of the mean.
    ax : AxesSubplot
        If `ax` is None, then a figure is created.  If an axis instance
        is given, the mean difference plot is drawn on the axis.
    scatter_kwds : dict
        Options to style the scatter plot.  Accepts any keywords for
        the matplotlib Axes.scatter plotting method
    mean_line_kwds : dict
        Options to style the mean line.  Accepts any keywords for the
        matplotlib Axes.axhline plotting method
    limit_lines_kwds : dict
        Options to style the limit lines.  Accepts any keywords for the
        matplotlib Axes.axhline plotting method

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.

    References
    ----------
    Bland JM, Altman DG (1986). "Statistical methods for assessing
    agreement between two methods of clinical measurement"

    Examples
    --------

    Load relevant libraries.

    >>> import statsmodels.api as sm
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt

    Making a mean difference plot.

    >>> # Seed the random number generator.
    >>> # This ensures that the results below are reproducible.
    >>> np.random.seed(9999)
    >>> m1 = np.random.random(20)
    >>> m2 = np.random.random(20)
    >>> f, ax = plt.subplots(1, figsize = (8,5))
    >>> sm.graphics.mean_diff_plot(m1, m2, ax = ax)
    >>> plt.show()

    .. plot:: plots/graphics-mean_diff_plot.py
mean_diff_plot
python
statsmodels/statsmodels
statsmodels/graphics/agreement.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/agreement.py
BSD-3-Clause
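The statistical content of the plot above is just three horizontal lines: the mean difference and the limits of agreement md +/- sd_limit * sd. A self-contained sketch of that arithmetic, using the population standard deviation to match `np.std(diffs)` in the function (the helper name `limits_of_agreement` is illustrative, not a statsmodels API):

```python
# Compute the quantities mean_diff_plot draws: the mean difference
# and the +/- sd_limit * sd limits of agreement.
import math

def limits_of_agreement(m1, m2, sd_limit=1.96):
    diffs = [a - b for a, b in zip(m1, m2)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # population variance, matching np.std's default ddof=0
    var = sum((d - mean_diff) ** 2 for d in diffs) / n
    sd = math.sqrt(var)
    return (mean_diff - sd_limit * sd, mean_diff, mean_diff + sd_limit * sd)

lower, center, upper = limits_of_agreement([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```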
def theoretical_percentiles(self):
    """Theoretical percentiles"""
    return plotting_pos(self.nobs, self.a)
Theoretical percentiles
theoretical_percentiles
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
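`plotting_pos(nobs, a)` produces the positions `(i - a) / (nobs - 2*a + 1)` for the i-th order statistic. A pure-Python sketch of that formula for i = 1..nobs (the function name here is illustrative; statsmodels' own helper lives in `statsmodels.tools.tools`):

```python
# Plotting positions behind `theoretical_percentiles`:
# p_i = (i - a) / (nobs - 2*a + 1), i = 1..nobs.

def plotting_positions(nobs, a=0.0):
    return [(i - a) / (nobs - 2 * a + 1) for i in range(1, nobs + 1)]

p_default = plotting_positions(4)          # a=0: i / (n + 1)
p_half = plotting_positions(4, a=0.5)      # a=0.5: (i - 0.5) / n
```

With `a=0` (the `ProbPlot` default) this reduces to `i / (nobs + 1)`; with `a=0.5` it gives the familiar `(i - 0.5) / nobs` positions.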
def theoretical_quantiles(self):
    """Theoretical quantiles"""
    try:
        return self.dist.ppf(self.theoretical_percentiles)
    except TypeError:
        msg = f"{self.dist.name} requires more parameters to compute ppf"
        raise TypeError(msg)
    except Exception as exc:
        msg = f"failed to compute the ppf of {self.dist.name}"
        raise type(exc)(msg)
Theoretical quantiles
theoretical_quantiles
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def sorted_data(self):
    """sorted data"""
    # np.sort already returns a sorted copy
    return np.sort(np.array(self.data))
sorted data
sorted_data
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def sample_quantiles(self):
    """sample quantiles"""
    if self.fit and self.loc != 0 and self.scale != 1:
        return (self.sorted_data - self.loc) / self.scale
    else:
        return self.sorted_data
sample quantiles
sample_quantiles
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def sample_percentiles(self):
    """Sample percentiles"""
    _check_for(self.dist, "cdf")
    if self._is_frozen:
        return self.dist.cdf(self.sorted_data)
    quantiles = (self.sorted_data - self.fit_params[-2]) / self.fit_params[-1]
    return self.dist.cdf(quantiles)
Sample percentiles
sample_percentiles
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def ppplot(
    self,
    xlabel=None,
    ylabel=None,
    line=None,
    other=None,
    ax=None,
    **plotkwargs,
):
    """
    Plot of the percentiles of x versus the percentiles of a distribution.

    Parameters
    ----------
    xlabel : str or None, optional
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : str or None, optional
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45": 45-degree line
        - "s": standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r": A regression line is fit
        - "q": A line is fit through the quartiles.
        - None: by default no reference line is added to the plot.

    other : ProbPlot, array_like, or None, optional
        If provided, ECDF(x) will be plotted against p(x) where x are
        sorted samples from `self`.  ECDF is an empirical cumulative
        distribution function estimated from `other` and
        p(x) = 0.5/n, 1.5/n, ..., (n-0.5)/n where n is the number of
        samples in `self`.  If an array-like object is provided, it
        will be turned into a `ProbPlot` instance using default
        parameters.  If not provided (default), `self.dist(x)` is
        plotted against p(x).
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
    """
    if other is not None:
        check_other = isinstance(other, ProbPlot)
        if not check_other:
            other = ProbPlot(other)

        p_x = self.theoretical_percentiles
        ecdf_x = ECDF(other.sample_quantiles)(self.sample_quantiles)

        fig, ax = _do_plot(p_x, ecdf_x, self.dist, ax=ax, line=line,
                           **plotkwargs)

        if xlabel is None:
            xlabel = "Probabilities of 2nd Sample"
        if ylabel is None:
            ylabel = "Probabilities of 1st Sample"
    else:
        fig, ax = _do_plot(
            self.theoretical_percentiles,
            self.sample_percentiles,
            self.dist,
            ax=ax,
            line=line,
            **plotkwargs,
        )
        if xlabel is None:
            xlabel = "Theoretical Probabilities"
        if ylabel is None:
            ylabel = "Sample Probabilities"

    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_xlim([0.0, 1.0])
    ax.set_ylim([0.0, 1.0])
    return fig
Plot of the percentiles of x versus the percentiles of a distribution.

    Parameters
    ----------
    xlabel : str or None, optional
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : str or None, optional
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45": 45-degree line
        - "s": standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r": A regression line is fit
        - "q": A line is fit through the quartiles.
        - None: by default no reference line is added to the plot.

    other : ProbPlot, array_like, or None, optional
        If provided, ECDF(x) will be plotted against p(x) where x are
        sorted samples from `self`.  ECDF is an empirical cumulative
        distribution function estimated from `other` and
        p(x) = 0.5/n, 1.5/n, ..., (n-0.5)/n where n is the number of
        samples in `self`.  If an array-like object is provided, it
        will be turned into a `ProbPlot` instance using default
        parameters.  If not provided (default), `self.dist(x)` is
        plotted against p(x).
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
ppplot
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
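The two-sample branch of `ppplot` boils down to evaluating the ECDF of one sample at the sorted values of the other. A minimal sketch with a right-continuous step ECDF (statsmodels uses its own `ECDF` class; this is just the idea):

```python
# Core of a two-sample P-P plot: ECDF of one sample evaluated at the
# sorted values of the other.
from bisect import bisect_right

def ecdf(sample):
    """Return F(v) = fraction of `sample` <= v (right-continuous step)."""
    xs = sorted(sample)
    n = len(xs)
    return lambda v: bisect_right(xs, v) / n

F = ecdf([3, 1, 2, 4])
probs = [F(x) for x in sorted([1.5, 2.5, 3.5])]  # y-values of the P-P plot
```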
def qqplot(
    self,
    xlabel=None,
    ylabel=None,
    line=None,
    other=None,
    ax=None,
    swap: bool = False,
    **plotkwargs,
):
    """
    Plot of the quantiles of x versus the quantiles/ppf of a distribution.

    Can also be used to plot against the quantiles of another `ProbPlot`
    instance.

    Parameters
    ----------
    xlabel : {None, str}
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : {None, str}
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    other : {ProbPlot, array_like, None}, optional
        If provided, the sample quantiles of this `ProbPlot` instance
        are plotted against the sample quantiles of the `other`
        `ProbPlot` instance.  The sample size of `other` must be equal
        to or larger than this `ProbPlot` instance.  If the sample size
        is larger, sample quantiles of `other` will be interpolated to
        match the sample size of this `ProbPlot` instance.  If an
        array-like object is provided, it will be turned into a
        `ProbPlot` instance using default parameters.  If not provided
        (default), the theoretical quantiles are used.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    swap : bool, optional
        Flag indicating to swap the x and y labels.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
    """
    if other is not None:
        check_other = isinstance(other, ProbPlot)
        if not check_other:
            other = ProbPlot(other)

        s_self = self.sample_quantiles
        s_other = other.sample_quantiles

        if len(s_self) > len(s_other):
            raise ValueError(
                "Sample size of `other` must be equal or "
                "larger than this `ProbPlot` instance"
            )
        elif len(s_self) < len(s_other):
            # Use quantiles of the smaller set and interpolate
            # quantiles of the larger data set
            p = plotting_pos(self.nobs, self.a)
            s_other = stats.mstats.mquantiles(s_other, p)
        fig, ax = _do_plot(
            s_other, s_self, self.dist, ax=ax, line=line, **plotkwargs
        )

        if xlabel is None:
            xlabel = "Quantiles of 2nd Sample"
        if ylabel is None:
            ylabel = "Quantiles of 1st Sample"
        if swap:
            xlabel, ylabel = ylabel, xlabel
    else:
        fig, ax = _do_plot(
            self.theoretical_quantiles,
            self.sample_quantiles,
            self.dist,
            ax=ax,
            line=line,
            **plotkwargs,
        )
        if xlabel is None:
            xlabel = "Theoretical Quantiles"
        if ylabel is None:
            ylabel = "Sample Quantiles"

    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    return fig
Plot of the quantiles of x versus the quantiles/ppf of a distribution.

    Can also be used to plot against the quantiles of another `ProbPlot`
    instance.

    Parameters
    ----------
    xlabel : {None, str}
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : {None, str}
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    other : {ProbPlot, array_like, None}, optional
        If provided, the sample quantiles of this `ProbPlot` instance
        are plotted against the sample quantiles of the `other`
        `ProbPlot` instance.  The sample size of `other` must be equal
        to or larger than this `ProbPlot` instance.  If the sample size
        is larger, sample quantiles of `other` will be interpolated to
        match the sample size of this `ProbPlot` instance.  If an
        array-like object is provided, it will be turned into a
        `ProbPlot` instance using default parameters.  If not provided
        (default), the theoretical quantiles are used.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    swap : bool, optional
        Flag indicating to swap the x and y labels.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
qqplot
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
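When the two samples differ in size, `qqplot` evaluates quantiles of the larger sample at the smaller sample's plotting positions via `stats.mstats.mquantiles`. A simpler linear-interpolation sketch of that matching step (the `quantile` helper is illustrative, not the mquantiles algorithm, which uses different plotting-position defaults):

```python
# Matching sample sizes for a two-sample Q-Q plot: interpolate the
# larger sample's order statistics at probabilities p in [0, 1].

def quantile(sorted_xs, p):
    """Linear interpolation between order statistics (type-7 style)."""
    n = len(sorted_xs)
    h = (n - 1) * p          # fractional index into the order statistics
    lo = int(h)
    hi = min(lo + 1, n - 1)
    return sorted_xs[lo] + (h - lo) * (sorted_xs[hi] - sorted_xs[lo])

big = sorted([5, 1, 4, 2, 3])
matched = [quantile(big, p) for p in (0.0, 0.5, 1.0)]
```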
def probplot(
    self,
    xlabel=None,
    ylabel=None,
    line=None,
    exceed=False,
    ax=None,
    **plotkwargs,
):
    """
    Plot of unscaled quantiles of x against the prob of a distribution.

    The x-axis is scaled linearly with the quantiles, but the
    probabilities are used to label the axis.

    Parameters
    ----------
    xlabel : {None, str}, optional
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : {None, str}, optional
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    exceed : bool, optional
        If False (default) the raw sample quantiles are plotted against
        the theoretical quantiles, showing the probability that a
        sample will not exceed a given value.  If True, the theoretical
        quantiles are flipped such that the figure displays the
        probability that a sample will exceed a given value.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
    """
    if exceed:
        fig, ax = _do_plot(
            self.theoretical_quantiles[::-1],
            self.sorted_data,
            self.dist,
            ax=ax,
            line=line,
            **plotkwargs,
        )
        if xlabel is None:
            xlabel = "Probability of Exceedance (%)"
    else:
        fig, ax = _do_plot(
            self.theoretical_quantiles,
            self.sorted_data,
            self.dist,
            ax=ax,
            line=line,
            **plotkwargs,
        )
        if xlabel is None:
            xlabel = "Non-exceedance Probability (%)"

    if ylabel is None:
        ylabel = "Sample Quantiles"

    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    _fmt_probplot_axis(ax, self.dist, self.nobs)
    return fig
Plot of unscaled quantiles of x against the prob of a distribution.

    The x-axis is scaled linearly with the quantiles, but the
    probabilities are used to label the axis.

    Parameters
    ----------
    xlabel : {None, str}, optional
        User-provided label for the x-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    ylabel : {None, str}, optional
        User-provided label for the y-axis.  If None (default), other
        values are used depending on the status of the kwarg `other`.
    line : {None, "45", "s", "r", "q"}, optional
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are
          scaled by the standard deviation of the given sample and have
          the mean added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    exceed : bool, optional
        If False (default) the raw sample quantiles are plotted against
        the theoretical quantiles, showing the probability that a
        sample will not exceed a given value.  If True, the theoretical
        quantiles are flipped such that the figure displays the
        probability that a sample will exceed a given value.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new
        figure being created.
    **plotkwargs
        Additional arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to
        which `ax` is connected.
probplot
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def qqplot(
    data,
    dist=stats.norm,
    distargs=(),
    a=0,
    loc=0,
    scale=1,
    fit=False,
    line=None,
    ax=None,
    **plotkwargs,
):
    """
    Q-Q plot of the quantiles of x versus the quantiles/ppf of a distribution.

    Can take arguments specifying the parameters for dist or fit them
    automatically. (See fit under Parameters.)

    Parameters
    ----------
    data : array_like
        A 1d data array.
    dist : callable
        Comparison distribution. The default is
        scipy.stats.distributions.norm (a standard normal).
    distargs : tuple
        A tuple of arguments passed to dist to specify it fully
        so dist.ppf may be called.
    a : float
        Offset for the plotting position of an expected order statistic, for
        example. The plotting positions are given by
        (i - a)/(nobs - 2*a + 1) for i in range(0, nobs+1)
    loc : float
        Location parameter for dist
    scale : float
        Scale parameter for dist
    fit : bool
        If fit is False, loc, scale, and distargs are passed to the
        distribution. If fit is True then the parameters for dist are fit
        automatically using dist.fit. The quantiles are formed from the
        standardized data, after subtracting the fitted loc and dividing by
        the fitted scale.
    line : {None, "45", "s", "r", "q"}
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are scaled
          by the standard deviation of the given sample and have the mean
          added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    **plotkwargs
        Additional matplotlib arguments to be passed to the `plot` command.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    scipy.stats.probplot

    Notes
    -----
    Depends on matplotlib. If `fit` is True then the parameters are fit
    using the distribution's fit() method.

    Examples
    --------
    >>> import statsmodels.api as sm
    >>> from matplotlib import pyplot as plt
    >>> data = sm.datasets.longley.load()
    >>> exog = sm.add_constant(data.exog)
    >>> mod_fit = sm.OLS(data.endog, exog).fit()
    >>> res = mod_fit.resid  # residuals
    >>> fig = sm.qqplot(res)
    >>> plt.show()

    qqplot of the residuals against quantiles of t-distribution with 4
    degrees of freedom:

    >>> import scipy.stats as stats
    >>> fig = sm.qqplot(res, stats.t, distargs=(4,))
    >>> plt.show()

    qqplot against same as above, but with mean 3 and std 10:

    >>> fig = sm.qqplot(res, stats.t, distargs=(4,), loc=3, scale=10)
    >>> plt.show()

    Automatically determine parameters for t distribution including the
    loc and scale:

    >>> fig = sm.qqplot(res, stats.t, fit=True, line="45")
    >>> plt.show()

    The following plot displays some options, follow the link to see the
    code.

    .. plot:: plots/graphics_gofplots_qqplot.py
    """
    probplot = ProbPlot(
        data, dist=dist, distargs=distargs, fit=fit, a=a, loc=loc, scale=scale
    )
    fig = probplot.qqplot(ax=ax, line=line, **plotkwargs)
    return fig
Q-Q plot of the quantiles of x versus the quantiles/ppf of a distribution.

Can take arguments specifying the parameters for dist or fit them
automatically. (See fit under Parameters.)

Parameters
----------
data : array_like
    A 1d data array.
dist : callable
    Comparison distribution. The default is
    scipy.stats.distributions.norm (a standard normal).
distargs : tuple
    A tuple of arguments passed to dist to specify it fully
    so dist.ppf may be called.
a : float
    Offset for the plotting position of an expected order statistic, for
    example. The plotting positions are given by
    (i - a)/(nobs - 2*a + 1) for i in range(0, nobs+1)
loc : float
    Location parameter for dist
scale : float
    Scale parameter for dist
fit : bool
    If fit is False, loc, scale, and distargs are passed to the
    distribution. If fit is True then the parameters for dist are fit
    automatically using dist.fit. The quantiles are formed from the
    standardized data, after subtracting the fitted loc and dividing by
    the fitted scale.
line : {None, "45", "s", "r", "q"}
    Options for the reference line to which the data is compared:

    - "45" - 45-degree line
    - "s" - standardized line, the expected order statistics are scaled
      by the standard deviation of the given sample and have the mean
      added to them
    - "r" - A regression line is fit
    - "q" - A line is fit through the quartiles.
    - None - by default no reference line is added to the plot.

ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.
**plotkwargs
    Additional matplotlib arguments to be passed to the `plot` command.

Returns
-------
Figure
    If `ax` is None, the created figure.  Otherwise the figure to which
    `ax` is connected.

See Also
--------
scipy.stats.probplot

Notes
-----
Depends on matplotlib. If `fit` is True then the parameters are fit using
the distribution's fit() method.

Examples
--------
>>> import statsmodels.api as sm
>>> from matplotlib import pyplot as plt
>>> data = sm.datasets.longley.load()
>>> exog = sm.add_constant(data.exog)
>>> mod_fit = sm.OLS(data.endog, exog).fit()
>>> res = mod_fit.resid  # residuals
>>> fig = sm.qqplot(res)
>>> plt.show()

qqplot of the residuals against quantiles of t-distribution with 4
degrees of freedom:

>>> import scipy.stats as stats
>>> fig = sm.qqplot(res, stats.t, distargs=(4,))
>>> plt.show()

qqplot against same as above, but with mean 3 and std 10:

>>> fig = sm.qqplot(res, stats.t, distargs=(4,), loc=3, scale=10)
>>> plt.show()

Automatically determine parameters for t distribution including the
loc and scale:

>>> fig = sm.qqplot(res, stats.t, fit=True, line="45")
>>> plt.show()

The following plot displays some options, follow the link to see the
code.

.. plot:: plots/graphics_gofplots_qqplot.py
qqplot
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
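The quantile pairing behind qqplot can be sketched without matplotlib or scipy: sort the data and evaluate the comparison distribution's inverse CDF at plotting positions governed by `a`, as in the docstring's formula. The helper below is an illustrative, hypothetical reimplementation (not the statsmodels code) using only the standard library, with `statistics.NormalDist.inv_cdf` standing in for `dist.ppf`:

```python
from statistics import NormalDist

def theoretical_vs_sample(data, a=0.0):
    """Pair sorted sample values with standard-normal quantiles.

    Plotting positions follow the (i - a)/(nobs - 2*a + 1) rule that
    qqplot's `a` parameter controls, for i = 1..nobs.
    """
    nobs = len(data)
    sorted_data = sorted(data)
    positions = [(i - a) / (nobs - 2 * a + 1) for i in range(1, nobs + 1)]
    theoretical = [NormalDist().inv_cdf(p) for p in positions]
    return theoretical, sorted_data
```

Plotting `theoretical` against `sorted_data` reproduces the scatter that qqplot draws; points near a straight line indicate agreement with the normal.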
def qqplot_2samples(data1, data2, xlabel=None, ylabel=None, line=None, ax=None):
    """
    Q-Q Plot of two samples' quantiles.

    Can take either two `ProbPlot` instances or two array-like objects. In the
    case of the latter, both inputs will be converted to `ProbPlot` instances
    using only the default values - so use `ProbPlot` instances if
    finer-grained control of the quantile computations is required.

    Parameters
    ----------
    data1 : {array_like, ProbPlot}
        Data to plot along x axis. If the sample sizes are unequal, the longer
        series is always plotted along the x-axis.
    data2 : {array_like, ProbPlot}
        Data to plot along y axis. Does not need to have the same number of
        observations as data 1. If the sample sizes are unequal, the longer
        series is always plotted along the x-axis.
    xlabel : {None, str}
        User-provided labels for the x-axis. If None (default),
        other values are used.
    ylabel : {None, str}
        User-provided labels for the y-axis. If None (default),
        other values are used.
    line : {None, "45", "s", "r", "q"}
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are scaled
          by the standard deviation of the given sample and have the mean
          added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - by default no reference line is added to the plot.

    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    See Also
    --------
    scipy.stats.probplot

    Notes
    -----
    1) Depends on matplotlib.
    2) If `data1` and `data2` are not `ProbPlot` instances, instances will be
       created using the default parameters. Therefore, it is recommended to
       use `ProbPlot` instances if fine-grained control is needed in the
       computation of the quantiles.

    Examples
    --------
    >>> import statsmodels.api as sm
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> from statsmodels.graphics.gofplots import qqplot_2samples
    >>> x = np.random.normal(loc=8.5, scale=2.5, size=37)
    >>> y = np.random.normal(loc=8.0, scale=3.0, size=37)
    >>> pp_x = sm.ProbPlot(x)
    >>> pp_y = sm.ProbPlot(y)
    >>> qqplot_2samples(pp_x, pp_y)
    >>> plt.show()

    .. plot:: plots/graphics_gofplots_qqplot_2samples.py

    >>> fig = qqplot_2samples(pp_x, pp_y, xlabel=None, ylabel=None,
    ...                       line=None, ax=None)
    """
    if not isinstance(data1, ProbPlot):
        data1 = ProbPlot(data1)
    if not isinstance(data2, ProbPlot):
        data2 = ProbPlot(data2)

    if data2.data.shape[0] > data1.data.shape[0]:
        fig = data1.qqplot(xlabel=xlabel, ylabel=ylabel, line=line,
                           other=data2, ax=ax)
    else:
        fig = data2.qqplot(
            xlabel=ylabel,
            ylabel=xlabel,
            line=line,
            other=data1,
            ax=ax,
            swap=True,
        )
    return fig
Q-Q Plot of two samples' quantiles.

Can take either two `ProbPlot` instances or two array-like objects. In the
case of the latter, both inputs will be converted to `ProbPlot` instances
using only the default values - so use `ProbPlot` instances if
finer-grained control of the quantile computations is required.

Parameters
----------
data1 : {array_like, ProbPlot}
    Data to plot along x axis. If the sample sizes are unequal, the longer
    series is always plotted along the x-axis.
data2 : {array_like, ProbPlot}
    Data to plot along y axis. Does not need to have the same number of
    observations as data 1. If the sample sizes are unequal, the longer
    series is always plotted along the x-axis.
xlabel : {None, str}
    User-provided labels for the x-axis. If None (default),
    other values are used.
ylabel : {None, str}
    User-provided labels for the y-axis. If None (default),
    other values are used.
line : {None, "45", "s", "r", "q"}
    Options for the reference line to which the data is compared:

    - "45" - 45-degree line
    - "s" - standardized line, the expected order statistics are scaled
      by the standard deviation of the given sample and have the mean
      added to them
    - "r" - A regression line is fit
    - "q" - A line is fit through the quartiles.
    - None - by default no reference line is added to the plot.

ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.

Returns
-------
Figure
    If `ax` is None, the created figure.  Otherwise the figure to which
    `ax` is connected.

See Also
--------
scipy.stats.probplot

Notes
-----
1) Depends on matplotlib.
2) If `data1` and `data2` are not `ProbPlot` instances, instances will be
   created using the default parameters. Therefore, it is recommended to
   use `ProbPlot` instances if fine-grained control is needed in the
   computation of the quantiles.

Examples
--------
>>> import statsmodels.api as sm
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from statsmodels.graphics.gofplots import qqplot_2samples
>>> x = np.random.normal(loc=8.5, scale=2.5, size=37)
>>> y = np.random.normal(loc=8.0, scale=3.0, size=37)
>>> pp_x = sm.ProbPlot(x)
>>> pp_y = sm.ProbPlot(y)
>>> qqplot_2samples(pp_x, pp_y)
>>> plt.show()

.. plot:: plots/graphics_gofplots_qqplot_2samples.py

>>> fig = qqplot_2samples(pp_x, pp_y, xlabel=None, ylabel=None,
...                       line=None, ax=None)
qqplot_2samples
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def qqline(ax, line, x=None, y=None, dist=None, fmt="r-", **lineoptions):
    """
    Plot a reference line for a qqplot.

    Parameters
    ----------
    ax : matplotlib axes instance
        The axes on which to plot the line
    line : str {"45","r","s","q"}
        Options for the reference line to which the data is compared:

        - "45" - 45-degree line
        - "s" - standardized line, the expected order statistics are scaled
          by the standard deviation of the given sample and have the mean
          added to them
        - "r" - A regression line is fit
        - "q" - A line is fit through the quartiles.
        - None - By default no reference line is added to the plot.

    x : ndarray
        X data for plot. Not needed if line is "45".
    y : ndarray
        Y data for plot. Not needed if line is "45".
    dist : scipy.stats.distribution
        A scipy.stats distribution, needed if line is "q".
    fmt : str, optional
        Line format string passed to `plot`.
    **lineoptions
        Additional arguments to be passed to the `plot` command.

    Notes
    -----
    There is no return value. The line is plotted on the given `ax`.

    Examples
    --------
    Import the food expenditure dataset.  Plot annual food expenditure on
    x-axis and household income on y-axis.  Use qqline to add regression
    line into the plot.

    >>> import statsmodels.api as sm
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> from statsmodels.graphics.gofplots import qqline
    >>> foodexp = sm.datasets.engel.load()
    >>> x = foodexp.exog
    >>> y = foodexp.endog
    >>> ax = plt.subplot(111)
    >>> plt.scatter(x, y)
    >>> ax.set_xlabel(foodexp.exog_name[0])
    >>> ax.set_ylabel(foodexp.endog_name)
    >>> qqline(ax, "r", x, y)
    >>> plt.show()

    .. plot:: plots/graphics_gofplots_qqplot_qqline.py
    """
    lineoptions = lineoptions.copy()
    for ls in ("-", "--", "-.", ":"):
        if ls in fmt:
            lineoptions.setdefault("linestyle", ls)
            fmt = fmt.replace(ls, "")
            break
    for marker in (
        ".", ",", "o", "v", "^", "<", ">", "1", "2", "3", "4", "8",
        "s", "p", "P", "*", "h", "H", "+", "x", "X", "D", "d", "|", "_",
    ):
        if marker in fmt:
            lineoptions.setdefault("marker", marker)
            fmt = fmt.replace(marker, "")
            break
    if fmt:
        lineoptions.setdefault("color", fmt)

    if line == "45":
        end_pts = lzip(ax.get_xlim(), ax.get_ylim())
        end_pts[0] = min(end_pts[0])
        end_pts[1] = max(end_pts[1])
        ax.plot(end_pts, end_pts, **lineoptions)
        ax.set_xlim(end_pts)
        ax.set_ylim(end_pts)
        return  # does this have any side effects?
    if x is None or y is None:
        raise ValueError("If line is not 45, x and y cannot be None.")
    x = np.array(x)
    y = np.array(y)
    if line == "r":
        # could use ax.lines[0].get_xdata(), get_ydata(),
        # but don't know axes are "clean"
        y = OLS(y, add_constant(x)).fit().fittedvalues
        ax.plot(x, y, **lineoptions)
    elif line == "s":
        m, b = np.std(y), np.mean(y)
        ref_line = x * m + b
        ax.plot(x, ref_line, **lineoptions)
    elif line == "q":
        _check_for(dist, "ppf")
        q25 = stats.scoreatpercentile(y, 25)
        q75 = stats.scoreatpercentile(y, 75)
        theoretical_quartiles = dist.ppf([0.25, 0.75])
        m = (q75 - q25) / np.diff(theoretical_quartiles)
        b = q25 - m * theoretical_quartiles[0]
        ax.plot(x, m * x + b, **lineoptions)
Plot a reference line for a qqplot.

Parameters
----------
ax : matplotlib axes instance
    The axes on which to plot the line
line : str {"45","r","s","q"}
    Options for the reference line to which the data is compared:

    - "45" - 45-degree line
    - "s" - standardized line, the expected order statistics are scaled
      by the standard deviation of the given sample and have the mean
      added to them
    - "r" - A regression line is fit
    - "q" - A line is fit through the quartiles.
    - None - By default no reference line is added to the plot.

x : ndarray
    X data for plot. Not needed if line is "45".
y : ndarray
    Y data for plot. Not needed if line is "45".
dist : scipy.stats.distribution
    A scipy.stats distribution, needed if line is "q".
fmt : str, optional
    Line format string passed to `plot`.
**lineoptions
    Additional arguments to be passed to the `plot` command.

Notes
-----
There is no return value. The line is plotted on the given `ax`.

Examples
--------
Import the food expenditure dataset.  Plot annual food expenditure on
x-axis and household income on y-axis.  Use qqline to add regression line
into the plot.

>>> import statsmodels.api as sm
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from statsmodels.graphics.gofplots import qqline
>>> foodexp = sm.datasets.engel.load()
>>> x = foodexp.exog
>>> y = foodexp.endog
>>> ax = plt.subplot(111)
>>> plt.scatter(x, y)
>>> ax.set_xlabel(foodexp.exog_name[0])
>>> ax.set_ylabel(foodexp.endog_name)
>>> qqline(ax, "r", x, y)
>>> plt.show()

.. plot:: plots/graphics_gofplots_qqplot_qqline.py
qqline
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
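The `line="q"` branch fits a line through the first and third quartiles: slope is the ratio of the sample interquartile range to the theoretical one, and the intercept anchors the line at the first quartile. A hypothetical stdlib-only sketch of that arithmetic (`quartile_line` is an invented name; `statistics` stands in for scipy and numpy):

```python
from statistics import NormalDist, quantiles

def quartile_line(y, dist=NormalDist()):
    """Slope and intercept of a quartile reference line, in the spirit of
    qqline's "q" option: match sample quartiles of y against the
    distribution's theoretical quartiles (inv_cdf plays the role of ppf).
    """
    q25, _, q75 = quantiles(y, n=4, method="inclusive")
    t25, t75 = dist.inv_cdf(0.25), dist.inv_cdf(0.75)
    m = (q75 - q25) / (t75 - t25)       # sample IQR over theoretical IQR
    b = q25 - m * t25                   # line passes through (t25, q25)
    return m, b
```

For data that really are an affine transform of normal scores, the recovered slope and intercept approximate that transform, which is exactly what makes the line a useful visual reference.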
def plotting_pos(nobs, a=0.0, b=None):
    """
    Generates sequence of plotting positions

    Parameters
    ----------
    nobs : int
        Number of probability points to plot
    a : float, default 0.0
        alpha parameter for the plotting position of an expected order
        statistic
    b : float, default None
        beta parameter for the plotting position of an expected order
        statistic. If None, then b is set to a.

    Returns
    -------
    ndarray
        The plotting positions

    Notes
    -----
    The plotting positions are given by (i - a)/(nobs + 1 - a - b) for i in
    range(1, nobs+1)

    See Also
    --------
    scipy.stats.mstats.plotting_positions
        Additional information on alpha and beta
    """
    b = a if b is None else b
    return (np.arange(1.0, nobs + 1) - a) / (nobs + 1 - a - b)
Generates sequence of plotting positions

Parameters
----------
nobs : int
    Number of probability points to plot
a : float, default 0.0
    alpha parameter for the plotting position of an expected order
    statistic
b : float, default None
    beta parameter for the plotting position of an expected order
    statistic. If None, then b is set to a.

Returns
-------
ndarray
    The plotting positions

Notes
-----
The plotting positions are given by (i - a)/(nobs + 1 - a - b) for i in
range(1, nobs+1)

See Also
--------
scipy.stats.mstats.plotting_positions
    Additional information on alpha and beta
plotting_pos
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
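The formula in the Notes is simple enough to check by hand. A pure-Python restatement of it (an illustrative helper, not the numpy-based statsmodels function):

```python
def plotting_positions(nobs, a=0.0, b=None):
    """(i - a) / (nobs + 1 - a - b) for i = 1..nobs, mirroring
    plotting_pos's formula with b defaulting to a."""
    b = a if b is None else b
    return [(i - a) / (nobs + 1 - a - b) for i in range(1, nobs + 1)]
```

With the defaults (`a = b = 0`) this yields i/(nobs + 1), the classic Weibull plotting positions; `a = b = 0.5` gives the Hazen positions (i - 0.5)/nobs.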
def _fmt_probplot_axis(ax, dist, nobs):
    """
    Formats a theoretical quantile axis to display the corresponding
    probabilities on the quantiles' scale.

    Parameters
    ----------
    ax : AxesSubplot, optional
        The axis to be formatted
    nobs : scalar
        Number of observations in the sample
    dist : scipy.stats.distribution
        A scipy.stats distribution sufficiently specified to implement its
        ppf() method.

    Returns
    -------
    There is no return value. This operates on `ax` in place
    """
    _check_for(dist, "ppf")
    axis_probs = np.linspace(10, 90, 9, dtype=float)
    small = np.array([1.0, 2, 5])
    axis_probs = np.r_[small, axis_probs, 100 - small[::-1]]
    if nobs >= 50:
        axis_probs = np.r_[small / 10, axis_probs, 100 - small[::-1] / 10]
    if nobs >= 500:
        axis_probs = np.r_[small / 100, axis_probs, 100 - small[::-1] / 100]
    axis_probs /= 100.0
    axis_qntls = dist.ppf(axis_probs)
    ax.set_xticks(axis_qntls)
    ax.set_xticklabels(
        [str(lbl) for lbl in (axis_probs * 100)],
        rotation=45,
        rotation_mode="anchor",
        horizontalalignment="right",
        verticalalignment="center",
    )
    ax.set_xlim([axis_qntls.min(), axis_qntls.max()])
Formats a theoretical quantile axis to display the corresponding
probabilities on the quantiles' scale.

Parameters
----------
ax : AxesSubplot, optional
    The axis to be formatted
nobs : scalar
    Number of observations in the sample
dist : scipy.stats.distribution
    A scipy.stats distribution sufficiently specified to implement its
    ppf() method.

Returns
-------
There is no return value. This operates on `ax` in place
_fmt_probplot_axis
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
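The tick construction extends symmetrically into the tails as the sample grows: the base ticks 1, 2, 5, 10..90, 95, 98, 99 gain 0.1..0.5 / 99.5..99.9 at 50 observations and 0.01..0.05 / 99.95..99.99 at 500. A plain-list sketch of the same rule (`axis_probabilities` is an invented name; the real function builds a numpy array and then sets the tick labels):

```python
def axis_probabilities(nobs):
    """Probabilities (in percent) used to label a probplot axis; more
    extreme ticks are added as the sample size grows, as in
    _fmt_probplot_axis."""
    small = [1.0, 2.0, 5.0]
    probs = (small
             + [float(p) for p in range(10, 100, 10)]
             + [100 - s for s in reversed(small)])
    if nobs >= 50:
        probs = ([s / 10 for s in small] + probs
                 + [100 - s / 10 for s in reversed(small)])
    if nobs >= 500:
        probs = ([s / 100 for s in small] + probs
                 + [100 - s / 100 for s in reversed(small)])
    return probs
```

The motivation for the thresholds: with n observations you only expect data beyond the p-th percentile when p is roughly within 1/n of 0 or 100, so finer tail ticks are pointless for small samples.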
def _do_plot(x, y, dist=None, line=None, ax=None, fmt="b", step=False, **kwargs):
    """
    Boiler plate plotting function for the `ppplot`, `qqplot`, and `probplot`
    methods of the `ProbPlot` class

    Parameters
    ----------
    x : array_like
        X-axis data to be plotted
    y : array_like
        Y-axis data to be plotted
    dist : scipy.stats.distribution
        A scipy.stats distribution, needed if `line` is "q".
    line : {"45", "s", "r", "q", None}, default None
        Options for the reference line to which the data is compared.
    ax : AxesSubplot, optional
        If given, this subplot is used to plot in instead of a new figure
        being created.
    fmt : str, optional
        matplotlib-compatible formatting string for the data markers
    kwargs : keywords
        These are passed to matplotlib.plot

    Returns
    -------
    fig : Figure
        The figure containing `ax`.
    ax : AxesSubplot
        The original axes if provided.  Otherwise a new instance.
    """
    plot_style = {
        "marker": "o",
        "markerfacecolor": "C0",
        "markeredgecolor": "C0",
        "linestyle": "none",
    }
    plot_style.update(**kwargs)
    where = plot_style.pop("where", "pre")
    fig, ax = utils.create_mpl_ax(ax)
    ax.set_xmargin(0.02)
    if step:
        ax.step(x, y, fmt, where=where, **plot_style)
    else:
        ax.plot(x, y, fmt, **plot_style)
    if line:
        if line not in ["r", "q", "45", "s"]:
            msg = "%s option for line not understood" % line
            raise ValueError(msg)
        qqline(ax, line, x=x, y=y, dist=dist)
    return fig, ax
Boiler plate plotting function for the `ppplot`, `qqplot`, and `probplot`
methods of the `ProbPlot` class

Parameters
----------
x : array_like
    X-axis data to be plotted
y : array_like
    Y-axis data to be plotted
dist : scipy.stats.distribution
    A scipy.stats distribution, needed if `line` is "q".
line : {"45", "s", "r", "q", None}, default None
    Options for the reference line to which the data is compared.
ax : AxesSubplot, optional
    If given, this subplot is used to plot in instead of a new figure
    being created.
fmt : str, optional
    matplotlib-compatible formatting string for the data markers
kwargs : keywords
    These are passed to matplotlib.plot

Returns
-------
fig : Figure
    The figure containing `ax`.
ax : AxesSubplot
    The original axes if provided.  Otherwise a new instance.
_do_plot
python
statsmodels/statsmodels
statsmodels/graphics/gofplots.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/gofplots.py
BSD-3-Clause
def plot_corr(dcorr, xnames=None, ynames=None, title=None, normcolor=False,
              ax=None, cmap='RdYlBu_r'):
    """Plot correlation of many variables in a tight color grid.

    Parameters
    ----------
    dcorr : ndarray
        Correlation matrix, square 2-D array.
    xnames : list[str], optional
        Labels for the horizontal axis.  If not given (None), then the
        matplotlib defaults (integers) are used.  If it is an empty list, [],
        then no ticks and labels are added.
    ynames : list[str], optional
        Labels for the vertical axis.  Works the same way as `xnames`.
        If not given, the same names as for `xnames` are re-used.
    title : str, optional
        The figure title. If None, the default ('Correlation Matrix') is
        used.  If ``title=''``, then no title is added.
    normcolor : bool or tuple of scalars, optional
        If False (default), then the color coding range corresponds to the
        range of `dcorr`.  If True, then the color range is normalized to
        (-1, 1).  If this is a tuple of two numbers, then they define the
        range for the color bar.
    ax : AxesSubplot, optional
        If `ax` is None, then a figure is created. If an axis instance is
        given, then only the main plot but not the colorbar is created.
    cmap : str or Matplotlib Colormap instance, optional
        The colormap for the plot.  Can be any valid Matplotlib Colormap
        instance or name.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    Examples
    --------
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> import statsmodels.graphics.api as smg

    >>> hie_data = sm.datasets.randhie.load_pandas()
    >>> corr_matrix = np.corrcoef(hie_data.data.T)
    >>> smg.plot_corr(corr_matrix, xnames=hie_data.names)
    >>> plt.show()

    .. plot:: plots/graphics_correlation_plot_corr.py
    """
    if ax is None:
        create_colorbar = True
    else:
        create_colorbar = False

    fig, ax = utils.create_mpl_ax(ax)

    nvars = dcorr.shape[0]

    if ynames is None:
        ynames = xnames
    if title is None:
        title = 'Correlation Matrix'
    if isinstance(normcolor, tuple):
        vmin, vmax = normcolor
    elif normcolor:
        vmin, vmax = -1.0, 1.0
    else:
        vmin, vmax = None, None

    axim = ax.imshow(dcorr, cmap=cmap, interpolation='nearest',
                     extent=(0, nvars, 0, nvars), vmin=vmin, vmax=vmax)

    # create list of label positions
    labelPos = np.arange(0, nvars) + 0.5

    if isinstance(ynames, list) and len(ynames) == 0:
        ax.set_yticks([])
    elif ynames is not None:
        ax.set_yticks(labelPos)
        ax.set_yticks(labelPos[:-1] + 0.5, minor=True)
        ax.set_yticklabels(ynames[::-1], fontsize='small',
                           horizontalalignment='right')

    if isinstance(xnames, list) and len(xnames) == 0:
        ax.set_xticks([])
    elif xnames is not None:
        ax.set_xticks(labelPos)
        ax.set_xticks(labelPos[:-1] + 0.5, minor=True)
        ax.set_xticklabels(xnames, fontsize='small', rotation=45,
                           horizontalalignment='right')

    if not title == '':
        ax.set_title(title)

    if create_colorbar:
        fig.colorbar(axim, use_gridspec=True)
    fig.tight_layout()

    ax.tick_params(which='minor', length=0)
    ax.tick_params(direction='out', top=False, right=False)
    try:
        ax.grid(True, which='minor', linestyle='-', color='w', lw=1)
    except AttributeError:
        # Seems to fail for axes created with AxesGrid.  MPL bug?
        pass

    return fig
Plot correlation of many variables in a tight color grid.

Parameters
----------
dcorr : ndarray
    Correlation matrix, square 2-D array.
xnames : list[str], optional
    Labels for the horizontal axis.  If not given (None), then the
    matplotlib defaults (integers) are used.  If it is an empty list, [],
    then no ticks and labels are added.
ynames : list[str], optional
    Labels for the vertical axis.  Works the same way as `xnames`.
    If not given, the same names as for `xnames` are re-used.
title : str, optional
    The figure title. If None, the default ('Correlation Matrix') is used.
    If ``title=''``, then no title is added.
normcolor : bool or tuple of scalars, optional
    If False (default), then the color coding range corresponds to the
    range of `dcorr`.  If True, then the color range is normalized to
    (-1, 1).  If this is a tuple of two numbers, then they define the
    range for the color bar.
ax : AxesSubplot, optional
    If `ax` is None, then a figure is created. If an axis instance is
    given, then only the main plot but not the colorbar is created.
cmap : str or Matplotlib Colormap instance, optional
    The colormap for the plot.  Can be any valid Matplotlib Colormap
    instance or name.

Returns
-------
Figure
    If `ax` is None, the created figure.  Otherwise the figure to which
    `ax` is connected.

Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import statsmodels.graphics.api as smg

>>> hie_data = sm.datasets.randhie.load_pandas()
>>> corr_matrix = np.corrcoef(hie_data.data.T)
>>> smg.plot_corr(corr_matrix, xnames=hie_data.names)
>>> plt.show()

.. plot:: plots/graphics_correlation_plot_corr.py
plot_corr
python
statsmodels/statsmodels
statsmodels/graphics/correlation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/correlation.py
BSD-3-Clause
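The `normcolor` argument maps to matplotlib's `vmin`/`vmax` in three ways: a tuple is used directly, `True` pins the range to (-1, 1) so that colors are comparable across plots, and `False` lets matplotlib scale to the data. A small stand-alone restatement of that dispatch (`color_limits` is a name invented for this sketch):

```python
def color_limits(normcolor):
    """Translate plot_corr's `normcolor` argument into (vmin, vmax)."""
    if isinstance(normcolor, tuple):
        return normcolor                 # explicit color-bar range
    if normcolor:
        return (-1.0, 1.0)               # full correlation range
    return (None, None)                  # let matplotlib use the data range
```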
def plot_corr_grid(dcorrs, titles=None, ncols=None, normcolor=False,
                   xnames=None, ynames=None, fig=None, cmap='RdYlBu_r'):
    """
    Create a grid of correlation plots.

    The individual correlation plots are assumed to all have the same
    variables, axis labels can be specified only once.

    Parameters
    ----------
    dcorrs : list or iterable of ndarrays
        List of correlation matrices.
    titles : list[str], optional
        List of titles for the subplots.  By default no titles are shown.
    ncols : int, optional
        Number of columns in the subplot grid.  If not given, the number of
        columns is determined automatically.
    normcolor : bool or tuple, optional
        If False (default), then the color coding range corresponds to the
        range of `dcorr`.  If True, then the color range is normalized to
        (-1, 1).  If this is a tuple of two numbers, then they define the
        range for the color bar.
    xnames : list[str], optional
        Labels for the horizontal axis.  If not given (None), then the
        matplotlib defaults (integers) are used.  If it is an empty list, [],
        then no ticks and labels are added.
    ynames : list[str], optional
        Labels for the vertical axis.  Works the same way as `xnames`.
        If not given, the same names as for `xnames` are re-used.
    fig : Figure, optional
        If given, this figure is simply returned.  Otherwise a new figure is
        created.
    cmap : str or Matplotlib Colormap instance, optional
        The colormap for the plot.  Can be any valid Matplotlib Colormap
        instance or name.

    Returns
    -------
    Figure
        If `ax` is None, the created figure.  Otherwise the figure to which
        `ax` is connected.

    Examples
    --------
    >>> import numpy as np
    >>> import matplotlib.pyplot as plt
    >>> import statsmodels.api as sm

    In this example we just reuse the same correlation matrix several times.
    Of course in reality one would show a different correlation (measuring
    another type of correlation, for example Pearson (linear) and Spearman,
    Kendall (nonlinear) correlations) for the same variables.

    >>> hie_data = sm.datasets.randhie.load_pandas()
    >>> corr_matrix = np.corrcoef(hie_data.data.T)
    >>> sm.graphics.plot_corr_grid([corr_matrix] * 8, xnames=hie_data.names)
    >>> plt.show()

    .. plot:: plots/graphics_correlation_plot_corr_grid.py
    """
    if ynames is None:
        ynames = xnames

    if not titles:
        titles = [''] * len(dcorrs)

    n_plots = len(dcorrs)
    if ncols is not None:
        nrows = int(np.ceil(n_plots / float(ncols)))
    else:
        # Determine number of rows and columns, square if possible, otherwise
        # prefer a wide (more columns) over a high layout.
        if n_plots < 4:
            nrows, ncols = 1, n_plots
        else:
            nrows = int(np.sqrt(n_plots))
            ncols = int(np.ceil(n_plots / float(nrows)))

    # Create a figure with the correct size
    aspect = min(ncols / float(nrows), 1.8)
    vsize = np.sqrt(nrows) * 5
    fig = utils.create_mpl_fig(fig, figsize=(vsize * aspect + 1, vsize))

    for i, c in enumerate(dcorrs):
        ax = fig.add_subplot(nrows, ncols, i + 1)
        # Ensure to only plot labels on bottom row and left column
        _xnames = xnames if nrows * ncols - (i + 1) < ncols else []
        _ynames = ynames if (i + 1) % ncols == 1 else []
        plot_corr(c, xnames=_xnames, ynames=_ynames, title=titles[i],
                  normcolor=normcolor, ax=ax, cmap=cmap)

    # Adjust figure margins and add a colorbar
    fig.subplots_adjust(bottom=0.1, left=0.09, right=0.9, top=0.9)
    cax = fig.add_axes([0.92, 0.1, 0.025, 0.8])
    fig.colorbar(fig.axes[0].images[0], cax=cax)

    return fig
Create a grid of correlation plots.

The individual correlation plots are assumed to all have the same
variables, axis labels can be specified only once.

Parameters
----------
dcorrs : list or iterable of ndarrays
    List of correlation matrices.
titles : list[str], optional
    List of titles for the subplots.  By default no titles are shown.
ncols : int, optional
    Number of columns in the subplot grid.  If not given, the number of
    columns is determined automatically.
normcolor : bool or tuple, optional
    If False (default), then the color coding range corresponds to the
    range of `dcorr`.  If True, then the color range is normalized to
    (-1, 1).  If this is a tuple of two numbers, then they define the
    range for the color bar.
xnames : list[str], optional
    Labels for the horizontal axis.  If not given (None), then the
    matplotlib defaults (integers) are used.  If it is an empty list, [],
    then no ticks and labels are added.
ynames : list[str], optional
    Labels for the vertical axis.  Works the same way as `xnames`.
    If not given, the same names as for `xnames` are re-used.
fig : Figure, optional
    If given, this figure is simply returned.  Otherwise a new figure is
    created.
cmap : str or Matplotlib Colormap instance, optional
    The colormap for the plot.  Can be any valid Matplotlib Colormap
    instance or name.

Returns
-------
Figure
    If `ax` is None, the created figure.  Otherwise the figure to which
    `ax` is connected.

Examples
--------
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import statsmodels.api as sm

In this example we just reuse the same correlation matrix several times.
Of course in reality one would show a different correlation (measuring
another type of correlation, for example Pearson (linear) and Spearman,
Kendall (nonlinear) correlations) for the same variables.

>>> hie_data = sm.datasets.randhie.load_pandas()
>>> corr_matrix = np.corrcoef(hie_data.data.T)
>>> sm.graphics.plot_corr_grid([corr_matrix] * 8, xnames=hie_data.names)
>>> plt.show()

.. plot:: plots/graphics_correlation_plot_corr_grid.py
plot_corr_grid
python
statsmodels/statsmodels
statsmodels/graphics/correlation.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/correlation.py
BSD-3-Clause
def _make_ellipse(mean, cov, ax, level=0.95, color=None): """Support function for scatter_ellipse.""" from matplotlib.patches import Ellipse v, w = np.linalg.eigh(cov) u = w[0] / np.linalg.norm(w[0]) angle = np.arctan(u[1]/u[0]) angle = 180 * angle / np.pi # convert to degrees v = 2 * np.sqrt(v * stats.chi2.ppf(level, 2)) #get size corresponding to level ell = Ellipse(mean[:2], v[0], v[1], angle=180 + angle, facecolor='none', edgecolor=color, #ls='dashed', #for debugging lw=1.5) ell.set_clip_box(ax.bbox) ell.set_alpha(0.5) ax.add_artist(ell)
Support function for scatter_ellipse.
_make_ellipse
python
statsmodels/statsmodels
statsmodels/graphics/plot_grids.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/plot_grids.py
BSD-3-Clause
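The geometry in `_make_ellipse` can be checked in isolation: the eigendecomposition supplies the principal axes, and the chi-square quantile with 2 degrees of freedom sets the size for the requested confidence level. A standalone sketch (`ellipse_params` is an illustrative helper, not statsmodels API; it uses `arctan2` to avoid the explicit division):

```python
import numpy as np
from scipy import stats

def ellipse_params(cov, level=0.95):
    # eigendecomposition gives principal axes; the chi2 quantile sets the size
    v, w = np.linalg.eigh(cov)
    u = w[0] / np.linalg.norm(w[0])
    angle = np.degrees(np.arctan2(u[1], u[0]))
    widths = 2 * np.sqrt(v * stats.chi2.ppf(level, 2))
    return widths, angle

# for an identity covariance, both full widths are 2*sqrt(chi2.ppf(0.95, 2))
widths, angle = ellipse_params(np.eye(2), level=0.95)
```

With `cov = np.eye(2)` both widths come out near 4.90, which is the familiar "2.45 sigma" radius of a 95% bivariate normal region.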
def scatter_ellipse(data, level=0.9, varnames=None, ell_kwds=None,
                    plot_kwds=None, add_titles=False, keep_ticks=False,
                    fig=None):
    """Create a grid of scatter plots with confidence ellipses.

    Looks ok with 5 or 6 variables, too crowded with 8, too empty with 1.

    Parameters
    ----------
    data : array_like
        Input data.
    level : scalar, optional
        Confidence level for the ellipses. Default is 0.9.
    varnames : list[str], optional
        Variable names. Used for y-axis labels, and if `add_titles` is True
        also for titles. If not given, integers 1..data.shape[1] are used.
    ell_kwds : dict, optional
        Keyword arguments for the confidence ellipses, merged into the
        default ``dict(color='k')``.
    plot_kwds : dict, optional
        Keyword arguments for the scatter plots, merged into the default
        ``dict(ls='none', marker='.', color='k', alpha=0.5)``.
    add_titles : bool, optional
        Whether or not to add titles to each subplot. Default is False.
        Titles are constructed from `varnames`.
    keep_ticks : bool, optional
        If False (default), remove all axis ticks.
    fig : Figure, optional
        If given, this figure is simply returned. Otherwise a new figure is
        created.

    Returns
    -------
    Figure
        If `fig` is None, the created figure. Otherwise `fig` itself.

    Examples
    --------
    >>> import statsmodels.api as sm
    >>> import matplotlib.pyplot as plt
    >>> import numpy as np
    >>> from statsmodels.graphics.plot_grids import scatter_ellipse
    >>> data = sm.datasets.statecrime.load_pandas().data
    >>> fig = plt.figure(figsize=(8, 8))
    >>> scatter_ellipse(data, varnames=data.columns, fig=fig)
    >>> plt.show()

    .. plot:: plots/graphics_plot_grids_scatter_ellipse.py
    """
    fig = utils.create_mpl_fig(fig)
    import matplotlib.ticker as mticker

    data = np.asanyarray(data)  # needs mean and cov
    nvars = data.shape[1]
    if varnames is None:
        # assuming single digit, nvars <= 10, else use 'var%2d'
        varnames = ['var%d' % i for i in range(nvars)]

    plot_kwds_ = dict(ls='none', marker='.', color='k', alpha=0.5)
    if plot_kwds:
        plot_kwds_.update(plot_kwds)

    ell_kwds_ = dict(color='k')
    if ell_kwds:
        ell_kwds_.update(ell_kwds)

    dmean = data.mean(0)
    dcov = np.cov(data, rowvar=0)

    for i in range(1, nvars):
        for j in range(i):
            ax = fig.add_subplot(nvars - 1, nvars - 1,
                                 (i - 1) * (nvars - 1) + j + 1)
            ## #sharey=ax_last)  #sharey does not allow empty ticks?
            ## if j == 0:
            ##     ax_last = ax
            ##     ax.set_ylabel(varnames[i])
            # TODO: make sure we have same xlim and ylim

            formatter = mticker.FormatStrFormatter('% 3.1f')
            ax.yaxis.set_major_formatter(formatter)
            ax.xaxis.set_major_formatter(formatter)

            idx = np.array([j, i])
            ax.plot(*data[:, idx].T, **plot_kwds_)

            if np.isscalar(level):
                level = [level]
            for alpha in level:
                _make_ellipse(dmean[idx], dcov[idx[:, None], idx], ax,
                              level=alpha, **ell_kwds_)

            if add_titles:
                ax.set_title(f'{varnames[i]}-{varnames[j]}')
            if not ax.get_subplotspec().is_first_col():
                if not keep_ticks:
                    ax.set_yticks([])
                else:
                    ax.yaxis.set_major_locator(mticker.MaxNLocator(3))
            else:
                ax.set_ylabel(varnames[i])
            if ax.get_subplotspec().is_last_row():
                ax.set_xlabel(varnames[j])
            else:
                if not keep_ticks:
                    ax.set_xticks([])
                else:
                    ax.xaxis.set_major_locator(mticker.MaxNLocator(3))

            dcorr = np.corrcoef(data, rowvar=0)
            dc = dcorr[idx[:, None], idx]
            xlim = ax.get_xlim()
            ylim = ax.get_ylim()
            yrangeq = ylim[0] + 0.4 * (ylim[1] - ylim[0])
            if dc[1, 0] < -0.25 or (dc[1, 0] < 0.25 and dmean[idx][1] > yrangeq):
                yt = ylim[0] + 0.1 * (ylim[1] - ylim[0])
            else:
                yt = ylim[1] - 0.2 * (ylim[1] - ylim[0])
            xt = xlim[0] + 0.1 * (xlim[1] - xlim[0])
            ax.text(xt, yt, '$\\rho=%0.2f$' % dc[1, 0])

    for ax in fig.axes:
        if ax.get_subplotspec().is_last_row():  # or ax.is_first_col():
            ax.xaxis.set_major_locator(mticker.MaxNLocator(3))
        if ax.get_subplotspec().is_first_col():
            ax.yaxis.set_major_locator(mticker.MaxNLocator(3))

    return fig
Create a grid of scatter plots with confidence ellipses.

Looks ok with 5 or 6 variables, too crowded with 8, too empty with 1.

Parameters
----------
data : array_like
    Input data.
level : scalar, optional
    Confidence level for the ellipses. Default is 0.9.
varnames : list[str], optional
    Variable names. Used for y-axis labels, and if `add_titles` is True also for titles. If not given, integers 1..data.shape[1] are used.
ell_kwds : dict, optional
    Keyword arguments for the confidence ellipses, merged into the default ``dict(color='k')``.
plot_kwds : dict, optional
    Keyword arguments for the scatter plots, merged into the default ``dict(ls='none', marker='.', color='k', alpha=0.5)``.
add_titles : bool, optional
    Whether or not to add titles to each subplot. Default is False. Titles are constructed from `varnames`.
keep_ticks : bool, optional
    If False (default), remove all axis ticks.
fig : Figure, optional
    If given, this figure is simply returned. Otherwise a new figure is created.

Returns
-------
Figure
    If `fig` is None, the created figure. Otherwise `fig` itself.

Examples
--------
>>> import statsmodels.api as sm
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from statsmodels.graphics.plot_grids import scatter_ellipse
>>> data = sm.datasets.statecrime.load_pandas().data
>>> fig = plt.figure(figsize=(8, 8))
>>> scatter_ellipse(data, varnames=data.columns, fig=fig)
>>> plt.show()

.. plot:: plots/graphics_plot_grids_scatter_ellipse.py
scatter_ellipse
python
statsmodels/statsmodels
statsmodels/graphics/plot_grids.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/plot_grids.py
BSD-3-Clause
def harmfunc(t): """Test function, combination of a few harmonic terms.""" # Constant, 0 with p=0.9, 1 with p=1 - for creating outliers ci = int(np.random.random() > 0.9) a1i = np.random.random() * 0.05 a2i = np.random.random() * 0.05 b1i = (0.15 - 0.1) * np.random.random() + 0.1 b2i = (0.15 - 0.1) * np.random.random() + 0.1 func = (1 - ci) * (a1i * np.sin(t) + a2i * np.cos(t)) + \ ci * (b1i * np.sin(t) + b2i * np.cos(t)) return func
Test function, combination of a few harmonic terms.
test_fboxplot_rainbowplot.harmfunc
python
statsmodels/statsmodels
statsmodels/graphics/tests/test_functional.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tests/test_functional.py
BSD-3-Clause
def test_fboxplot_rainbowplot(close_figures): # Test fboxplot and rainbowplot together, is much faster. def harmfunc(t): """Test function, combination of a few harmonic terms.""" # Constant, 0 with p=0.9, 1 with p=1 - for creating outliers ci = int(np.random.random() > 0.9) a1i = np.random.random() * 0.05 a2i = np.random.random() * 0.05 b1i = (0.15 - 0.1) * np.random.random() + 0.1 b2i = (0.15 - 0.1) * np.random.random() + 0.1 func = (1 - ci) * (a1i * np.sin(t) + a2i * np.cos(t)) + \ ci * (b1i * np.sin(t) + b2i * np.cos(t)) return func np.random.seed(1234567) # Some basic test data, Model 6 from Sun and Genton. t = np.linspace(0, 2 * np.pi, 250) data = [harmfunc(t) for _ in range(20)] # fboxplot test fig = plt.figure() ax = fig.add_subplot(111) _, depth, ix_depth, ix_outliers = fboxplot(data, wfactor=2, ax=ax) ix_expected = np.array([13, 4, 15, 19, 8, 6, 3, 16, 9, 7, 1, 5, 2, 12, 17, 11, 14, 10, 0, 18]) assert_equal(ix_depth, ix_expected) ix_expected2 = np.array([2, 11, 17, 18]) assert_equal(ix_outliers, ix_expected2) # rainbowplot test (re-uses depth variable) xdata = np.arange(data[0].size) fig = rainbowplot(data, xdata=xdata, depth=depth, cmap=plt.cm.rainbow)
Test function, combination of a few harmonic terms.
test_fboxplot_rainbowplot
python
statsmodels/statsmodels
statsmodels/graphics/tests/test_functional.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/graphics/tests/test_functional.py
BSD-3-Clause
def _var_normal(norm):
    """Variance factor for asymptotic relative efficiency of mean M-estimator.

    The reference distribution is the standard normal distribution.
    This assumes that the psi function is continuous.

    Relative efficiency is 1 / var_normal.

    Parameters
    ----------
    norm : instance of a RobustNorm subclass.
        Norm for which variance for relative efficiency is computed.

    Returns
    -------
    Variance factor.

    Notes
    -----
    This function does not verify that the assumptions on the psi function
    and its derivative hold.

    S-estimators for mean and regression have the same variance and
    efficiency computation as M-estimators. Therefore, this function can
    also be used for S-estimators and other estimators that are locally
    equivalent to an M-estimator.

    Examples
    --------
    The following computes the relative efficiency of an M-estimator for the
    mean using the HuberT norm. At the default tuning parameter, the relative
    efficiency is 95%.

    >>> from statsmodels.robust import norms
    >>> v = _var_normal(norms.HuberT())
    >>> eff = 1 / v
    >>> v, eff
    (1.0526312909084732, 0.9500002599551741)

    References
    ----------
    Menenez et al.; the result is also found in standard textbooks on
    robust statistics.
    """
    num = stats.norm.expect(lambda x: norm.psi(x) ** 2)
    denom = stats.norm.expect(lambda x: norm.psi_deriv(x))**2
    return num / denom
Variance factor for asymptotic relative efficiency of mean M-estimator.

The reference distribution is the standard normal distribution. This assumes that the psi function is continuous.

Relative efficiency is 1 / var_normal.

Parameters
----------
norm : instance of a RobustNorm subclass.
    Norm for which variance for relative efficiency is computed.

Returns
-------
Variance factor.

Notes
-----
This function does not verify that the assumptions on the psi function and its derivative hold.

S-estimators for mean and regression have the same variance and efficiency computation as M-estimators. Therefore, this function can also be used for S-estimators and other estimators that are locally equivalent to an M-estimator.

Examples
--------
The following computes the relative efficiency of an M-estimator for the mean using the HuberT norm. At the default tuning parameter, the relative efficiency is 95%.

>>> from statsmodels.robust import norms
>>> v = _var_normal(norms.HuberT())
>>> eff = 1 / v
>>> v, eff
(1.0526312909084732, 0.9500002599551741)

References
----------
Menenez et al.; the result is also found in standard textbooks on robust statistics.
_var_normal
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
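The variance-factor formula above can be reproduced without statsmodels by writing out Huber's psi directly. This is a sketch, not the library implementation: `c = 1.345` is HuberT's default tuning constant, `np.clip` plays the role of `norm.psi`, and the expectation of the psi derivative reduces to a normal-CDF difference:

```python
import numpy as np
from scipy import stats

c = 1.345  # HuberT's default tuning constant

# E[psi(x)^2] under the standard normal, psi(x) = clip(x, -c, c)
num = stats.norm.expect(lambda x: np.clip(x, -c, c) ** 2)
# E[psi'(x)] = P(|x| <= c) for the Huber psi, computed exactly from the CDF
denom = (stats.norm.cdf(c) - stats.norm.cdf(-c)) ** 2

v = num / denom   # variance factor
eff = 1 / v       # asymptotic relative efficiency
```

The reciprocal recovers the 95% relative efficiency quoted in the docstring example.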
def _var_normal_jump(norm):
    """Variance factor for asymptotic relative efficiency of mean M-estimator.

    The reference distribution is the standard normal distribution.
    This allows for the case when the psi function is not continuous, i.e.
    has jumps as in the TrimmedMean norm.

    Relative efficiency is 1 / var_normal.

    Parameters
    ----------
    norm : instance of a RobustNorm subclass.
        Norm for which variance for relative efficiency is computed.

    Returns
    -------
    Variance factor.

    Notes
    -----
    This function does not verify that the assumptions on the psi function
    and its derivative hold.

    Examples
    --------
    >>> from statsmodels.robust import norms
    >>> v = _var_normal_jump(norms.HuberT())
    >>> eff = 1 / v
    >>> v, eff
    (1.0526312908510451, 0.950000260007003)

    References
    ----------
    Menenez et al.; the result is also found in standard textbooks on
    robust statistics.
    """
    num = stats.norm.expect(lambda x: norm.psi(x)**2)

    def func(x):
        # derivative of the standard normal pdf:
        # d/dx(exp(-x^2/2)/sqrt(2 π)) = -(e^(-x^2/2) x)/sqrt(2 π)
        return norm.psi(x) * (- x * np.exp(-x**2/2) / np.sqrt(2 * np.pi))

    denom = integrate.quad(func, -np.inf, np.inf)[0]
    return num / denom**2
Variance factor for asymptotic relative efficiency of mean M-estimator.

The reference distribution is the standard normal distribution. This allows for the case when the psi function is not continuous, i.e. has jumps as in the TrimmedMean norm.

Relative efficiency is 1 / var_normal.

Parameters
----------
norm : instance of a RobustNorm subclass.
    Norm for which variance for relative efficiency is computed.

Returns
-------
Variance factor.

Notes
-----
This function does not verify that the assumptions on the psi function and its derivative hold.

Examples
--------
>>> from statsmodels.robust import norms
>>> v = _var_normal_jump(norms.HuberT())
>>> eff = 1 / v
>>> v, eff
(1.0526312908510451, 0.950000260007003)

References
----------
Menenez et al.; the result is also found in standard textbooks on robust statistics.
_var_normal_jump
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def _get_tuning_param(norm, eff, kwd="c", kwargs=None, use_jump=False,
                      bracket=None,
                      ):
    """Tuning parameter for RLM norms for required relative efficiency.

    Parameters
    ----------
    norm : instance of RobustNorm subclass
    eff : float in (0, 1)
        Required asymptotic relative efficiency compared to least squares
        at the normal reference distribution. For example, ``eff=0.95`` for
        95% efficiency.
    kwd : str
        Name of keyword for tuning parameter.
    kwargs : dict or None
        Dict for other keyword parameters.
    use_jump : bool
        If False (default), then use the computation that requires a
        continuous psi function. If True, then use the computation that
        allows the psi function to have jump discontinuities.
    bracket : None or tuple
        Bracket with lower and upper bounds to use for scipy.optimize.brentq.
        If None, then a default bracket, currently [0.1, 10], is used.

    Returns
    -------
    float
        Value of the tuning parameter that achieves the required asymptotic
        relative efficiency.
    """
    if bracket is None:
        bracket = [0.1, 10]

    if not use_jump:
        def func(c):
            # kwds.update({kwd: c})
            # return _var_normal(norm(**kwds)) - 1 / eff
            norm._set_tuning_param(c, inplace=True)
            return _var_normal(norm) - 1 / eff
    else:
        def func(c):
            norm._set_tuning_param(c, inplace=True)
            return _var_normal_jump(norm) - 1 / eff

    res = optimize.brentq(func, *bracket)
    return res
Tuning parameter for RLM norms for required relative efficiency.

Parameters
----------
norm : instance of RobustNorm subclass
eff : float in (0, 1)
    Required asymptotic relative efficiency compared to least squares at the normal reference distribution. For example, ``eff=0.95`` for 95% efficiency.
kwd : str
    Name of keyword for tuning parameter.
kwargs : dict or None
    Dict for other keyword parameters.
use_jump : bool
    If False (default), then use the computation that requires a continuous psi function. If True, then use the computation that allows the psi function to have jump discontinuities.
bracket : None or tuple
    Bracket with lower and upper bounds to use for scipy.optimize.brentq. If None, then a default bracket, currently [0.1, 10], is used.

Returns
-------
float
    Value of the tuning parameter that achieves the required asymptotic relative efficiency.
_get_tuning_param
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
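The bracketed root-finding above can be sketched standalone: search for the Huber clipping constant whose variance factor equals `1 / eff`. Here `var_normal_huber` is an illustrative helper (not the statsmodels function); the recovered root is the textbook 95%-efficiency constant c ≈ 1.345:

```python
import numpy as np
from scipy import stats, optimize

def var_normal_huber(c):
    # variance factor of a Huber M-estimator at the standard normal
    num = stats.norm.expect(lambda x: np.clip(x, -c, c) ** 2)
    denom = (stats.norm.cdf(c) - stats.norm.cdf(-c)) ** 2  # E[psi']^2
    return num / denom

# root of var(c) - 1/eff on the same default bracket as _get_tuning_param
c95 = optimize.brentq(lambda c: var_normal_huber(c) - 1 / 0.95, 0.1, 10)
```

Brentq only needs the bracket endpoints to straddle the root, which [0.1, 10] comfortably does for the usual efficiency targets.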
def tuning_s_estimator_mean(norm, breakdown=None):
    """Tuning parameter and scale bias correction for S-estimators of mean.

    The reference distribution is the normal distribution.
    This requires a (hard) redescending norm, i.e. with finite max rho.

    Parameters
    ----------
    norm : instance of RobustNorm subclass
    breakdown : float or iterable of float in (0, 0.5]
        Desired breakdown point between 0 and 0.5.
        Default if breakdown is None is a list of breakdown points.

    Returns
    -------
    Holder instance with the following attributes:

     - `breakdown` : breakdown point
     - `eff` : relative efficiency
     - `param` : tuning parameter for norm
     - `scale_bias` : correction term for Fisher consistency.

    Notes
    -----
    Based on Rousseeuw and Leroy (1987). See table 19, p. 142, which can be
    replicated by this function for the TukeyBiweight norm. Note that the
    results of this function are based on computation without rounding to
    limited decimal precision, and differ in some cases in the last digit
    from the table by Rousseeuw and Leroy.

    Numerical expectation and root finding are based on scipy integrate and
    optimize.

    TODO: more options for details, numeric approximation and root finding.
    There is currently no feasibility check in functions.

    References
    ----------
    Rousseeuw, P. J., and A. M. Leroy. Robust Regression and Outlier
    Detection. Wiley, 1987.
    """
    if breakdown is None:
        bps = [0.5, 0.45, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.1, 0.05]
    else:
        # allow for scalar bp
        try:
            _ = iter(breakdown)
            bps = breakdown
        except TypeError:
            bps = [breakdown]

    def func(c):
        norm_ = norm
        norm_._set_tuning_param(c, inplace=True)
        bp = stats.norm.expect(lambda x: norm_.rho(x)) / norm_.max_rho()
        return bp

    res = []
    for bp in bps:
        c_bp = optimize.brentq(lambda c0: func(c0) - bp, 0.1, 10)
        norm._set_tuning_param(c_bp, inplace=True)  # inplace modification
        eff = 1 / _var_normal(norm)
        b = stats.norm.expect(lambda x: norm.rho(x))
        res.append([bp, eff, c_bp, b])

    if np.size(bps) > 1:
        res = np.asarray(res).T
    else:
        # use one list
        res = res[0]

    res2 = Holder(
        breakdown=res[0],
        eff=res[1],
        param=res[2],
        scale_bias=res[3],
        all=res,
    )

    return res2
Tuning parameter and scale bias correction for S-estimators of mean.

The reference distribution is the normal distribution. This requires a (hard) redescending norm, i.e. with finite max rho.

Parameters
----------
norm : instance of RobustNorm subclass
breakdown : float or iterable of float in (0, 0.5]
    Desired breakdown point between 0 and 0.5. Default if breakdown is None is a list of breakdown points.

Returns
-------
Holder instance with the following attributes:

 - `breakdown` : breakdown point
 - `eff` : relative efficiency
 - `param` : tuning parameter for norm
 - `scale_bias` : correction term for Fisher consistency.

Notes
-----
Based on Rousseeuw and Leroy (1987). See table 19, p. 142, which can be replicated by this function for the TukeyBiweight norm. Note that the results of this function are based on computation without rounding to limited decimal precision, and differ in some cases in the last digit from the table by Rousseeuw and Leroy.

Numerical expectation and root finding are based on scipy integrate and optimize.

TODO: more options for details, numeric approximation and root finding. There is currently no feasibility check in functions.

References
----------
Rousseeuw, P. J., and A. M. Leroy. Robust Regression and Outlier Detection. Wiley, 1987.
tuning_s_estimator_mean
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
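The breakdown computation can be checked self-contained by writing out the Tukey biweight rho and solving for the 50% breakdown constant; the classical tabulated value is c ≈ 1.5476. (`biweight_rho` and `breakdown` are illustrative names, not statsmodels API.)

```python
import numpy as np
from scipy import stats, optimize

def biweight_rho(x, c):
    # Tukey biweight rho, saturating at its maximum c**2 / 6
    x = np.minimum(np.abs(x), c)
    return c**2 / 6 * (1 - (1 - (x / c) ** 2) ** 3)

def breakdown(c):
    # E[rho(X)] / max_rho under the standard normal, as in func(c) above
    return stats.norm.expect(lambda x: biweight_rho(x, c)) / (c**2 / 6)

# tuning constant for a 50% breakdown S-estimator of the mean
c_bp50 = optimize.brentq(lambda c: breakdown(c) - 0.5, 0.1, 10)
```

The resulting constant matches the Rousseeuw and Leroy table entry for the biweight at breakdown 0.5.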
def scale_bias_cov_biw(c, k_vars): """Multivariate scale bias correction for TukeyBiweight norm. This uses the chisquare distribution as reference distribution for the squared Mahalanobis distance. """ p = k_vars # alias for formula chip, chip2, chip4, chip6 = stats.chi2.cdf(c**2, [p, p + 2, p + 4, p + 6]) b = p / 2 * chip2 - p * (p + 2) / (2 * c**2) * chip4 b += p * (p + 2) * (p + 4) / (6 * c**4) * chip6 + c**2 / 6 * (1 - chip) return b, b / (c**2 / 6)
Multivariate scale bias correction for TukeyBiweight norm. This uses the chisquare distribution as reference distribution for the squared Mahalanobis distance.
scale_bias_cov_biw
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
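The closed form in `scale_bias_cov_biw` follows from expanding the biweight rho as d²/2 − d⁴/(2c²) + d⁶/(6c⁴) for d ≤ c and taking chi-square partial moments. It can be verified against a direct numerical expectation (`biweight_rho` below is an illustrative helper):

```python
import numpy as np
from scipy import stats

def biweight_rho(d, c):
    # Tukey biweight rho, saturating at its maximum c**2 / 6
    d = np.minimum(np.abs(d), c)
    return c**2 / 6 * (1 - (1 - (d / c) ** 2) ** 3)

c, p = 5.0, 3
# closed form, mirroring scale_bias_cov_biw
chip, chip2, chip4, chip6 = stats.chi2.cdf(c**2, [p, p + 2, p + 4, p + 6])
b = p / 2 * chip2 - p * (p + 2) / (2 * c**2) * chip4
b += p * (p + 2) * (p + 4) / (6 * c**4) * chip6 + c**2 / 6 * (1 - chip)

# direct numerical expectation of rho(d) with d**2 ~ chi2_p
b_num = stats.chi2.expect(lambda u: biweight_rho(np.sqrt(u), c), args=(p,))
```

Both routes agree to numerical integration accuracy, which is a useful sanity check when modifying either form.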
def scale_bias_cov(norm, k_vars): """Multivariate scale bias correction. Parameter --------- norm : norm instance The rho function of the norm is used in the moment condition for estimating scale. k_vars : int Number of random variables in the multivariate data. Returns ------- scale_bias: float breakdown_point : float Breakdown point computed as scale bias divided by max rho. """ def rho(x): return norm.rho(np.sqrt(x)) scale_bias = stats.chi2.expect(rho, args=(k_vars,)) return scale_bias, scale_bias / norm.max_rho()
Multivariate scale bias correction. Parameter --------- norm : norm instance The rho function of the norm is used in the moment condition for estimating scale. k_vars : int Number of random variables in the multivariate data. Returns ------- scale_bias: float breakdown_point : float Breakdown point computed as scale bias divided by max rho.
scale_bias_cov
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def tuning_s_cov(norm, k_vars, breakdown_point=0.5, limits=()): """Tuning parameter for multivariate S-estimator given breakdown point. """ from .norms import TukeyBiweight # avoid circular import if not limits: limits = (0.5, 30) if isinstance(norm, TukeyBiweight): def func(c): return scale_bias_cov_biw(c, k_vars)[1] - breakdown_point else: norm = norm._set_tuning_param(2., inplace=False) # create copy def func(c): norm._set_tuning_param(c, inplace=True) return scale_bias_cov(norm, k_vars)[1] - breakdown_point p_tune = optimize.brentq(func, limits[0], limits[1]) return p_tune
Tuning parameter for multivariate S-estimator given breakdown point.
tuning_s_cov
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def eff_mvmean(norm, k_vars): """Efficiency for M-estimator of multivariate mean at normal distribution. This also applies to estimators that are locally equivalent to an M-estimator such as S- and MM-estimators. Parameters ---------- norm : instance of norm class k_vars : int Number of variables in multivariate random variable, i.e. dimension. Returns ------- eff : float Asymptotic relative efficiency of mean at normal distribution. alpha : float Numerical integral. Efficiency is beta**2 / alpha beta : float Numerical integral. Notes ----- This implements equ. (5.3) p. 1671 in Lopuhaä 1989 References ---------- .. [1] Lopuhaä, Hendrik P. 1989. “On the Relation between S-Estimators and M-Estimators of Multivariate Location and Covariance.” The Annals of Statistics 17 (4): 1662–83. """ k = k_vars # shortcut def f_alpha(d): return norm.psi(d) ** 2 / k def f_beta(d): return (1 - 1 / k) * norm.weights(d) + 1 / k * norm.psi_deriv(d) alpha = stats.chi(k).expect(f_alpha) beta = stats.chi(k).expect(f_beta) return beta**2 / alpha, alpha, beta
Efficiency for M-estimator of multivariate mean at normal distribution. This also applies to estimators that are locally equivalent to an M-estimator such as S- and MM-estimators. Parameters ---------- norm : instance of norm class k_vars : int Number of variables in multivariate random variable, i.e. dimension. Returns ------- eff : float Asymptotic relative efficiency of mean at normal distribution. alpha : float Numerical integral. Efficiency is beta**2 / alpha beta : float Numerical integral. Notes ----- This implements equ. (5.3) p. 1671 in Lopuhaä 1989 References ---------- .. [1] Lopuhaä, Hendrik P. 1989. “On the Relation between S-Estimators and M-Estimators of Multivariate Location and Covariance.” The Annals of Statistics 17 (4): 1662–83.
eff_mvmean
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
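Equation (5.3) can be evaluated for a concrete psi without the norm classes. The sketch below plugs a Huber psi into the same chi(k) expectations (`eff_mvmean_huber` is an illustrative name); as the clipping constant grows, the estimator approaches the sample mean and the efficiency approaches 1:

```python
import numpy as np
from scipy import stats

def eff_mvmean_huber(c, k):
    # numerical version of equ. (5.3) in Lopuhaä (1989) for a Huber psi;
    # the distance d is nonnegative under the chi distribution
    psi = lambda d: min(d, c)
    w = lambda d: psi(d) / d           # psi(d) / d, i.e. the weight function
    psi_d = lambda d: float(d <= c)    # derivative of psi
    alpha = stats.chi(k).expect(lambda d: psi(d) ** 2 / k)
    beta = stats.chi(k).expect(lambda d: (1 - 1 / k) * w(d) + (1 / k) * psi_d(d))
    return beta ** 2 / alpha

eff_small = eff_mvmean_huber(1.5, k=3)   # heavy clipping, efficiency < 1
eff_large = eff_mvmean_huber(8.0, k=3)   # nearly linear psi, efficiency -> 1
```

This also illustrates why the efficiency is monotone in the tuning constant, which is what makes the bracketed root-finding in `tuning_m_cov_eff` well posed.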
def eff_mvshape(norm, k_vars): """Efficiency of M-estimator of multivariate shape at normal distribution. This also applies to estimators that are locally equivalent to an M-estimator such as S- and MM-estimators. Parameters ---------- norm : instance of norm class k_vars : int Number of variables in multivariate random variable, i.e. dimension. Returns ------- eff : float Asymptotic relative efficiency of mean at normal distribution. alpha : float Numerical integral. Efficiency is beta**2 / alpha beta : float Numerical integral. Notes ----- This implements sigma_1 in equ. (5.5) p. 1671 in Lopuhaä 1989. Efficiency of shape is approximately 1 / sigma1. References ---------- .. [1] Lopuhaä, Hendrik P. 1989. “On the Relation between S-Estimators and M-Estimators of Multivariate Location and Covariance.” The Annals of Statistics 17 (4): 1662–83. """ k = k_vars # shortcut def f_a(d): return k * (k + 2) * norm.psi(d) ** 2 * d**2 def f_b(d): return norm.psi_deriv(d) * d**2 + (k + 1) * norm.psi(d) * d a = stats.chi(k).expect(f_a) b = stats.chi(k).expect(f_b) return b**2 / a, a, b
Efficiency of M-estimator of multivariate shape at normal distribution. This also applies to estimators that are locally equivalent to an M-estimator such as S- and MM-estimators. Parameters ---------- norm : instance of norm class k_vars : int Number of variables in multivariate random variable, i.e. dimension. Returns ------- eff : float Asymptotic relative efficiency of mean at normal distribution. alpha : float Numerical integral. Efficiency is beta**2 / alpha beta : float Numerical integral. Notes ----- This implements sigma_1 in equ. (5.5) p. 1671 in Lopuhaä 1989. Efficiency of shape is approximately 1 / sigma1. References ---------- .. [1] Lopuhaä, Hendrik P. 1989. “On the Relation between S-Estimators and M-Estimators of Multivariate Location and Covariance.” The Annals of Statistics 17 (4): 1662–83.
eff_mvshape
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def tuning_m_cov_eff(norm, k_vars, efficiency=0.95, eff_mean=True, limits=()):
    """Tuning parameter for multivariate M-estimator given efficiency.

    This also applies to estimators that are locally equivalent to an
    M-estimator such as S- and MM-estimators.

    Parameters
    ----------
    norm : instance of norm class
    k_vars : int
        Number of variables in multivariate random variable, i.e. dimension.
    efficiency : float < 1
        Desired asymptotic relative efficiency of mean estimator.
        Default is 0.95.
    eff_mean : bool
        If eff_mean is True (default), then the tuning parameter is chosen to
        achieve the efficiency of the mean estimate.
        If eff_mean is False, then the tuning parameter is chosen to achieve
        the efficiency of the shape estimate.
    limits : tuple
        Limits for rootfinding with scipy.optimize.brentq.
        In some cases the interval limits for rootfinding can be too small
        and not cover the root. Current default limits are (0.5, 30).

    Returns
    -------
    float
        Tuning parameter for the norm to achieve desired efficiency.

    Notes
    -----
    This uses numerical integration and rootfinding and will be relatively
    slow.
    """
    if not limits:
        limits = (0.5, 30)

    # make copy of norm
    norm = norm._set_tuning_param(1, inplace=False)

    if eff_mean:
        def func(c):
            norm._set_tuning_param(c, inplace=True)
            return eff_mvmean(norm, k_vars)[0] - efficiency
    else:
        def func(c):
            norm._set_tuning_param(c, inplace=True)
            return eff_mvshape(norm, k_vars)[0] - efficiency

    p_tune = optimize.brentq(func, limits[0], limits[1])
    return p_tune
Tuning parameter for multivariate M-estimator given efficiency.

This also applies to estimators that are locally equivalent to an M-estimator such as S- and MM-estimators.

Parameters
----------
norm : instance of norm class
k_vars : int
    Number of variables in multivariate random variable, i.e. dimension.
efficiency : float < 1
    Desired asymptotic relative efficiency of mean estimator. Default is 0.95.
eff_mean : bool
    If eff_mean is True (default), then the tuning parameter is chosen to achieve the efficiency of the mean estimate. If eff_mean is False, then the tuning parameter is chosen to achieve the efficiency of the shape estimate.
limits : tuple
    Limits for rootfinding with scipy.optimize.brentq. In some cases the interval limits for rootfinding can be too small and not cover the root. Current default limits are (0.5, 30).

Returns
-------
float
    Tuning parameter for the norm to achieve desired efficiency.

Notes
-----
This uses numerical integration and rootfinding and will be relatively slow.
tuning_m_cov_eff
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def tukeybiweight_mvmean_eff(k, eff, eff_mean=True): """tuning parameter for biweight norm to achieve efficiency for mv-mean. Uses values from precomputed table if available, otherwise computes it numerically and adds it to the module global dict. """ if eff_mean: table_dict = tukeybiweight_mvmean_eff_d else: table_dict = tukeybiweight_mvshape_eff_d try: tp = table_dict[(k, eff)] except KeyError: # compute and cache from .norms import TukeyBiweight # avoid circular import norm = TukeyBiweight(c=1) tp = tuning_m_cov_eff(norm, k, efficiency=eff, eff_mean=eff_mean) table_dict[(k, eff)] = tp return tp
tuning parameter for biweight norm to achieve efficiency for mv-mean. Uses values from precomputed table if available, otherwise computes it numerically and adds it to the module global dict.
tukeybiweight_mvmean_eff
python
statsmodels/statsmodels
statsmodels/robust/tools.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/tools.py
BSD-3-Clause
def _cabs(x): """absolute value function that changes complex sign based on real sign This could be useful for complex step derivatives of functions that need abs. Not yet used. """ sign = (x.real >= 0) * 2 - 1 return sign * x
absolute value function that changes complex sign based on real sign This could be useful for complex step derivatives of functions that need abs. Not yet used.
_cabs
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
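The sign trick in `_cabs` exists because plain `abs` destroys the imaginary part that complex-step differentiation relies on. A minimal illustration of that technique, independent of statsmodels:

```python
def complex_step_deriv(f, x, h=1e-20):
    # f'(x) from a single complex evaluation: Im(f(x + ih)) / h.
    # Unlike finite differences there is no subtractive cancellation,
    # so h can be taken far below machine epsilon.
    return (f(x + 1j * h)).imag / h

d1 = complex_step_deriv(lambda x: x**3 + 2 * x, 2.0)  # exact derivative is 14
```

A `_cabs`-style absolute value keeps this working for functions built from `abs`, where `np.abs` of a complex number would silently drop the derivative information.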
def rho(self, z): """ The robust criterion estimator function. Abstract method: -2 loglike used in M-estimator """ raise NotImplementedError
The robust criterion estimator function. Abstract method: -2 loglike used in M-estimator
rho
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi(self, z): """ Derivative of rho. Sometimes referred to as the influence function. Abstract method: psi = rho' """ raise NotImplementedError
Derivative of rho. Sometimes referred to as the influence function. Abstract method: psi = rho'
psi
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def weights(self, z): """ Returns the value of psi(z) / z Abstract method: psi(z) / z """ raise NotImplementedError
Returns the value of psi(z) / z Abstract method: psi(z) / z
weights
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z): """ Derivative of psi. Used to obtain robust covariance matrix. See statsmodels.rlm for more information. Abstract method: psi_derive = psi' """ raise NotImplementedError
Derivative of psi. Used to obtain robust covariance matrix. See statsmodels.rlm for more information. Abstract method: psi_derive = psi'
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def __call__(self, z): """ Returns the value of estimator rho applied to an input """ return self.rho(z)
Returns the value of estimator rho applied to an input
__call__
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
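Taken together, `rho`, `psi`, `weights`, and `psi_deriv` define the full norm interface that `__call__` fronts. A minimal concrete sketch for Huber's norm shows how the four pieces relate (`HuberNorm` is an illustrative class, not the statsmodels implementation):

```python
import numpy as np

class HuberNorm:
    """Minimal rho/psi/weights/psi_deriv implementation of Huber's norm."""

    def __init__(self, c=1.345):
        self.c = c

    def rho(self, z):
        # quadratic in the center, linear in the tails
        z = np.asarray(z, dtype=float)
        return np.where(np.abs(z) <= self.c,
                        0.5 * z**2,
                        self.c * np.abs(z) - 0.5 * self.c**2)

    def psi(self, z):
        # derivative of rho: clipped identity
        return np.clip(np.asarray(z, dtype=float), -self.c, self.c)

    def weights(self, z):
        # psi(z) / z, written as c / max(|z|, c) to avoid dividing by zero
        z = np.asarray(z, dtype=float)
        return self.c / np.maximum(np.abs(z), self.c)

    def psi_deriv(self, z):
        # 1 in the center, 0 in the tails
        return (np.abs(np.asarray(z, dtype=float)) <= self.c).astype(float)

    def __call__(self, z):
        return self.rho(z)

norm = HuberNorm()
```

Each method is the calculus companion of its neighbor: `psi` is `rho`'s derivative, `weights` rescales `psi` for IRLS, and `psi_deriv` feeds the robust covariance computation.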
def rho(self, z): """ The least squares estimator rho function Parameters ---------- z : ndarray 1d array Returns ------- rho : ndarray rho(z) = (1/2.)*z**2 """ return z**2 * 0.5
The least squares estimator rho function Parameters ---------- z : ndarray 1d array Returns ------- rho : ndarray rho(z) = (1/2.)*z**2
rho
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi(self, z):
    """
    The psi function for the least squares estimator

    The analytic derivative of rho

    Parameters
    ----------
    z : array_like
        1d array

    Returns
    -------
    psi : ndarray
        psi(z) = z
    """
    return np.asarray(z)
The psi function for the least squares estimator

The analytic derivative of rho

Parameters
----------
z : array_like
    1d array

Returns
-------
psi : ndarray
    psi(z) = z
psi
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def weights(self, z):
    """
    The least squares estimator weighting function for the IRLS algorithm.

    The psi function scaled by the input z

    Parameters
    ----------
    z : array_like
        1d array

    Returns
    -------
    weights : ndarray
        weights(z) = np.ones(z.shape)
    """
    z = np.asarray(z)
    return np.ones(z.shape, np.float64)
The least squares estimator weighting function for the IRLS algorithm.

The psi function scaled by the input z

Parameters
----------
z : array_like
    1d array

Returns
-------
weights : ndarray
    weights(z) = np.ones(z.shape)
weights
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z):
    """
    The derivative of the least squares psi function.

    Returns
    -------
    psi_deriv : ndarray
        ones(z.shape)

    Notes
    -----
    Used to estimate the robust covariance matrix.
    """
    z = np.asarray(z)
    return np.ones(z.shape, np.float64)
The derivative of the least squares psi function.

Returns
-------
psi_deriv : ndarray
    ones(z.shape)

Notes
-----
Used to estimate the robust covariance matrix.
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
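The four `LeastSquares` entries above form a consistent family: `psi` is the derivative of `rho`, `weights` is `psi(z) / z`, and `psi_deriv` is the derivative of `psi`. A standalone numpy sketch of those relationships (free functions rather than the statsmodels classes), checking `psi` against a numerical derivative of `rho`:

```python
import numpy as np

def rho(z):
    # least squares objective: rho(z) = 0.5 * z**2
    return 0.5 * np.asarray(z) ** 2

def psi(z):
    # psi = d(rho)/dz = z
    return np.asarray(z)

def weights(z):
    # weights = psi(z) / z = 1 everywhere: no observation is downweighted
    return np.ones(np.asarray(z).shape, np.float64)

z = np.linspace(-3.0, 3.0, 7)
h = 1e-6
num_deriv = (rho(z + h) - rho(z - h)) / (2 * h)  # central difference
print(np.allclose(num_deriv, psi(z)))  # True
print(np.allclose(weights(z), 1.0))    # True
```

The constant weights are what make least squares non-robust: every residual, however large, keeps full influence in the IRLS iteration.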
def _set_tuning_param(self, c, inplace=False):
    """Set and change the tuning parameter of the Norm.

    Warning: this needs to wipe cached attributes that
    depend on the param.
    """
    if inplace:
        self.t = c
        return self
    else:
        return self.__class__(t=c)
Set and change the tuning parameter of the Norm.

Warning: this needs to wipe cached attributes that depend on the param.
_set_tuning_param
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def _subset(self, z):
    """
    Huber's T is defined piecewise over the range for z
    """
    z = np.asarray(z)
    return np.less_equal(np.abs(z), self.t)
Huber's T is defined piecewise over the range for z
_subset
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z):
    """
    The derivative of Huber's t psi function

    Notes
    -----
    Used to estimate the robust covariance matrix.
    """
    return np.less_equal(np.abs(z), self.t).astype(float)
The derivative of Huber's t psi function

Notes
-----
Used to estimate the robust covariance matrix.
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
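The `_subset` and `psi_deriv` entries above characterize Huber's T: quadratic (psi linear) inside `|z| <= t`, linear (psi clipped) outside. A standalone numpy sketch of the resulting psi and weight functions; the tuning constant `t = 1.345` is a conventional choice assumed for this demo, not taken from the records:

```python
import numpy as np

t = 1.345  # common Huber tuning constant (assumed for the demo)

def subset(z):
    # True where |z| <= t: the quadratic region of Huber's T
    return np.less_equal(np.abs(z), t)

def psi(z):
    z = np.asarray(z)
    # linear inside the cutoff, clipped to +/- t outside
    return np.where(subset(z), z, t * np.sign(z))

def weights(z):
    z = np.asarray(z)
    # psi(z)/z: 1 inside the cutoff, t/|z| outside, so large
    # residuals are progressively downweighted
    return np.where(subset(z), 1.0, t / np.abs(z))

z = np.array([-3.0, -1.0, 0.5, 2.0])
print(psi(z))      # [-1.345 -1.     0.5    1.345]
print(weights(z))  # [0.44833... 1.  1.  0.6725]
```

Note that the IRLS weight for a residual of 3 is `1.345 / 3`, so an outlier still contributes, just with bounded influence, unlike the redescending norms below.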
def _set_tuning_param(self, c, inplace=False):
    """Set and change the tuning parameter of the Norm.

    Warning: this needs to wipe cached attributes that
    depend on the param.
    """
    # todo : change default to inplace=False, when tools are fixed
    if inplace:
        self.a = c
        return self
    else:
        return self.__class__(a=c)
Set and change the tuning parameter of the Norm.

Warning: this needs to wipe cached attributes that depend on the param.
_set_tuning_param
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z):
    """
    The derivative of Ramsay's Ea psi function.

    Notes
    -----
    Used to estimate the robust covariance matrix.
    """
    a = self.a
    x = np.exp(-a * np.abs(z))
    dx = -a * x * np.sign(z)
    y = z
    dy = 1
    return x * dy + y * dx
The derivative of Ramsay's Ea psi function.

Notes
-----
Used to estimate the robust covariance matrix.
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def _set_tuning_param(self, c, inplace=False):
    """Set and change the tuning parameter of the Norm.

    Warning: this needs to wipe cached attributes that
    depend on the param.
    """
    if inplace:
        self.a = c
        return self
    else:
        return self.__class__(a=c)
Set and change the tuning parameter of the Norm.

Warning: this needs to wipe cached attributes that depend on the param.
_set_tuning_param
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def _subset(self, z):
    """
    Andrew's wave is defined piecewise over the range of z.
    """
    z = np.asarray(z)
    return np.less_equal(np.abs(z), self.a * np.pi)
Andrew's wave is defined piecewise over the range of z.
_subset
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z):
    """
    The derivative of Andrew's wave psi function

    Notes
    -----
    Used to estimate the robust covariance matrix.
    """
    test = self._subset(z)
    return test * np.cos(z / self.a)
The derivative of Andrew's wave psi function

Notes
-----
Used to estimate the robust covariance matrix.
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
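The Andrew's wave entries imply a redescending psi: `_subset` restricts support to `|z| <= a * pi`, and `psi_deriv` is `cos(z / a)` there. A standalone numpy sketch assuming `psi(z) = a * sin(z / a)` inside the cutoff, which is the form consistent with that derivative (the constant `a = 1.339` is an assumption for the demo):

```python
import numpy as np

a = 1.339  # tuning constant assumed for the demo

def subset(z):
    # Andrew's wave redescends: only |z| <= a*pi contributes
    return np.less_equal(np.abs(z), a * np.pi)

def psi(z):
    z = np.asarray(z)
    # a*sin(z/a) inside the cutoff, exactly 0 outside, so gross
    # outliers get zero influence
    return np.where(subset(z), a * np.sin(z / a), 0.0)

def psi_deriv(z):
    z = np.asarray(z)
    return subset(z) * np.cos(z / a)

h = 1e-6
zin = np.array([0.5, 2.0])
num = (psi(zin + h) - psi(zin - h)) / (2 * h)  # numerical check
print(np.allclose(num, psi_deriv(zin), atol=1e-4))  # True
print(psi(np.array([10.0])))  # [0.] since 10 > a*pi ~ 4.21
```

The complete rejection beyond the cutoff is what distinguishes redescending norms like this from Huber's T.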
def _set_tuning_param(self, c, inplace=False):
    """Set and change the tuning parameter of the Norm.

    Warning: this needs to wipe cached attributes that
    depend on the param.
    """
    if inplace:
        self.c = c
        return self
    else:
        return self.__class__(c=c)
Set and change the tuning parameter of the Norm.

Warning: this needs to wipe cached attributes that depend on the param.
_set_tuning_param
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def _subset(self, z):
    """
    Least trimmed mean is defined piecewise over the range of z.
    """
    z = np.asarray(z)
    return np.less_equal(np.abs(z), self.c)
Least trimmed mean is defined piecewise over the range of z.
_subset
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
def psi_deriv(self, z):
    """
    The derivative of least trimmed mean psi function

    Notes
    -----
    Used to estimate the robust covariance matrix.
    """
    test = self._subset(z)
    return test
The derivative of least trimmed mean psi function

Notes
-----
Used to estimate the robust covariance matrix.
psi_deriv
python
statsmodels/statsmodels
statsmodels/robust/norms.py
https://github.com/statsmodels/statsmodels/blob/master/statsmodels/robust/norms.py
BSD-3-Clause
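The trimmed-mean entries above give weight 1 inside `|z| <= c` and 0 outside, so trimmed observations drop out of the IRLS fit entirely. A standalone sketch of that behavior; the cutoff `c = 2.0` is an illustrative choice for the demo:

```python
import numpy as np

c = 2.0  # illustrative cutoff (assumed for the demo)

def subset(z):
    # True where the observation is kept
    return np.less_equal(np.abs(z), c)

def weights(z):
    # observations beyond the cutoff are trimmed entirely (weight 0)
    return subset(np.asarray(z)).astype(float)

z = np.array([-3.0, -1.0, 0.0, 1.5, 4.0])
w = weights(z)
print(w)  # [0. 1. 1. 1. 0.]
# a trimmed mean: weighted average ignoring the trimmed points
print(np.sum(w * z) / np.sum(w))  # mean of [-1.0, 0.0, 1.5] = 0.1666...
```

Because the weight function is a hard 0/1 indicator, the estimator is simply the mean of the residuals that survive the cut, hence the name.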