def _save_helper(obj: UIElement | Sequence[UIElement], filename: PathLike, resources: Resources | None,
title: str | None, template: Template | str | None, theme: Theme | None = None) -> None:
    ''' Render ``obj`` to a standalone HTML document and write it to ``filename``. '''
from ..embed import file_html
html = file_html(obj, resources=resources, title=title, template=template or FILE, theme=theme)
with open(filename, mode="w", encoding="utf-8") as f:
        f.write(html)
def show(obj: UIElement | Application | ModifyDoc, browser: str | None = None, new: BrowserTarget = "tab",
notebook_handle: bool = False, notebook_url: str | ProxyUrlFunc = DEFAULT_JUPYTER_URL,
**kwargs: Any) -> CommsHandle | None:
'''Immediately display a Bokeh object or application.
:func:`show` may be called multiple times in a single Jupyter notebook
cell to display multiple objects. The objects are displayed in order.
Args:
obj (UIElement or Application or callable) :
A Bokeh object to display.
Bokeh plots, widgets, layouts (i.e. rows and columns) may be
passed to ``show`` in order to display them. If |output_file|
has been called, the output will be saved to an HTML file, which is also
opened in a new browser window or tab. If |output_notebook|
has been called in a Jupyter notebook, the output will be inline
in the associated notebook output cell.
In a Jupyter notebook, a Bokeh application or callable may also
be passed. A callable will be turned into an Application using a
``FunctionHandler``. The application will be run and displayed
inline in the associated notebook output cell.
browser (str, optional) :
            Specify the browser to use to open output files (default: None)
For file output, the **browser** argument allows for specifying
which browser to display in, e.g. "safari", "firefox", "opera",
"windows-default". Not all platforms may support this option, see
the documentation for the standard library
:doc:`webbrowser <python:library/webbrowser>` module for
more information
new (str, optional) :
Specify the browser mode to use for output files (default: "tab")
For file output, opens or raises the browser window showing the
current output file. If **new** is 'tab', then opens a new tab.
If **new** is 'window', then opens a new window.
notebook_handle (bool, optional) :
Whether to create a notebook interaction handle (default: False)
For notebook output, toggles whether a handle which can be used
with ``push_notebook`` is returned. Note that notebook handles
only apply to standalone plots, layouts, etc. They do not apply
when showing Applications in the notebook.
notebook_url (URL, optional) :
Location of the Jupyter notebook page (default: "localhost:8888")
When showing Bokeh applications, the Bokeh server must be
explicitly configured to allow connections originating from
different URLs. This parameter defaults to the standard notebook
host and port. If you are running on a different location, you
will need to supply this value for the application to display
properly. If no protocol is supplied in the URL, e.g. if it is
of the form "localhost:8888", then "http" will be used.
            ``notebook_url`` can also be a function that takes one int argument,
            the bound server port. If the port is provided, the function should
            return the full public URL to the Bokeh server. If None is passed,
            the function should return the origin URL.
If the environment variable JUPYTER_BOKEH_EXTERNAL_URL is set
to the external URL of a JupyterHub, notebook_url is overridden
with a callable which enables Bokeh to traverse the JupyterHub
proxy without specifying this parameter.
Some parameters are only useful when certain output modes are active:
* The ``browser`` and ``new`` parameters only apply when |output_file|
is active.
* The ``notebook_handle`` parameter only applies when |output_notebook|
is active, and non-Application objects are being shown. It is only
supported in Jupyter notebook and raises an exception for other notebook
types when it is True.
* The ``notebook_url`` parameter only applies when showing Bokeh
Applications in a Jupyter notebook.
* Any additional keyword arguments are passed to :class:`~bokeh.server.Server` when
showing a Bokeh app (added in version 1.1)
Returns:
When in a Jupyter notebook (with |output_notebook| enabled)
and ``notebook_handle=True``, returns a handle that can be used by
``push_notebook``, None otherwise.
'''
state = curstate()
if isinstance(obj, UIElement):
return _show_with_state(obj, state, browser, new, notebook_handle=notebook_handle)
def is_application(obj: Any) -> TypeGuard[Application]:
return getattr(obj, '_is_a_bokeh_application_class', False)
if is_application(obj) or callable(obj): # TODO (bev) check callable signature more thoroughly
# This ugliness is to prevent importing bokeh.application (which would bring
# in Tornado) just in order to show a non-server object
assert state.notebook_type is not None
return run_notebook_hook(state.notebook_type, 'app', obj, state, notebook_url, **kwargs)
    raise ValueError(_BAD_SHOW_MSG)
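
# --- Usage sketch, not part of the library source (assumes the public
# bokeh.plotting API; "demo.html" is a hypothetical filename).
def _show_example() -> None:
    from bokeh.plotting import figure, output_file

    output_file("demo.html")
    p = figure(title="demo")
    p.line([1, 2, 3], [1, 4, 9])
    show(p, new="window")  # save demo.html and open it in a new browser window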
def _show_file_with_state(obj: UIElement, state: State, new: BrowserTarget, controller: BrowserLike) -> None:
    ''' Save ``obj`` to an HTML file and open it with the given browser controller. '''
filename = save(obj, state=state)
    controller.open("file://" + filename, new=NEW_PARAM[new])
def _show_with_state(obj: UIElement, state: State, browser: str | None,
new: BrowserTarget, notebook_handle: bool = False) -> CommsHandle | None:
    ''' Display ``obj`` in the notebook, as an HTML file, or both, depending on ``state``. '''
controller = get_browser_controller(browser=browser)
comms_handle = None
shown = False
if state.notebook:
assert state.notebook_type is not None
comms_handle = run_notebook_hook(state.notebook_type, 'doc', obj, state, notebook_handle)
shown = True
if state.file or not shown:
_show_file_with_state(obj, state, new, controller)
    return comms_handle
def curstate() -> State:
''' Return the current State object
Returns:
State : the current default State object
'''
global _STATE
if _STATE is None:
_STATE = State()
    return _STATE
def default_filename(ext: str) -> str:
''' Generate a default filename with a given extension, attempting to use
the filename of the currently running process, if possible.
If the filename of the current process is not available (or would not be
writable), then a temporary file with the given extension is returned.
Args:
ext (str) : the desired extension for the filename
Returns:
str
Raises:
RuntimeError
            If the extension requested is ".py"
'''
if ext == "py":
raise RuntimeError("asked for a default filename with 'py' extension")
filename = detect_current_filename()
if filename is None:
return temp_filename(ext)
basedir = dirname(filename) or os.getcwd()
if _no_access(basedir) or _shares_exec_prefix(basedir):
return temp_filename(ext)
name, _ = splitext(basename(filename))
    return join(basedir, name + "." + ext)
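
# --- Usage sketch (not library source): default_filename prefers a path next to
# the running script, falling back to a temporary file when that is unavailable.
def _default_filename_example() -> None:
    out = default_filename("html")  # e.g. ".../<script name>.html" or a temp path
    print(out)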
def detect_current_filename() -> str | None:
''' Attempt to return the filename of the currently running Python process
Returns None if the filename cannot be detected.
'''
import inspect
filename = None
frame = inspect.currentframe()
if frame is not None:
try:
            while frame.f_back and frame.f_globals.get('__name__') != '__main__':
frame = frame.f_back
filename = frame.f_globals.get('__file__')
finally:
del frame
    return filename
def temp_filename(ext: str) -> str:
''' Generate a temporary, writable filename with the given extension
'''
# todo: not safe - the file is deleted before being written to so another
# process can generate the same filename
with NamedTemporaryFile(suffix="." + ext) as f:
        return f.name
def _no_access(basedir: str) -> bool:
''' Return True if the given base dir is not accessible or writeable
'''
    return not os.access(basedir, os.W_OK | os.X_OK)
def _shares_exec_prefix(basedir: str) -> bool:
    ''' Whether a given base directory is on the system exec prefix
'''
# XXX: exec_prefix has type str so why the check?
prefix: str | None = sys.exec_prefix
    return prefix is not None and basedir.startswith(prefix)
def process_example(cls: type[Any]) -> None:
    ''' Append a rendered example link to a class docstring when the class defines ``__example__``.
'''
if "__example__" in cls.__dict__:
        cls.__doc__ = append_docstring(cls.__doc__, _EXAMPLE_TEMPLATE.format(path=cls.__dict__["__example__"]))
def collect_filtered_models(discard: Callable[[Model], bool] | None, *input_values: Any) -> list[Model]:
''' Collect a duplicate-free list of all other Bokeh models referred to by
this model, or by any of its references, etc, unless filtered-out by the
provided callable.
Iterate over ``input_values`` and descend through their structure
collecting all nested ``Models`` on the go.
Args:
        discard (Callable[[Model], bool])
a callable which accepts a *Model* instance as its single argument
and returns a boolean stating whether to discard the instance. The
latter means that the instance will not be added to collected
models nor will its references be explored.
*input_values (Model)
Bokeh models to collect other models from
Returns:
list(Model)
'''
ids: set[ID] = set()
collected: list[Model] = []
queued: list[Model] = []
def queue_one(obj: Model) -> None:
if obj.id not in ids and not (callable(discard) and discard(obj)):
queued.append(obj)
for value in input_values:
visit_value_and_its_immediate_references(value, queue_one)
while queued:
obj = queued.pop(0)
if obj.id not in ids:
ids.add(obj.id)
collected.append(obj)
visit_immediate_value_references(obj, queue_one)
    return collected
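
# --- Usage sketch (not library source; the discard rule is arbitrary): collect
# every model reachable from a plot except the glyph renderers themselves.
def _collect_filtered_models_example() -> None:
    from bokeh.models import GlyphRenderer
    from bokeh.plotting import figure

    p = figure()
    p.scatter([1, 2], [3, 4])
    models = collect_filtered_models(lambda m: isinstance(m, GlyphRenderer), p)
    print(len(models))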
def collect_models(*input_values: Any) -> list[Model]:
''' Collect a duplicate-free list of all other Bokeh models referred to by
this model, or by any of its references, etc.
Iterate over ``input_values`` and descend through their structure
collecting all nested ``Models`` on the go. The resulting list is
duplicate-free based on objects' identifiers.
Args:
*input_values (Model)
Bokeh models to collect other models from
Returns:
list[Model] : all models reachable from this one.
'''
    return collect_filtered_models(None, *input_values)
def get_class(view_model_name: str) -> type[Model]:
''' Look up a Bokeh model class, given its view model name.
Args:
view_model_name (str) :
A view model name for a Bokeh model to look up
Returns:
Model: the model class corresponding to ``view_model_name``
Raises:
KeyError, if the model cannot be found
Example:
.. code-block:: python
>>> from bokeh.model import get_class
>>> get_class("Range1d")
<class 'bokeh.models.ranges.Range1d'>
'''
# In order to look up from the model catalog that Model maintains, it
# has to be created first. These imports ensure that all built-in Bokeh
# models are represented in the catalog.
from .. import models # noqa: F401
from .. import plotting # noqa: F401
from .model import Model
known_models = Model.model_class_reverse_map
if view_model_name in known_models:
return known_models[view_model_name]
else:
raise KeyError(f"View model name '{view_model_name}' not found") |
def visit_immediate_value_references(value: Any, visitor: Callable[[Model], None]) -> None:
    ''' Visit all references to another Model without recursing into any
    of the child Models; may visit the same Model more than once if
it's referenced more than once. Does not visit the passed-in value.
'''
if isinstance(value, HasProps):
for attr in value.properties_with_refs():
child = getattr(value, attr)
visit_value_and_its_immediate_references(child, visitor)
else:
        visit_value_and_its_immediate_references(value, visitor)
def visit_value_and_its_immediate_references(obj: Any, visitor: Callable[[Model], None]) -> None:
''' Visit Models, HasProps, and Python containers.
Recurses down HasProps references and Python containers (does not recurse
down Model subclasses).
The ordering in this function is to optimize performance. We check the
    most common types (int, float, str) first so that we can quickly return in
the common case. We avoid isinstance and issubclass checks in a couple
places with `type` checks because isinstance checks can be slow.
'''
from .model import Model
typ = type(obj)
if typ in {int, float, str}: # short circuit on common scalar types
return
if typ is list or issubclass(typ, list | tuple): # check common containers
for item in obj:
visit_value_and_its_immediate_references(item, visitor)
elif issubclass(typ, dict):
for key, value in obj.items():
visit_value_and_its_immediate_references(key, visitor)
visit_value_and_its_immediate_references(value, visitor)
elif issubclass(typ, HasProps):
if issubclass(typ, Model):
visitor(obj)
else:
# this isn't a Model, so recurse into it
visit_immediate_value_references(obj, visitor)
elif is_dataclass(obj):
for _, value in entries(obj):
            visit_value_and_its_immediate_references(value, visitor)
def _select_helper(args, kwargs):
""" Allow flexible selector syntax.
Returns:
dict
"""
if len(args) > 1:
raise TypeError("select accepts at most ONE positional argument.")
if len(args) > 0 and len(kwargs) > 0:
raise TypeError("select accepts EITHER a positional argument, OR keyword arguments (not both).")
if len(args) == 0 and len(kwargs) == 0:
raise TypeError("select requires EITHER a positional argument, OR keyword arguments.")
if args:
arg = args[0]
if isinstance(arg, dict):
selector = arg
elif isinstance(arg, str):
selector = dict(name=arg)
elif isinstance(arg, type) and issubclass(arg, Model):
selector = {"type": arg}
else:
raise TypeError("selector must be a dictionary, string or plot object.")
elif 'selector' in kwargs:
if len(kwargs) == 1:
selector = kwargs['selector']
else:
raise TypeError("when passing 'selector' keyword arg, not other keyword args may be present")
else:
selector = kwargs
    return selector
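
# --- Behavior sketch (not library source): the three accepted call styles.
# GlyphRenderer is just an example Model subclass.
def _select_helper_example() -> None:
    from bokeh.models import GlyphRenderer

    assert _select_helper(("mycircle",), {}) == {"name": "mycircle"}
    assert _select_helper((GlyphRenderer,), {}) == {"type": GlyphRenderer}
    assert _select_helper((), {"name": "mycircle"}) == {"name": "mycircle"}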
def GlyphRendererOf(*types: type[Model]):
""" Constraints ``GlyphRenderer.glyph`` to the given type or types. """
    return TypeOfAttr(Instance(GlyphRenderer), "glyph", Either(*(Instance(tp) for tp in types)))
def generate_structure_plot(f: Model) -> Model:
""" Given a bokeh model f, return a model that displays the graph of its submodels.
Clicking on the nodes of the graph reveals the attributes of that submodel.
"""
    return _BokehStructureGraph(f).model
def ColorMap(*args: Any, **kwargs: Any) -> PaletteSelect:
''' Color palette select widget.
.. deprecated:: 3.4.0
Use ``PaletteSelect`` widget instead.
'''
deprecated((3, 4, 0), "ColorMap widget", "PaletteSelect widget")
    return PaletteSelect(*args, **kwargs)
def contour_data(
x: ArrayLike | None = None,
y: ArrayLike | None = None,
z: ArrayLike | np.ma.MaskedArray | None = None,
levels: ArrayLike | None = None,
*,
want_fill: bool = True,
want_line: bool = True,
) -> ContourData:
''' Return the contour data of filled and/or line contours that can be
passed to :func:`bokeh.models.ContourRenderer.set_data`
'''
levels = _validate_levels(levels)
if len(levels) < 2:
want_fill = False
if not want_fill and not want_line:
raise ValueError("Neither fill nor line requested in contour_data")
coords = _contour_coords(x, y, z, levels, want_fill, want_line)
fill_data = None
if coords.fill_coords:
fill_coords = coords.fill_coords
fill_data = FillData(xs=fill_coords.xs, ys=fill_coords.ys, lower_levels=levels[:-1], upper_levels=levels[1:])
line_data = None
if coords.line_coords:
line_coords = coords.line_coords
line_data = LineData(xs=line_coords.xs, ys=line_coords.ys, levels=levels)
    return ContourData(fill_data, line_data)
def from_contour(
x: ArrayLike | None = None,
y: ArrayLike | None = None,
z: ArrayLike | np.ma.MaskedArray | None = None,
levels: ArrayLike | None = None,
**visuals, # This is union of LineProps, FillProps and HatchProps
) -> ContourRenderer:
''' Creates a :class:`bokeh.models.ContourRenderer` containing filled
polygons and/or contour lines.
Usually it is preferable to call :func:`~bokeh.plotting.figure.contour`
instead of this function.
Filled contour polygons are calculated if ``fill_color`` is set,
contour lines if ``line_color`` is set.
Args:
x (array-like[float] of shape (ny, nx) or (nx,), optional) :
The x-coordinates of the ``z`` values. May be 2D with the same
shape as ``z.shape``, or 1D with length ``nx = z.shape[1]``.
If not specified are assumed to be ``np.arange(nx)``. Must be
ordered monotonically.
y (array-like[float] of shape (ny, nx) or (ny,), optional) :
The y-coordinates of the ``z`` values. May be 2D with the same
shape as ``z.shape``, or 1D with length ``ny = z.shape[0]``.
If not specified are assumed to be ``np.arange(ny)``. Must be
ordered monotonically.
z (array-like[float] of shape (ny, nx)) :
A 2D NumPy array of gridded values to calculate the contours
of. May be a masked array, and any invalid values (``np.inf``
or ``np.nan``) will also be masked out.
levels (array-like[float]) :
The z-levels to calculate the contours at, must be increasing.
Contour lines are calculated at each level and filled contours
are calculated between each adjacent pair of levels so the
number of sets of contour lines is ``len(levels)`` and the
number of sets of filled contour polygons is ``len(levels)-1``.
**visuals: |fill properties|, |hatch properties| and |line properties|
Fill and hatch properties are used for filled contours, line
properties for line contours. If using vectorized properties
then the correct number must be used, ``len(levels)`` for line
properties and ``len(levels)-1`` for fill and hatch properties.
``fill_color`` and ``line_color`` are more flexible in that
they will accept longer sequences and interpolate them to the
required number using :func:`~bokeh.palettes.linear_palette`,
and also accept palette collections (dictionaries mapping from
integer length to color sequence) such as
`bokeh.palettes.Cividis`.
'''
levels = _validate_levels(levels)
if len(levels) < 2:
want_fill = False
nlevels = len(levels)
want_line = visuals.get("line_color", None) is not None
if want_line:
# Handle possible callback or interpolation for line_color.
visuals["line_color"] = _color(visuals["line_color"], nlevels)
line_cds = ColumnDataSource()
_process_sequence_literals(MultiLine, visuals, line_cds, False)
# Remove line visuals identified from visuals dict.
line_visuals = {}
for name in LineProps.properties():
prop = visuals.pop(name, None)
if prop is not None:
line_visuals[name] = prop
else:
visuals.pop("line_color", None)
want_fill = visuals.get("fill_color", None) is not None
if want_fill:
# Handle possible callback or interpolation for fill_color and hatch_color.
visuals["fill_color"] = _color(visuals["fill_color"], nlevels-1)
if "hatch_color" in visuals:
visuals["hatch_color"] = _color(visuals["hatch_color"], nlevels-1)
fill_cds = ColumnDataSource()
_process_sequence_literals(MultiPolygons, visuals, fill_cds, False)
else:
visuals.pop("fill_color", None)
# Check for extra unknown kwargs.
unknown = visuals.keys() - FillProps.properties() - HatchProps.properties()
if unknown:
raise ValueError(f"Unknown keyword arguments in 'from_contour': {', '.join(unknown)}")
new_contour_data = contour_data(x=x, y=y, z=z, levels=levels, want_fill=want_fill, want_line=want_line)
# Will be other possibilities here like logarithmic....
contour_renderer = ContourRenderer(
fill_renderer=GlyphRenderer(glyph=MultiPolygons(), data_source=ColumnDataSource()),
line_renderer=GlyphRenderer(glyph=MultiLine(), data_source=ColumnDataSource()),
levels=list(levels))
contour_renderer.set_data(new_contour_data)
if new_contour_data.fill_data:
glyph = contour_renderer.fill_renderer.glyph
for name, value in visuals.items():
setattr(glyph, name, value)
cds = contour_renderer.fill_renderer.data_source
for name, value in fill_cds.data.items():
cds.add(value, name)
glyph.line_alpha = 0 # Don't display lines around fill.
glyph.line_width = 0
if new_contour_data.line_data:
glyph = contour_renderer.line_renderer.glyph
for name, value in line_visuals.items():
setattr(glyph, name, value)
cds = contour_renderer.line_renderer.data_source
for name, value in line_cds.data.items():
cds.add(value, name)
    return contour_renderer
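
# --- Usage sketch (not library source; data, levels, and palette are arbitrary):
# build a contour renderer from a gridded function and add it to a figure. The
# Cividis palette collection is accepted per the docstring above.
def _from_contour_example() -> None:
    import numpy as np
    from bokeh.palettes import Cividis
    from bokeh.plotting import figure

    x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 50), np.linspace(0, 2 * np.pi, 50))
    z = np.sin(x) * np.cos(y)
    levels = np.linspace(-1.0, 1.0, 9)

    fig = figure()
    renderer = from_contour(x, y, z, levels, fill_color=Cividis, line_color="black")
    fig.renderers.append(renderer)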
def _contour_coords(
x: ArrayLike | None,
y: ArrayLike | None,
z: ArrayLike | np.ma.MaskedArray | None,
levels: ArrayLike,
want_fill: bool,
want_line: bool,
) -> ContourCoords:
'''
Return the (xs, ys) coords of filled and/or line contours.
'''
if not want_fill and not want_line:
raise RuntimeError("Neither fill nor line requested in _contour_coords")
from contourpy import FillType, LineType, contour_generator
cont_gen = contour_generator(x, y, z, line_type=LineType.ChunkCombinedNan, fill_type=FillType.OuterOffset)
fill_coords = None
if want_fill:
all_xs = []
all_ys = []
for i in range(len(levels)-1):
filled = cont_gen.filled(levels[i], levels[i+1])
# This is guaranteed by use of fill_type=FillType.OuterOffset in contour_generator call.
filled = cast("FillReturn_OuterOffset", filled)
coords = _filled_to_coords(filled)
all_xs.append(coords.xs)
all_ys.append(coords.ys)
fill_coords = FillCoords(all_xs, all_ys)
line_coords = None
if want_line:
all_xs = []
all_ys = []
for level in levels:
lines = cont_gen.lines(level)
# This is guaranteed by use of line_type=LineType.ChunkCombinedNan in contour_generator call.
lines = cast("LineReturn_ChunkCombinedNan", lines)
coords = _lines_to_coords(lines)
all_xs.append(coords.xs)
all_ys.append(coords.ys)
line_coords = LineCoords(all_xs, all_ys)
    return ContourCoords(fill_coords, line_coords)
def gmap(google_api_key, map_options, **kwargs) -> GMap:
''' Create a new :class:`~bokeh.plotting.GMap` for plotting.
Args:
google_api_key (str):
Google requires an API key be supplied for maps to function. See:
https://developers.google.com/maps/documentation/javascript/get-api-key
The Google API key will be stored as a base64-encoded string in
the Bokeh Document JSON.
map_options: (:class:`~bokeh.models.map_plots.GMapOptions`)
Configuration specific to a Google Map
All other keyword arguments are passed to :class:`~bokeh.plotting.GMap`.
Returns:
:class:`~bokeh.plotting.GMap`
'''
    return GMap(api_key=google_api_key, map_options=map_options, **kwargs)
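
# --- Usage sketch (not library source): the API key string is a placeholder
# that must be replaced with a real key.
def _gmap_example() -> None:
    from bokeh.models import GMapOptions

    options = GMapOptions(lat=30.2861, lng=-97.7394, map_type="roadmap", zoom=11)
    p = gmap("GOOGLE_API_KEY", options, title="Austin")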
def from_networkx(graph: nx.Graph, layout_function: dict[int | str, Sequence[float]], **kwargs: Any) -> GraphRenderer:
'''
Generate a ``GraphRenderer`` from a ``networkx.Graph`` object and networkx
layout function. Any keyword arguments will be passed to the
layout function.
Only two-dimensional layouts are supported.
Args:
graph (networkx.Graph) : a networkx graph to render
layout_function (function or dict) : a networkx layout function or mapping of node keys to positions.
The position is a two element sequence containing the x and y coordinate.
Returns:
instance (GraphRenderer)
.. note::
Node and edge attributes may be lists or tuples. However, a given
attribute must either have *all* lists or tuple values, or *all*
scalar values, for nodes or edges it is defined on.
.. warning::
Node attributes labeled 'index' and edge attributes labeled 'start' or 'end' are ignored.
If you want to convert these attributes, please re-label them to other names.
Raises:
ValueError
'''
# Handles nx 1.x vs 2.x data structure change
# Convert node attributes
node_dict = dict()
node_attr_keys = [attr_key for node in list(graph.nodes(data=True))
for attr_key in node[1].keys()]
node_attr_keys = list(set(node_attr_keys))
for attr_key in node_attr_keys:
values = [node_attr[attr_key] if attr_key in node_attr.keys() else None
for _, node_attr in graph.nodes(data=True)]
values = _handle_sublists(values)
node_dict[attr_key] = values
if 'index' in node_attr_keys:
warn("Converting node attributes labeled 'index' are skipped. "
"If you want to convert these attributes, please re-label with other names.")
node_dict['index'] = list(graph.nodes())
# Convert edge attributes
edge_dict = dict()
edge_attr_keys = [attr_key for edge in graph.edges(data=True)
for attr_key in edge[2].keys()]
edge_attr_keys = list(set(edge_attr_keys))
for attr_key in edge_attr_keys:
values = [edge_attr[attr_key] if attr_key in edge_attr.keys() else None
for _, _, edge_attr in graph.edges(data=True)]
values = _handle_sublists(values)
edge_dict[attr_key] = values
if 'start' in edge_attr_keys or 'end' in edge_attr_keys:
warn("Converting edge attributes labeled 'start' or 'end' are skipped. "
"If you want to convert these attributes, please re-label them with other names.")
edge_dict['start'] = [x[0] for x in graph.edges()]
edge_dict['end'] = [x[1] for x in graph.edges()]
graph_renderer = GraphRenderer()
graph_renderer.node_renderer.data_source.data = node_dict
graph_renderer.edge_renderer.data_source.data = edge_dict
    if callable(layout_function):
        graph_layout = layout_function(graph, **kwargs)
    else:
        graph_layout = layout_function
        node_keys = graph_renderer.node_renderer.data_source.data['index']
        if set(node_keys) != set(graph_layout.keys()):
            warn("Node keys in 'layout_function' don't match node keys in the graph. "
                 "These nodes may not be displayed correctly.")
graph_renderer.layout_provider = StaticLayoutProvider(graph_layout=graph_layout)
    return graph_renderer
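
# --- Usage sketch (not library source; graph and layout are arbitrary): render
# a networkx graph with a spring layout, passing layout kwargs through.
def _from_networkx_example() -> None:
    import networkx as nx

    G = nx.karate_club_graph()
    renderer = from_networkx(G, nx.spring_layout, scale=2, center=(0, 0))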
def markers():
''' Prints a list of valid marker types for scatter()
Returns:
None
'''
print("Available markers: \n\n - " + "\n - ".join(list(MarkerType)))
print()
print("Shortcuts: \n\n" + "\n".join(f" {short!r}: {name}" for (short, name) in _MARKER_SHORTCUTS.items())) |
def pop_visuals(glyphclass, props, prefix="", defaults={}, override_defaults={}):
"""
Applies basic cascading logic to deduce properties for a glyph.
Args:
glyphclass :
the type of glyph being handled
props (dict) :
Maps properties and prefixed properties to their values.
Keys in `props` matching `glyphclass` visual properties (those of
'line_', 'fill_', 'hatch_' or 'text_') with added `prefix` will get
popped, other keys will be ignored.
Keys take the form '[{prefix}][{feature}_]{trait}'. Only {feature}
must not contain underscores.
Keys of the form '{prefix}{trait}' work as lower precedence aliases
for {trait} for all {features}, as long as the glyph has no
property called {trait}. I.e. this won't apply to "width" in a
`rect` glyph.
Ex: {'fill_color': 'blue', 'selection_line_width': 0.5}
prefix (str) :
Prefix used when accessing `props`. Ex: 'selection_'
override_defaults (dict) :
Explicitly provided fallback based on '{trait}', in case property
not set in `props`.
Ex. 'width' here may be used for 'selection_line_width'.
defaults (dict) :
Property fallback, in case prefixed property not in `props` or
`override_defaults`.
Ex. 'line_width' here may be used for 'selection_line_width'.
Returns:
result (dict) :
Resulting properties for the instance (no prefixes).
Notes:
Feature trait 'text_color', as well as traits 'color' and 'alpha', have
ultimate defaults in case those can't be deduced.
"""
defaults = defaults.copy()
defaults.setdefault('text_color', 'black')
defaults.setdefault('hatch_color', 'black')
trait_defaults = {}
trait_defaults.setdefault('color', get_default_color())
trait_defaults.setdefault('alpha', 1.0)
result, traits = dict(), set()
prop_names = set(glyphclass.properties())
for name in filter(_is_visual, prop_names):
_, trait = _split_feature_trait(name)
# e.g. "line_color", "selection_fill_alpha"
if prefix+name in props:
result[name] = props.pop(prefix+name)
# e.g. "nonselection_alpha"
elif trait not in prop_names and prefix+trait in props:
result[name] = props[prefix+trait]
# e.g. an alpha to use for nonselection if none is provided
elif trait in override_defaults:
result[name] = override_defaults[trait]
        # e.g. use values from the main glyph
elif name in defaults:
result[name] = defaults[name]
        # e.g. not specified anywhere else
elif trait in trait_defaults:
result[name] = trait_defaults[trait]
if trait not in prop_names:
traits.add(trait)
for trait in traits:
props.pop(prefix+trait, None)
    return result
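
# --- Behavior sketch (not library source; Scatter is just an example glyph):
# explicit properties win, then a bare 'color'/'alpha' trait cascades to every
# matching visual property.
def _pop_visuals_example() -> None:
    from bokeh.models import Scatter

    props = {"color": "red", "fill_alpha": 0.3}
    result = pop_visuals(Scatter, props)
    assert result["fill_alpha"] == 0.3    # explicit property
    assert result["line_color"] == "red"  # 'color' cascades to line_color
    assert result["fill_color"] == "red"  # ... and to fill_color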
def _split_feature_trait(ft):
"""Feature is up to first '_'. Ex. 'line_color' => ['line', 'color']"""
ft = ft.split('_', 1)
    return ft if len(ft)==2 else [*ft, None]
def _is_visual(ft):
"""Whether a feature trait name is visual"""
feature, trait = _split_feature_trait(ft)
    return feature in ('line', 'fill', 'hatch', 'text', 'global') and trait is not None
def process_active_tools(toolbar: Toolbar, tool_map: dict[str, Tool],
active_drag: ActiveDrag, active_inspect: ActiveInspect, active_scroll: ActiveScroll,
active_tap: ActiveTap, active_multi: ActiveMulti) -> None:
""" Adds tools to the plot object
Args:
toolbar (Toolbar): instance of a Toolbar object
        tool_map (dict[str]): tool_map from _process_tools_arg
active_drag (str, None, "auto" or Tool): the tool to set active for drag
active_inspect (str, None, "auto", Tool or Tool[]): the tool to set active for inspect
active_scroll (str, None, "auto" or Tool): the tool to set active for scroll
active_tap (str, None, "auto" or Tool): the tool to set active for tap
        active_multi (str, None, "auto" or Tool): the tool to set active for multi-gestures
Returns:
None
Note:
This function sets properties on Toolbar
"""
if active_drag in ["auto", None] or isinstance(active_drag, Tool):
toolbar.active_drag = cast(Any, active_drag)
elif active_drag in tool_map:
toolbar.active_drag = cast(Any, tool_map[active_drag])
else:
raise ValueError(f"Got unknown {active_drag!r} for 'active_drag', which was not a string supplied in 'tools' argument")
if active_inspect in ["auto", None] or isinstance(active_inspect, Tool) or \
(isinstance(active_inspect, list) and all(isinstance(t, Tool) for t in active_inspect)):
toolbar.active_inspect = cast(Any, active_inspect)
elif isinstance(active_inspect, str) and active_inspect in tool_map:
toolbar.active_inspect = cast(Any, tool_map[active_inspect])
else:
raise ValueError(f"Got unknown {active_inspect!r} for 'active_inspect', which was not a string supplied in 'tools' argument")
if active_scroll in ["auto", None] or isinstance(active_scroll, Tool):
toolbar.active_scroll = cast(Any, active_scroll)
elif active_scroll in tool_map:
toolbar.active_scroll = cast(Any, tool_map[active_scroll])
else:
raise ValueError(f"Got unknown {active_scroll!r} for 'active_scroll', which was not a string supplied in 'tools' argument")
if active_tap in ["auto", None] or isinstance(active_tap, Tool):
toolbar.active_tap = cast(Any, active_tap)
elif active_tap in tool_map:
toolbar.active_tap = cast(Any, tool_map[active_tap])
else:
raise ValueError(f"Got unknown {active_tap!r} for 'active_tap', which was not a string supplied in 'tools' argument")
if active_multi in ["auto", None] or isinstance(active_multi, Tool):
toolbar.active_multi = cast(Any, active_multi)
elif active_multi in tool_map:
toolbar.active_multi = cast(Any, tool_map[active_multi])
else:
raise ValueError(f"Got unknown {active_multi!r} for 'active_multi', which was not a string supplied in 'tools' argument") |
def process_tools_arg(plot: Plot, tools: str | Sequence[Tool | str],
tooltips: str | tuple[str, str] | None = None) -> tuple[list[Tool], dict[str, Tool]]:
""" Adds tools to the plot object
Args:
plot (Plot): instance of a plot object
        tools (seq[Tool or str]|str): list of tool types or string listing the
            tool names. These are converted to actual Tool instances.
tooltips (string or seq[tuple[str, str]], optional):
tooltips to use to configure a HoverTool
Returns:
list of Tools objects added to plot, map of supplied string names to tools
"""
tool_objs, tool_map = _resolve_tools(tools)
repeated_tools = [ str(obj) for obj in _collect_repeated_tools(tool_objs) ]
if repeated_tools:
warn(f"{','.join(repeated_tools)} are being repeated")
if tooltips is not None:
for tool_obj in tool_objs:
if isinstance(tool_obj, HoverTool):
tool_obj.tooltips = tooltips # type: ignore
break
else:
tool_objs.append(HoverTool(tooltips=tooltips))
    return tool_objs, tool_map
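
# --- Usage sketch (not library source; tool names and tooltips are arbitrary):
# resolve a tools string against a figure, adding a HoverTool for the tooltips.
def _process_tools_arg_example() -> None:
    from bokeh.plotting import figure

    p = figure(tools="")
    tool_objs, tool_map = process_tools_arg(
        p, "pan,wheel_zoom,hover", tooltips=[("x", "$x"), ("y", "$y")])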
def _read_data() -> pd.DataFrame:
    ''' Read the airports JSON data and flatten it into a DataFrame. '''
import pandas as pd
with open(external_path('airports.json')) as f:
content = f.read()
airports = json.loads(content)
schema: Any = [['attributes', 'nam'], ['attributes', 'zv3'], ['geometry', 'x'], ['geometry', 'y']]
data = pd.json_normalize(airports['features'], meta=schema)
data.rename(columns={'attributes.nam': 'name', 'attributes.zv3': 'elevation'}, inplace=True)
data.rename(columns={'geometry.x': 'x', 'geometry.y': 'y'}, inplace=True)
    return data
def _read_data() -> pd.DataFrame:
    ''' Parse the module-level ``CSV`` string into a DataFrame. '''
import pandas as pd
    return pd.read_csv(StringIO(CSV), skiprows=1, skipinitialspace=True, engine='python')
def _read_data() -> pd.DataFrame:
    ''' Parse the module-level ``CSV`` string into a DataFrame. '''
import pandas as pd
    return pd.read_csv(StringIO(CSV), skiprows=1, skipinitialspace=True, engine='python')
def _clean_data(df: pd.DataFrame) -> pd.DataFrame:
    ''' Normalize manufacturer names and map numeric origin codes to region names. '''
df = df.copy()
df['mfr'] = [x.split()[0] for x in df.name]
df.loc[df.mfr == 'chevy', 'mfr'] = 'chevrolet'
df.loc[df.mfr == 'chevroelt', 'mfr'] = 'chevrolet'
df.loc[df.mfr == 'maxda', 'mfr'] = 'mazda'
df.loc[df.mfr == 'mercedes-benz', 'mfr'] = 'mercedes'
df.loc[df.mfr == 'toyouta', 'mfr'] = 'toyota'
df.loc[df.mfr == 'vokswagen', 'mfr'] = 'volkswagen'
df.loc[df.mfr == 'vw', 'mfr'] = 'volkswagen'
ORIGINS = ['North America', 'Europe', 'Asia']
df.origin = [ORIGINS[x-1] for x in df.origin]
    return df
def _capitalize_words(string: str) -> str:
    ''' Capitalize each space-separated word in ``string``. '''
return " ".join(word.capitalize() for word in string.split(" ")) |
def _read_data() -> DataFrame:
    ''' Read the auto-mpg2 CSV data and tidy its columns. '''
df = package_csv('autompg2', 'auto-mpg2.csv').copy()
df["manufacturer"] = df["manufacturer"].map(_capitalize_words)
df["model"] = df["model"].map(_capitalize_words)
df["drv"] = df["drv"].replace({"f": "front", "r": "rear", "4": "4x4"})
    return df
def _read_data() -> tuple[DataFrame, dict[str, bytes]]:
    ''' Read browser market share data and the matching browser icons. '''
df = package_csv('browsers', 'browsers_nov_2013.csv', names=["Version", "Share"], skiprows=1)
_versions = df.Version.map(lambda x: x.rsplit(" ", 1))
df["Browser"] = _versions.map(lambda x: x[0])
df["VersionNumber"] = _versions.map(lambda x: x[1] if len(x) == 2 else "0")
icons = {}
for browser in ["Chrome", "Firefox", "Safari", "Opera", "IE"]:
with open(package_path(join("icons", browser.lower() + "_32x32.png")), "rb") as icon:
icons[browser] = icon.read()
    return df, icons
def _read_data() -> pd.DataFrame:
    ''' Read the commits data, converting timestamps to US/Central. '''
import pandas as pd
data = package_csv('commits', 'commits.txt.gz', parse_dates=True, header=None, names=['day', 'datetime'], index_col='datetime')
data.index = cast(Any, pd.to_datetime(data.index, utc=True).tz_convert('US/Central'))
data['time'] = data.index.time # type: ignore[attr-defined]
    return data
def _read_data() -> DataFrame:
    ''' Read the Warsaw 2013 daylight data, parsing dates and times. '''
import pandas as pd
df = package_csv('daylight', 'daylight_warsaw_2013.csv', parse_dates=False)
df["Date"] = pd.to_datetime(df.Date).map(lambda x: pd.to_datetime(x).date())
df["Sunrise"] = pd.to_datetime(df.Sunrise, format="%H:%M:%S").map(lambda x: x.time())
df["Sunset"] = pd.to_datetime(df.Sunset, format="%H:%M:%S").map(lambda x: x.time())
    return df
def _read_data() -> DataFrame:
    ''' Read and tidy the UN quinquennial population data. '''
df = external_csv('population', 'WPP2012_SA_DB03_POPULATION_QUINQUENNIAL.csv', encoding="CP1250")
df = df[df.Sex != "Both"]
df = df.drop(["VarID", "Variant", "MidPeriod", "SexID", "AgeGrpSpan"], axis=1)
df = df.rename(columns={"Time": "Year"})
df.Value *= 1000
    return df
def _read_data(name: str) -> StockData:
    ''' Read OHLCV stock data for ``name`` from the matching CSV file. '''
filename = external_path(name + '.csv')
data = StockData(
date = [],
open = [],
high = [],
low = [],
close = [],
volume = [],
adj_close = [],
)
with open_csv(filename) as f:
next(f)
reader = csv.reader(f, delimiter=",")
for row in reader:
date, open_price, high, low, close, volume, adj_close = row
data['date'].append(date)
data['open'].append(float(open_price))
data['high'].append(float(high))
data['low'].append(float(low))
data['close'].append(float(close))
data['volume'].append(int(volume))
data['adj_close'].append(float(adj_close))
    return data
def _read_data() -> dict[tuple[State, County], float]:
    ''' Read county unemployment rates keyed by (state id, county id). '''
data: dict[tuple[State, County], float] = {}
with open_csv(external_path("unemployment09.csv")) as f:
reader = csv.reader(f, delimiter=",", quotechar='"')
for row in reader:
_, state_id, county_id, _, _, _, _, _, rate = row
data[(int(state_id), int(county_id))] = float(rate)
    return data
def _read_data() -> dict[tuple[State, County], CountyData]:
    ''' Read US county boundary data keyed by (state id, county id). '''
data: dict[tuple[State, County], CountyData] = {}
with open_csv(external_path('US_Counties.csv')) as f:
next(f)
reader = csv.reader(f, delimiter=",", quotechar='"')
for row in reader:
name, _, state, _, geometry, _, _, _, det_name, state_id, county_id, _, _ = row
xml = et.fromstring(geometry)
lats: list[float] = []
lons: list[float] = []
for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):
if i > 0:
lats.append(nan)
lons.append(nan)
assert isinstance(poly.text, str)
coords = (c.split(',')[:2] for c in poly.text.split())
lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in coords]))
lats.extend(lat)
lons.extend(lon)
data[(int(state_id), int(county_id))] = CountyData(
name = name,
detailed_name = det_name,
state = state,
lats = np.array(lats),
lons = np.array(lons),
)
    return data
def _read_data() -> list[tuple[datetime, str]]:
    ''' Parse US holiday (date, summary) pairs from the packaged iCalendar file. '''
ic = import_required('icalendar', "us_holidays data requires icalendar (http://icalendar.readthedocs.org) to be installed")
with open(package_path("USHolidays.ics")) as f:
data = ic.Calendar.from_ical(f.read())
    return sorted((comp.get("dtstart").dt, str(comp.get("summary"))) for comp in data.walk() if comp.name == "VEVENT")
def _read_data() -> DataFrame:
    ''' Read the US marriages/divorces data, interpolating missing values. '''
data = package_csv('us_marriages_divorces', 'us_marriages_divorces.csv')
    return data.interpolate(method='linear', axis=0).ffill().bfill()
def _read_data() -> dict[State, StateData]:
    ''' Read US state and region boundary data keyed by state code. '''
data: dict[State, StateData] = {}
with gzip.open(package_path('US_Regions_State_Boundaries.csv.gz')) as f:
decoded = codecs.iterdecode(f, "utf-8")
next(decoded)
reader = csv.reader(decoded, delimiter=",", quotechar='"')
for row in reader:
region, name, code, geometry, _ = row
xml = et.fromstring(geometry)
lats: list[float] = []
lons: list[float] = []
for i, poly in enumerate(xml.findall('.//outerBoundaryIs/LinearRing/coordinates')):
if i > 0:
lats.append(nan)
lons.append(nan)
assert isinstance(poly.text, str)
coords = (c.split(',')[:2] for c in poly.text.split())
lat, lon = list(zip(*[(float(lat), float(lon)) for lon, lat in coords]))
lats.extend(lat)
lons.extend(lon)
data[code] = StateData(
name = name,
region = region,
lats = np.array(lats),
lons = np.array(lons),
)
    return data
def load_auth_module(module_path: PathLike) -> ModuleType:
''' Load a Python source file at a given path as a module.
    Args:
        module_path (str): path to a Python source file
    Returns:
        module
'''
module_name = "bokeh.auth_" + make_globally_unique_id().replace('-', '')
spec = importlib.util.spec_from_file_location(module_name, module_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
    return module
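
# --- Usage sketch (not library source; "auth.py" is a hypothetical file that
# defines Bokeh auth hooks such as get_user or login_url):
def _load_auth_module_example() -> None:
    auth = load_auth_module("auth.py")
    get_user = getattr(auth, "get_user", None)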
def probably_relative_url(url: str) -> bool:
''' Return True if a URL is not one of the common absolute URL formats.
    Args:
        url (str): a URL string
    Returns:
        bool
'''
    return not url.startswith(("http://", "https://", "//"))
def _needs_document_lock(func: F) -> F:
'''Decorator that adds the necessary locking and post-processing
to manipulate the session's document. Expects to decorate a
method on ServerSession and transforms it into a coroutine
if it wasn't already.
'''
@wraps(func)
async def _needs_document_lock_wrapper(self: ServerSession, *args, **kwargs):
# while we wait for and hold the lock, prevent the session
# from being discarded. This avoids potential weirdness
# with the session vanishing in the middle of some async
# task.
if self.destroyed:
log.debug("Ignoring locked callback on already-destroyed session.")
return None
self.block_expiration()
try:
with await self._lock.acquire():
if self._pending_writes is not None:
raise RuntimeError("internal class invariant violated: _pending_writes " + \
"should be None if lock is not held")
self._pending_writes = []
try:
result = func(self, *args, **kwargs)
if inspect.isawaitable(result):
# Note that this must not be outside of the critical section.
                        # Otherwise, the async callback will be run without document locking.
result = await result
finally:
# we want to be very sure we reset this or we'll
# keep hitting the RuntimeError above as soon as
# any callback goes wrong
pending_writes = self._pending_writes
self._pending_writes = None
for p in pending_writes:
await p
return result
finally:
self.unblock_expiration()
    return _needs_document_lock_wrapper
def current_time() -> float:
    '''Return a monotonic clock reading in milliseconds as a floating
    point number.
    '''
    return time.monotonic() * 1000
def bind_sockets(address: str | None, port: int) -> tuple[list[socket], int]:
''' Bind a socket to a port on an address.
Args:
address (str) :
An address to bind a port on, e.g. ``"localhost"``
port (int) :
A port number to bind.
Pass 0 to have the OS automatically choose a free port.
    This function returns a 2-tuple with the list of new sockets as the first
    element, and the port that was bound as the second. (Useful when passing 0
    as a port number to bind any free port.)
    Returns:
        (sockets, port)
'''
ss = netutil.bind_sockets(port=port or 0, address=address)
assert len(ss)
ports = {s.getsockname()[1] for s in ss}
assert len(ports) == 1, "Multiple ports assigned??"
actual_port = ports.pop()
if port:
assert actual_port == port
    return ss, actual_port
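
# --- Usage sketch (not library source): pass port 0 to let the OS choose a
# free port, then read the port actually bound.
def _bind_sockets_example() -> None:
    socks, port = bind_sockets("localhost", 0)
    print(f"bound {len(socks)} socket(s) on port {port}")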
def check_allowlist(host: str, allowlist: Sequence[str]) -> bool:
    ''' Check a given request host against an allowlist.
Args:
host (str) :
            A host string to compare against an allowlist.
If the host does not specify a port, then ``":80"`` is implicitly
assumed.
allowlist (seq[str]) :
A list of host patterns to match against
Returns:
``True``, if ``host`` matches any pattern in ``allowlist``, otherwise
``False``
'''
if ':' not in host:
host = host + ':80'
if host in allowlist:
return True
    return any(match_host(host, pattern) for pattern in allowlist)
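
# --- Behavior sketch (not library source): hosts without a port are treated as
# port 80, and allowlist entries may use wildcards.
def _check_allowlist_example() -> None:
    assert check_allowlist("example.com", ["example.com:80"])
    assert check_allowlist("example.com:5006", ["*:5006"])
    assert not check_allowlist("example.com:5006", ["example.com:80"])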
def create_hosts_allowlist(host_list: Sequence[str] | None, port: int | None) -> list[str]:
    ''' Create a list of hosts to approve connections against.
    This allowlist can be used to restrict websocket or other connections to
    only those explicitly originating from approved hosts.
Args:
host_list (seq[str]) :
A list of string `<name>` or `<name>:<port>` values to add to the
allowlist.
If no port is specified in a host string, then ``":80"`` is
implicitly assumed.
port (int) :
If ``host_list`` is empty or ``None``, then the allowlist will
be the single item list `` [ 'localhost:<port>' ]``
If ``host_list`` is not empty, this parameter has no effect.
Returns:
list[str]
Raises:
ValueError, if host or port values are invalid
Note:
If any host in ``host_list`` contains a wildcard ``*`` a warning will
be logged regarding permissive websocket connections.
'''
if not host_list:
return ['localhost:' + str(port)]
hosts: list[str] = []
for host in host_list:
if '*' in host:
log.warning(
"Host wildcard %r will allow connections originating "
"from multiple (or possibly all) hostnames or IPs. Use non-wildcard "
"values to restrict access explicitly", host)
if host == '*':
# do not append the :80 port suffix in that case: any port is
# accepted
hosts.append(host)
continue
parts = host.split(':')
if len(parts) == 1:
if parts[0] == "":
raise ValueError("Empty host value")
hosts.append(host+":80")
elif len(parts) == 2:
try:
int(parts[1])
except ValueError:
raise ValueError(f"Invalid port in host value: {host}")
if parts[0] == "":
raise ValueError("Empty host value")
hosts.append(host)
else:
raise ValueError(f"Invalid host value: {host}")
return hosts |
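A few illustrative calls (host names are hypothetical):

    create_hosts_allowlist(None, 5006)                 # ['localhost:5006']
    create_hosts_allowlist(["foo", "bar:8080"], 5006)  # ['foo:80', 'bar:8080']
    create_hosts_allowlist(["*"], 5006)                # ['*'], logs a permissiveness warning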
Match a host string against a pattern.
Args:
host (str)
A hostname to compare to the given pattern
pattern (str)
A string representing a hostname pattern, possibly including
wildcards for ip address octets or ports.
This function will return ``True`` if the hostname matches the pattern,
including any wildcards. If the pattern contains a port, the host string
must also contain a matching port.
Returns:
bool
Examples:
>>> match_host('192.168.0.1:80', '192.168.0.1:80')
True
>>> match_host('192.168.0.1:80', '192.168.0.1')
True
>>> match_host('192.168.0.1:80', '192.168.0.1:8080')
False
>>> match_host('192.168.0.1', '192.168.0.2')
False
>>> match_host('192.168.0.1', '192.168.*.*')
True
>>> match_host('alice', 'alice')
True
>>> match_host('alice:80', 'alice')
True
>>> match_host('alice', 'bob')
False
>>> match_host('foo.example.com', 'foo.example.com.net')
False
>>> match_host('alice', '*')
True
>>> match_host('alice', '*:*')
True
>>> match_host('alice:80', '*')
True
>>> match_host('alice:80', '*:80')
True
>>> match_host('alice:8080', '*:80')
False | def match_host(host: str, pattern: str) -> bool:
''' Match a host string against a pattern.
Args:
host (str)
A hostname to compare to the given pattern
pattern (str)
A string representing a hostname pattern, possibly including
wildcards for ip address octets or ports.
This function will return ``True`` if the hostname matches the pattern,
including any wildcards. If the pattern contains a port, the host string
must also contain a matching port.
Returns:
bool
Examples:
>>> match_host('192.168.0.1:80', '192.168.0.1:80')
True
>>> match_host('192.168.0.1:80', '192.168.0.1')
True
>>> match_host('192.168.0.1:80', '192.168.0.1:8080')
False
>>> match_host('192.168.0.1', '192.168.0.2')
False
>>> match_host('192.168.0.1', '192.168.*.*')
True
>>> match_host('alice', 'alice')
True
>>> match_host('alice:80', 'alice')
True
>>> match_host('alice', 'bob')
False
>>> match_host('foo.example.com', 'foo.example.com.net')
False
>>> match_host('alice', '*')
True
>>> match_host('alice', '*:*')
True
>>> match_host('alice:80', '*')
True
>>> match_host('alice:80', '*:80')
True
>>> match_host('alice:8080', '*:80')
False
'''
host_port: str | None = None
if ':' in host:
host, host_port = host.rsplit(':', 1)
pattern_port: str | None = None
if ':' in pattern:
pattern, pattern_port = pattern.rsplit(':', 1)
if pattern_port == '*':
pattern_port = None
if pattern_port is not None and host_port != pattern_port:
return False
host_parts = host.split('.')
pattern_parts = pattern.split('.')
if len(pattern_parts) > len(host_parts):
return False
for h, p in zip(host_parts, pattern_parts):
if h != p and p != '*':
return False
return True |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_node(bokehjs_content, html=bokehjs_content.html)
app.add_directive("bokehjs-content", BokehJSContent)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_autodocumenter(ColorDocumenter)
app.add_autodocumenter(EnumDocumenter)
app.add_autodocumenter(PropDocumenter)
app.add_autodocumenter(ModelDocumenter)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-color", BokehColorDirective)
return PARALLEL_SAFE |
Generate an inline visual representation of a single pandas DataFrame.
If the HTML representation of the DataFrame cannot be created, a
SphinxError is raised to terminate the build.
For details on the arguments to this function, consult the Docutils docs:
http://docutils.sourceforge.net/docs/howto/rst-roles.html#define-the-role-function | def bokeh_dataframe(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Generate an inline visual representation of a single pandas DataFrame.
If the HTML representation of the DataFrame cannot be created, a
SphinxError is raised to terminate the build.
For details on the arguments to this function, consult the Docutils docs:
http://docutils.sourceforge.net/docs/howto/rst-roles.html#define-the-role-function
"""
import pandas as pd
module_name, df_name = text.rsplit(".", 1)
try:
module = importlib.import_module(module_name)
except ImportError:
raise SphinxError(f"Unable to generate HTML table for {df_name}: couldn't import module {module_name}")
df = getattr(module, df_name, None)
if df is None:
raise SphinxError(f"Unable to generate HTML table for {df_name}: no DataFrame {df_name} in module {module_name}")
if not isinstance(df, pd.DataFrame):
raise SphinxError(f"{text!r} is not a pandas DataFrame")
node = nodes.raw("", df.head().to_html(), format="html")
return [node], [] |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_role("bokeh-dataframe", bokeh_dataframe)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-enum", BokehEnumDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive("bokeh-example-metadata", BokehExampleMetadataDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_config_value("bokeh_gallery_dir", join("docs", "gallery"), "html")
app.add_config_value("bokeh_examples_dir", join("docs", "examples"), "html")
app.add_config_value("bokeh_example_subdirs", [], "html")
app.add_config_value("bokeh_sampledata_xref_skiplist", [], "html")
app.connect("config-inited", config_inited_handler)
app.add_directive("bokeh-gallery", BokehGalleryDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-jinja", BokehJinjaDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-model", BokehModelDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-options", BokehOptionsDirective)
return PARALLEL_SAFE |
Generate an inline visual representation of a single color palette.
This function evaluates the expression ``f"palette = {text}"``, in the
context of a ``globals`` namespace that has previously imported all of
|bokeh.plotting|. The resulting value for ``palette`` is used to
construct a sequence of HTML ``<span>`` elements for each color.
If evaluating the palette expression fails, or does not produce a list or
tuple whose elements are all strings, then a SphinxError is raised to
terminate the build.
For details on the arguments to this function, consult the Docutils docs:
http://docutils.sourceforge.net/docs/howto/rst-roles.html#define-the-role-function | def bokeh_palette(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Generate an inline visual representation of a single color palette.
This function evaluates the expression ``f"palette = {text}"``, in the
context of a ``globals`` namespace that has previously imported all of
|bokeh.plotting|. The resulting value for ``palette`` is used to
construct a sequence of HTML ``<span>`` elements for each color.
If evaluating the palette expression fails, or does not produce a list or
tuple whose elements are all strings, then a SphinxError is raised to
terminate the build.
For details on the arguments to this function, consult the Docutils docs:
http://docutils.sourceforge.net/docs/howto/rst-roles.html#define-the-role-function
"""
try:
exec(f"palette = {text}", _globals)
except Exception as e:
raise SphinxError(f"cannot evaluate palette expression {text!r}, reason: {e}")
p = _globals.get("palette", None)
if not isinstance(p, list | tuple) or not all(isinstance(x, str) for x in p):
raise SphinxError(f"palette expression {text!r} generated invalid or no output: {p}")
w = 20 if len(p) < 15 else 10 if len(p) < 32 else 5 if len(p) < 64 else 2 if len(p) < 128 else 1
html = PALETTE_DETAIL.render(palette=p, width=w)
node = nodes.raw("", html, format="html")
return [node], [] |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_role("bokeh-palette", bokeh_palette)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_node(bokeh_palette_group, html=bokeh_palette_group.html)
app.add_directive("bokeh-palette-group", BokehPaletteGroupDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive("bokeh-plot", BokehPlotDirective)
app.add_node(autoload_script, html=autoload_script.html)
app.add_config_value("bokeh_missing_google_api_key_ok", True, "html")
app.connect("builder-inited", builder_inited)
app.connect("build-finished", build_finished)
app.connect("env-merge-info", env_merge_info)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-prop", BokehPropDirective)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive("bokeh-releases", BokehReleases)
return PARALLEL_SAFE |
Link to a Bokeh GitHub commit.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_commit(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Link to a Bokeh GitHub commit.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
app = inliner.document.settings.env.app
node = _make_gh_link_node(app, rawtext, "commit", "commit ", "commit", text, options)
return [node], [] |
Link to a Bokeh Github issue.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_issue(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Link to a Bokeh Github issue.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
app = inliner.document.settings.env.app
try:
issue_num = int(text)
if issue_num <= 0:
raise ValueError
except ValueError:
msg = inliner.reporter.error(f"Github issue number must be a number greater than or equal to 1; {text!r} is invalid.", line=lineno)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
node = _make_gh_link_node(app, rawtext, "issue", "#", "issues", str(issue_num), options)
return [node], [] |
Provide the minimum supported Python version from pyproject.toml.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_minpy(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Provide the minimum supported Python version from pyproject.toml.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
pyproject = toml.load(join(_REPO_TOP, "pyproject.toml"))
node = nodes.Text(pyproject["project"]["requires-python"].lstrip(">="))
return [node], [] |
Link to a Bokeh GitHub pull request.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_pull(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Link to a Bokeh GitHub pull request.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
app = inliner.document.settings.env.app
try:
issue_num = int(text)
if issue_num <= 0:
raise ValueError
except ValueError:
msg = inliner.reporter.error(f"Github pull request number must be a number greater than or equal to 1; {text!r} is invalid.", line=lineno)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
node = _make_gh_link_node(app, rawtext, "pull", "pull request ", "pull", str(issue_num), options)
return [node], [] |
Provide the list of required package dependencies for Bokeh.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_requires(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Provide the list of required package dependencies for Bokeh.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
pyproject = toml.load(join(_REPO_TOP, "pyproject.toml"))
node = nodes.bullet_list()
for dep in pyproject["project"]["dependencies"]:
node += nodes.list_item("", nodes.Text(dep))
return [node], [] |
Link to a URL in the Bokeh GitHub tree, pointing to appropriate tags
for releases, or to main otherwise.
The link text is simply the URL path supplied, so typical usage might
look like:
.. code-block:: none
All of the examples are located in the :bokeh-tree:`examples`
subdirectory of your Bokeh checkout.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty. | def bokeh_tree(name, rawtext, text, lineno, inliner, options=None, content=None):
"""Link to a URL in the Bokeh GitHub tree, pointing to appropriate tags
for releases, or to main otherwise.
The link text is simply the URL path supplied, so typical usage might
look like:
.. code-block:: none
All of the examples are located in the :bokeh-tree:`examples`
subdirectory of your Bokeh checkout.
Returns 2 part tuple containing list of nodes to insert into the
document and a list of system messages. Both are allowed to be
empty.
"""
app = inliner.document.settings.env.app
tag = app.env.config["version"]
if "-" in tag:
tag = "main"
url = f"{BOKEH_GH}/tree/{tag}/{text}"
options = options or {}
set_classes(options)
node = nodes.reference(rawtext, text, refuri=url, **options)
return [node], [] |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_role("bokeh-commit", bokeh_commit)
app.add_role("bokeh-issue", bokeh_issue)
app.add_role("bokeh-minpy", bokeh_minpy)
app.add_role("bokeh-pull", bokeh_pull)
app.add_role("bokeh-requires", bokeh_requires)
app.add_role("bokeh-tree", bokeh_tree)
return PARALLEL_SAFE |
Return a link to a Bokeh Github resource.
Args:
app (Sphinx app) : current app
rawtext (str) : text being replaced with link node.
role (str) : role name
kind (str) : resource type (issue, pull, etc.)
api_type (str) : type for api link
id (str) : id of the resource to link to
options (dict) : options dictionary passed to role function | def _make_gh_link_node(app, rawtext, role, kind, api_type, id, options=None):
"""Return a link to a Bokeh Github resource.
Args:
app (Sphinx app) : current app
rawtext (str) : text being replaced with link node.
role (str) : role name
kind (str) : resource type (issue, pull, etc.)
api_type (str) : type for api link
id (str) : id of the resource to link to
options (dict) : options dictionary passed to role function
"""
url = f"{BOKEH_GH}/{api_type}/{id}"
options = options or {}
set_classes(options)
node = nodes.reference(rawtext, f"{kind}{utils.unescape(id)}", refuri=url, **options)
return node |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_node(sampledata_list)
app.add_directive("bokeh-example-index", BokehGalleryOverviewDirective)
app.add_directive("bokeh-sampledata-xref", BokehSampledataXrefDirective)
app.connect('doctree-resolved', process_sampledata_xrefs)
app.connect('env-purge-doc', purge_xrefs)
app.connect('env-merge-info', merge_xrefs)
app.connect('doctree-resolved', process_gallery_overview)
app.connect('env-purge-doc', purge_gallery_xrefs)
app.connect('env-merge-info', merge_gallery_xrefs)
return PARALLEL_SAFE |
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.add_directive_to_domain("py", "bokeh-settings", BokehSettingsDirective)
return PARALLEL_SAFE |
Collect page names for the sitemap as HTML pages are built. | def html_page_context(app, pagename, templatename, context, doctree):
"""Collect page names for the sitemap as HTML pages are built."""
site = context["SITEMAP_BASE_URL"]
version = context["version"]
app.sitemap_links.add(f"{site}{version}/{pagename}.html") |
Generate a ``sitemap.xml`` from the collected HTML page links. | def build_finished(app, exception):
"""Generate a ``sitemap.xml`` from the collected HTML page links."""
filename = join(app.outdir, "sitemap.xml")
links_iter = status_iterator(sorted(app.sitemap_links), "adding links to sitemap... ", "brown", len(app.sitemap_links), app.verbosity)
try:
with open(filename, "w") as f:
f.write(_header)
for link in links_iter:
http_link = escape(link.strip().replace("https://", "http://"))
f.write(_item.format(link=http_link))
f.write(_footer)
except OSError as e:
raise SphinxError(f"cannot write sitemap.xml, reason: {e}")
Required Sphinx extension setup function. | def setup(app):
""" Required Sphinx extension setup function. """
app.connect("html-page-context", html_page_context)
app.connect("build-finished", build_finished)
app.sitemap_links = set()
return PARALLEL_SAFE |
Return a browser controller.
Args:
browser (str or None) : browser name, or ``None`` (default: ``None``)
If passed the string ``'none'``, a dummy web browser controller
is returned.
Otherwise, use the value to select an appropriate controller using
the :doc:`webbrowser <python:library/webbrowser>` standard library
module. If the value is ``None``, a system default is used.
Returns:
controller : a web browser controller | def get_browser_controller(browser: str | None = None) -> BrowserLike:
''' Return a browser controller.
Args:
browser (str or None) : browser name, or ``None`` (default: ``None``)
If passed the string ``'none'``, a dummy web browser controller
is returned.
Otherwise, use the value to select an appropriate controller using
the :doc:`webbrowser <python:library/webbrowser>` standard library
module. If the value is ``None``, a system default is used.
Returns:
controller : a web browser controller
'''
browser = settings.browser(browser)
if browser is None:
controller = cast(BrowserLike, webbrowser)
elif browser == "none":
controller = DummyWebBrowser()
else:
controller = webbrowser.get(browser)
return controller |
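A small sketch (assuming the function is in scope): passing "none" yields the dummy controller, which is handy in tests or CI where nothing should actually open:

    controller = get_browser_controller("none")
    controller.open("http://example.com")  # dummy controller: effectively a no-op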
Open a browser to view the specified location.
Args:
location (str) : Location to open
If location does not begin with "http:" it is assumed
to be a file path on the local filesystem.
browser (str or None) : what browser to use (default: None)
If ``None``, use the system default browser.
new (str) : How to open the location. Valid values are:
``'same'`` - open in the current tab
``'tab'`` - open a new tab in the current window
``'window'`` - open in a new window
autoraise (bool) : Whether to automatically raise the location
in a new browser window (default: True)
Returns:
None | def view(location: str, browser: str | None = None, new: BrowserTarget = "same", autoraise: bool = True) -> None:
''' Open a browser to view the specified location.
Args:
location (str) : Location to open
If location does not begin with "http:" it is assumed
to be a file path on the local filesystem.
browser (str or None) : what browser to use (default: None)
If ``None``, use the system default browser.
new (str) : How to open the location. Valid values are:
``'same'`` - open in the current tab
``'tab'`` - open a new tab in the current window
``'window'`` - open in a new window
autoraise (bool) : Whether to automatically raise the location
in a new browser window (default: True)
Returns:
None
'''
try:
new_id = NEW_PARAM[new]
except KeyError:
raise RuntimeError(f"invalid 'new' value passed to view: {new!r}, valid values are: 'same', 'window', or 'tab'")
if location.startswith("http"):
url = location
else:
url = "file://" + abspath(location)
try:
controller = get_browser_controller(browser)
controller.open(url, new=new_id, autoraise=autoraise)
except Exception:
pass |
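Two illustrative calls (the file path and URL are hypothetical):

    view("plot.html", new="tab")  # local file, opened in a new browser tab
    view("http://localhost:5006/myapp", browser="firefox", new="window", autoraise=False)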
Bokeh-internal function to check callback signature | def _check_callback(callback: Callable[..., Any], fargs: Sequence[str], what: str ="Callback functions") -> None:
'''Bokeh-internal function to check callback signature'''
sig = signature(callback)
all_names, default_values = get_param_info(sig)
nargs = len(all_names) - len(default_values)
if nargs != len(fargs):
expected = ", ".join(fargs)
received = str(sig)
raise ValueError(f"{what} must have signature func({expected}), got func{received}") |
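A quick sketch of the check (function names are illustrative):

    def on_change(attr, old, new): ...
    _check_callback(on_change, ("attr", "old", "new"))  # passes silently

    def bad(attr): ...
    _check_callback(bad, ("attr", "old", "new"))
    # ValueError: Callback functions must have signature func(attr, old, new), got func(attr)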
Returns the current cache hook used to look up the compiled
code given the CustomModel and Implementation | def get_cache_hook() -> Callable[[CustomModel, Implementation], AttrDict | None]:
'''Returns the current cache hook used to look up the compiled
code given the CustomModel and Implementation'''
return _CACHING_IMPLEMENTATION |
Sets a compiled model cache hook used to look up the compiled
code given the CustomModel and Implementation | def set_cache_hook(hook: Callable[[CustomModel, Implementation], AttrDict | None]) -> None:
'''Sets a compiled model cache hook used to look up the compiled
code given the CustomModel and Implementation'''
global _CACHING_IMPLEMENTATION
_CACHING_IMPLEMENTATION = hook |
Generate a key to cache a custom extension implementation with.
There is no metadata other than the Model classes, so this is the only
base to generate a cache key.
We build the model keys from the list of ``model.full_name``. This is
not ideal, but possibly a better solution can be found later. | def calc_cache_key(custom_models: dict[str, CustomModel]) -> str:
''' Generate a key to cache a custom extension implementation with.
There is no metadata other than the Model classes, so this is the only
base to generate a cache key.
We build the model keys from the list of ``model.full_name``. This is
not ideal, but possibly a better solution can be found later.
'''
model_names = {model.full_name for model in custom_models.values()}
encoded_names = ",".join(sorted(model_names)).encode('utf-8')
return hashlib.sha256(encoded_names).hexdigest() |
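The key derivation is easy to sketch standalone (the model names below are hypothetical):

    import hashlib

    names = {"my_ext.CustomSlider", "my_ext.CustomTool"}  # hypothetical full_name values
    key = hashlib.sha256(",".join(sorted(names)).encode("utf-8")).hexdigest()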
Create a bundle of selected `models`. | def bundle_models(models: Sequence[type[HasProps]] | None) -> str | None:
"""Create a bundle of selected `models`. """
custom_models = _get_custom_models(models)
if custom_models is None:
return None
key = calc_cache_key(custom_models)
bundle = _bundle_cache.get(key, None)
if bundle is None:
try:
_bundle_cache[key] = bundle = _bundle_models(custom_models)
except CompilationError as error:
print("Compilation failed:", file=sys.stderr)
print(str(error), file=sys.stderr)
sys.exit(1)
return bundle |
Create a bundle of all models. | def bundle_all_models() -> str | None:
"""Create a bundle of all models. """
return bundle_models(None) |
Default no-op cache hook: never returns a cached compiled implementation. | def _model_cache_no_op(model: CustomModel, implementation: Implementation) -> AttrDict | None:
"""Default no-op cache hook: never returns a cached compiled implementation."""
return None |
Returns CustomModels for models with a custom `__implementation__` | def _get_custom_models(models: Sequence[type[HasProps]] | None) -> dict[str, CustomModel] | None:
"""Returns CustomModels for models with a custom `__implementation__`"""
custom_models: dict[str, CustomModel] = dict()
for cls in models or HasProps.model_class_reverse_map.values():
impl = getattr(cls, "__implementation__", None)
if impl is not None:
model = CustomModel(cls)
custom_models[model.full_name] = model
return custom_models if custom_models else None |
Returns the compiled implementation of supplied `models`. | def _compile_models(custom_models: dict[str, CustomModel]) -> dict[str, AttrDict]:
"""Returns the compiled implementation of supplied `models`. """
ordered_models = sorted(custom_models.values(), key=lambda model: model.full_name)
custom_impls = {}
dependencies: list[tuple[str, str]] = []
for model in ordered_models:
dependencies.extend(list(model.dependencies.items()))
if dependencies:
dependencies = sorted(dependencies, key=lambda name_version: name_version[0])
_run_npmjs(["install", "--no-progress"] + [ name + "@" + version for (name, version) in dependencies ])
for model in ordered_models:
impl = model.implementation
compiled = _CACHING_IMPLEMENTATION(model, impl)
if compiled is None:
compiled = nodejs_compile(impl.code, lang=impl.lang, file=impl.file)
if "error" in compiled:
raise CompilationError(compiled.error)
custom_impls[model.full_name] = compiled
return custom_impls |
Create a JavaScript bundle with selected `models`. | def _bundle_models(custom_models: dict[str, CustomModel]) -> str:
""" Create a JavaScript bundle with selected `models`. """
exports = []
modules = []
lib_dir = Path(bokehjs_dir) / "js" / "lib"
known_modules: set[str] = set()
for path in lib_dir.rglob("*.d.ts"):
s = str(path.relative_to(lib_dir))
s = s.removesuffix(".d.ts")
s = s.replace(os.path.sep, "/")
known_modules.add(s)
custom_impls = _compile_models(custom_models)
extra_modules = {}
def resolve_modules(to_resolve: set[str], root: str) -> dict[str, str]:
resolved = {}
for module in to_resolve:
if module.startswith(("./", "../")):
def mkpath(module: str, ext: str = "") -> str:
return abspath(join(root, *module.split("/")) + ext)
if module.endswith(exts):
path = mkpath(module)
if not exists(path):
raise RuntimeError(f"no such module: {module}")
else:
for ext in exts:
path = mkpath(module, ext)
if exists(path):
break
else:
raise RuntimeError(f"no such module: {module}")
impl = FromFile(path)
compiled = nodejs_compile(impl.code, lang=impl.lang, file=impl.file)
if impl.lang == "less":
code = _style_template % dict(css=json.dumps(compiled.code))
deps = []
else:
code = compiled.code
deps = compiled.deps
sig = hashlib.sha256(code.encode('utf-8')).hexdigest()
resolved[module] = sig
deps_map = resolve_deps(deps, dirname(path))
if sig not in extra_modules:
extra_modules[sig] = True
modules.append((sig, code, deps_map))
else:
index = module + ("" if module.endswith("/") else "/") + "index"
if index not in known_modules:
raise RuntimeError(f"no such module: {module}")
return resolved
def resolve_deps(deps : list[str], root: str) -> dict[str, str]:
custom_modules = {model.module for model in custom_models.values()}
missing = set(deps) - known_modules - custom_modules
return resolve_modules(missing, root)
for model in custom_models.values():
compiled = custom_impls[model.full_name]
deps_map = resolve_deps(compiled.deps, model.path)
exports.append((model.name, model.module))
modules.append((model.module, compiled.code, deps_map))
# sort everything by module name
exports = sorted(exports, key=lambda spec: spec[1])
modules = sorted(modules, key=lambda spec: spec[0])
bare_modules = []
for module, code, deps in modules:
for name, ref in deps.items():
code = code.replace(f"""require("{name}")""", f"""require("{ref}")""")
code = code.replace(f"""require('{name}')""", f"""require('{ref}')""")
bare_modules.append((module, code))
sep = ",\n"
rendered_exports = sep.join(_export_template % dict(name=name, module=module) for (name, module) in exports)
rendered_modules = sep.join(_module_template % dict(module=module, source=code) for (module, code) in bare_modules)
content = _plugin_template % dict(prelude=_plugin_prelude, exports=rendered_exports, modules=rendered_modules)
return _plugin_umd % dict(content=content) |
Iterate over a dataclass' fields and their values. | def entries(obj: Any) -> Iterable[tuple[str, Any]]:
""" Iterate over a dataclass' fields and their values. """
if is_dataclass(obj):
for f in fields(obj):
value = getattr(obj, f.name)
if value is not Unspecified:
yield (f.name, value)
else:
raise TypeError(f"expected a dataclass, got {type(obj)}") |
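A small sketch of iterating a dataclass (assuming ``entries`` and Bokeh's ``Unspecified`` sentinel are importable):

    from dataclasses import dataclass

    @dataclass
    class Marker:
        x: float
        y: float
        label: str = "circle"

    dict(entries(Marker(1.0, 2.0)))
    # {'x': 1.0, 'y': 2.0, 'label': 'circle'}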
Attempt to import an optional dependency.
Silently returns None if the requested module is not available.
Args:
mod_name (str) : name of the optional module to try to import
Returns:
imported module or None, if import fails | def import_optional(mod_name: str) -> ModuleType | None:
''' Attempt to import an optional dependency.
Silently returns None if the requested module is not available.
Args:
mod_name (str) : name of the optional module to try to import
Returns:
imported module or None, if import fails
'''
try:
return import_module(mod_name)
except ImportError:
pass
except Exception:
msg = f"Failed to import optional module `{mod_name}`"
log.exception(msg)
return None |
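Typical feature-gating usage (the module name is just an example):

    pd = import_optional("pandas")
    if pd is None:
        print("pandas not installed; skipping DataFrame output")
    else:
        df = pd.DataFrame({"x": [1, 2, 3]})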
Attempt to import a required dependency.
Raises a RuntimeError if the requested module is not available.
Args:
mod_name (str) : name of the required module to try to import
error_msg (str) : error message to raise when the module is missing
Returns:
imported module
Raises:
RuntimeError | def import_required(mod_name: str, error_msg: str) -> ModuleType:
''' Attempt to import a required dependency.
Raises a RuntimeError if the requested module is not available.
Args:
mod_name (str) : name of the required module to try to import
error_msg (str) : error message to raise when the module is missing
Returns:
imported module
Raises:
RuntimeError
'''
try:
return import_module(mod_name)
except ImportError as e:
raise RuntimeError(error_msg) from e |
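A hedged usage sketch (the error message text is illustrative):

    np = import_required("numpy", "this feature requires numpy; install it with 'pip install numpy'")
    arr = np.arange(3)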