yt Overview
===========
yt is a community-developed analysis and visualization toolkit for
volumetric data. yt has been applied mostly to astrophysical simulation data,
but it can be applied to many different types of data including seismology,
radio telescope data, weather simulations, and nuclear engineering simulations.
yt is developed in Python under the open-source model.
yt supports :ref:`many different code formats <code-support>`, and we provide
:ref:`sample data for each format <getting-sample-data>` with
:ref:`instructions on how to load and examine each data type <examining-data>`.
Table of Contents
-----------------
.. raw:: html
<table class="contentstable" align="left">
<tr valign="top">
<td width="25%">
<p>
<a href="intro/index.html">Introduction to yt</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">What does yt offer? How can I use it? How to think in yt?</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="yt4differences.html">yt 4.0</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">How yt-4.0 differs from past versions</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="yt3differences.html">yt 3.0</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">How yt-3.0 differs from past versions</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="installing.html">Installation</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Getting, installing, and updating yt</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="quickstart/index.html">yt Quickstart</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Demonstrations of what yt can do</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="examining/index.html">Loading and Examining Data</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">How to load all dataset types in yt and examine raw data</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="cookbook/index.html">The Cookbook</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Example recipes for how to accomplish a variety of tasks</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="visualizing/index.html">Visualizing Data</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Make plots, projections, volume renderings, movies, and more</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="analyzing/index.html">General Data Analysis</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">The nuts and bolts of manipulating yt datasets</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="analyzing/domain_analysis/index.html">Domain-Specific Analysis</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Astrophysical analysis, clump finding, cosmology calculations, and more</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="developing/index.html">Developer Guide</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Catering yt to work for your exact use case</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="reference/index.html">Reference Materials</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Lists of fields, quantities, classes, functions, and more</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="faq/index.html">Frequently Asked Questions</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">Solutions for common questions and problems</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="help/index.html">Getting help</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">What to do if you run into problems</p>
</td>
</tr>
<tr valign="top">
<td width="25%">
<p>
<a href="about/index.html">About yt</a>
</p>
</td>
<td width="75%">
<p class="linkdescr">What is yt?</p>
</td>
</tr>
</table>
.. toctree::
:hidden:
intro/index
installing
yt Quickstart <quickstart/index>
yt4differences
yt3differences
cookbook/index
visualizing/index
analyzing/index
analyzing/domain_analysis/index
examining/index
developing/index
reference/index
faq/index
Getting Help <help/index>
about/index
.. _yt3differences:
What's New and Different in yt 3.0?
===================================
If you are new to yt, welcome! If you're coming to yt 3.0 from an older
version, however, there may be a few things in this version that are different
than what you are used to. We have tried to build compatibility layers to
minimize disruption to existing scripts, but necessarily things will be
different in some ways.
.. contents::
:depth: 2
:local:
:backlinks: none
Updating to yt 3.0 from Old Versions (and going back)
-----------------------------------------------------
First off, you need to update your version of yt to yt 3.0. If you're
installing yt for the first time, please visit :ref:`installing-yt`.
If you already have a version of yt installed, you should just need one
command:
.. code-block:: bash
$ yt update
This will update yt to the most recent version and rebuild the source base.
If you installed using the installer script, it will also ensure you have all of the
latest dependencies. This step may take a few minutes. To test
that yt is correctly installed, try:
.. code-block:: bash
$ python -c "import yt"
.. _transitioning-to-3.0:
Converting Old Scripts to Work with yt 3.0
------------------------------------------
After installing yt-3.0, you'll want to change your old scripts in a few key
ways. After accounting for the changes described in the list below, try
running your script. If it still fails, the Python tracebacks are
fairly descriptive and it may be possible to deduce what remaining changes are
necessary. If you continue to have trouble, please don't hesitate to
:ref:`request help <asking-for-help>`.
The list below is arranged from the most important changes to the least
important; a short converted-script sketch follows the list.
* **Replace** ``from yt.mods import *`` **with** ``import yt`` **and prepend yt
classes and functions with** ``yt.``
We have reworked yt's import system so that most commonly-used yt functions
and classes live in the top-level yt namespace. That means you can now
import yt with ``import yt``, load a dataset with ``ds = yt.load(filename)``
and create a plot with ``yt.SlicePlot``. See :ref:`api-reference` for a full
API listing. You can still import using ``from yt.mods import *`` to get a
pylab-like experience.
* **Unit conversions are different**
Fields and metadata for data objects and datasets now have units. The unit
system keeps you from doing physically meaningless things like ``ergs`` + ``g`` and can
handle things like ``g`` + ``kg`` or ``kg*m/s**2 == Newton``. See
:ref:`units` and :ref:`conversion-factors` for more information.
* **Change field names from CamelCase to lower_case_with_underscores**
Previously, yt would use "Enzo-isms" for field names. We now very
specifically define fields as lowercase with underscores. For instance,
what used to be ``VelocityMagnitude`` would now be ``velocity_magnitude``.
Axis names are now at the *end* of field names, not the beginning.
``x-velocity`` is now ``velocity_x``. For a full list of all of the fields,
see :ref:`field-list`.
* **Full field names have two parts now**
Fields can be accessed by a single name, but they are named internally as
``(field_type, field_name)`` for more explicit designation which can address
particles, deposited fluid quantities, and more. See :ref:`fields`.
* **Code-specific field names can be accessed by the name defined by the
external code**
Mesh fields that exist on-disk in an output file can be read in using whatever
name is used by the output file. On-disk fields are always returned in code
units. The full field name will be ``(code_name, field_name)``. See
:ref:`field-list`.
* **Particle fields are now more obviously different than mesh fields**
Particle fields on-disk will also be in code units, and will be named
``(particle_type, field_name)``. If there is only one particle type in the
output file, all particles will use ``io`` as the particle type. See
:ref:`fields`.
* **Change** ``pf`` **to** ``ds``
The objects we used to refer to as "parameter files" we now refer to as
datasets. Instead of ``pf``, we now suggest you use ``ds`` to refer to an
object returned by ``yt.load``.
* **Replace any references to** ``pf.h`` **with** ``ds``
You can now create data objects without referring to the hierarchy. Instead
of ``pf.h.all_data()``, you can now say ``ds.all_data()``. The hierarchy is
still there, but it is now called the index: ``ds.index``.
* **Use** ``yt.enable_parallelism()`` **to make a script parallel-compatible**
Command line arguments are only parsed when yt is imported using ``from
yt.mods import *``. Since command line arguments are not parsed when using
``import yt``, it is no longer necessary to specify ``--parallel`` at the
command line when running a parallel computation. Use
``yt.enable_parallelism()`` in your script instead. See
:ref:`parallel-computation` for more details.
* **Change your derived quantities to the new syntax**
Derived quantities have been reworked. You can now do
``dd.quantities.total_mass()`` instead of ``dd.quantities['TotalMass']()``.
See :ref:`derived-quantities`.
* **Change your method of accessing the** ``grids`` **attribute**
The ``grids`` attribute of data objects no longer exists. To get this
information, you have to use spatial chunking and then access the grids in each chunk. See
:ref:`here <grid-chunking>` for an example. For datasets that use grid
hierarchies, you can also access the grids for the entire dataset via
``ds.index.grids``. This attribute is not defined for particle or octree
datasets.
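Putting several of these changes together, a minimal converted script might look
like the sketch below; the dataset name ``"MyData"`` is a placeholder for your own
output file:
.. code-block:: python

    import yt  # replaces ``from yt.mods import *``

    yt.enable_parallelism()  # replaces the ``--parallel`` command line flag

    ds = yt.load("MyData")  # ``ds`` replaces the old ``pf``
    ad = ds.all_data()  # no more ``pf.h.all_data()``

    # lower_case_with_underscores field names, optionally with an explicit field type
    print(ad["gas", "velocity_magnitude"])

    # new derived quantity syntax
    print(ad.quantities.total_mass())

    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.save()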
Cool New Things
---------------
Lots of new things have been added in yt 3.0! Below we summarize a handful of
these.
Lots of New Codes are Supported
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Because of the additions of **Octrees**, **Particle Deposition**,
and **Irregular Grids**, we now support a bunch more codes. See
:ref:`code-support` for more information.
Octrees
^^^^^^^
Octree datasets such as RAMSES, ART and ARTIO are now supported -- without any
regridding! We have a native, lightweight octree indexing system.
Irregular Grids
^^^^^^^^^^^^^^^
MOAB Hex8 format is supported, and non-regular grids can be added relatively
easily.
Better Particle Support
^^^^^^^^^^^^^^^^^^^^^^^
Particle Codes and SPH
""""""""""""""""""""""
yt 3.0 features particle selection, smoothing, and deposition. This utilizes a
combination of coarse-grained indexing and octree indexing for particles.
Particle Deposition
"""""""""""""""""""
In yt-3.0, we provide mechanisms for describing and creating fields generated
by depositing particles into one or a handful of zones. This could include
deposited mass or density, average values, and the like. For instance, the
total stellar mass in some region can be deposited and averaged.
Particle Filters and Unions
"""""""""""""""""""""""""""
Throughout yt, the notion of "particle types" has been more deeply embedded.
These particle types can be dynamically defined at runtime, for instance by
taking a filter of a given type or the union of several different types. This
might be, for instance, defining a new type called ``young_stars`` that is a
filter on ``star_age`` being less than a given threshold, or ``fast``, which
filters based on the velocity of a particle. Unions could be the joining of
multiple types of particles -- the default union of which is ``all``,
representing all particle types in the simulation.
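As a rough sketch of how such a filter might be defined, the example below uses
``yt.add_particle_filter``; the ``'stars'`` particle type, the ``'star_age'`` field,
and the 10 Myr threshold are placeholders for whatever your dataset actually provides:
.. code-block:: python

    import yt

    # Placeholder filter: keep particles of type "stars" whose "star_age" is below 10 Myr
    def _young_stars(pfilter, data):
        age = data[pfilter.filtered_type, "star_age"]
        return age.in_units("Myr") < 10.0

    yt.add_particle_filter(
        "young_stars", function=_young_stars, filtered_type="stars", requires=["star_age"]
    )

    ds = yt.load("MyData")
    ds.add_particle_filter("young_stars")
    ad = ds.all_data()
    print(ad["young_stars", "particle_mass"].sum())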
Units
^^^^^
yt now has a unit system. This is one of the bigger features, and in essence it means
that you can convert data between any dimensionally compatible units. In practice, it makes it much
easier to define fields and convert data between different unit systems. See
:ref:`units` for more information.
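As a small, illustrative sketch of what this looks like in practice (unit symbols
are imported from ``yt.units``; the values are arbitrary):
.. code-block:: python

    from yt.units import g, kg, m, s

    mass = 3.0 * kg + 500.0 * g   # mixed but compatible units are handled automatically
    print(mass.in_units("g"))     # -> 3500.0 g

    force = 2.0 * kg * m / s**2   # kg*m/s**2 is a unit of force
    print(force.in_units("dyne")) # convert to CGS force units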
Non-Cartesian Coordinates
^^^^^^^^^^^^^^^^^^^^^^^^^
Preliminary support for non-Cartesian coordinates has been added. We expect
this to be considerably solidified and expanded in yt 3.1.
Reworked Import System
^^^^^^^^^^^^^^^^^^^^^^
It's now possible to import all yt functionality using ``import yt``. Rather
than using ``from yt.mods import *``, we suggest using ``import yt`` in new
scripts. Most commonly used yt functionality is attached to the ``yt`` module.
Load a dataset with ``yt.load()``, create a phase plot using ``yt.PhasePlot``,
and much more. See :ref:`the api docs <api-reference>` to learn more about what's
in the ``yt`` namespace, or just use tab completion in IPython: ``yt.<tab>``.
It's still possible to use ``from yt.mods import *`` to create an interactive
pylab-like experience. Importing yt this way has several side effects, most
notably that command line argument parsing and other startup tasks will run.
API Changes
-----------
These are the items that have already changed in *user-facing* API:
Field Naming
^^^^^^^^^^^^
.. warning:: Field naming is probably the single biggest change you will
encounter in yt 3.0.
Fields can be accessed by their short names, but yt now has an explicit
mechanism of distinguishing between field types and particle types. This is
expressed through a two-key description. For example::
my_object["gas", "density"]
will return the gas field density. In this example "gas" is the field type and
"density" is the field name. Field types are a bit like a namespace. This
system extends to particle types as well. By default you do *not* need to use
the field "type" key; if it is omitted, yt falls back on a default field type
in its place. This should therefore be identical to::
my_object["density"]
To enable a compatibility layer, simply call the ``setup_deprecated_fields``
method on the dataset, like so:
.. code-block:: python
ds = yt.load("MyData")
ds.setup_deprecated_fields()
This sets up aliases from the old names to the new. See :ref:`fields` and
:ref:`field-list` for more information.
Units of Fields
^^^^^^^^^^^^^^^
All fields are now instances of ``YTArray``, a subclass of the NumPy array that carries
its units along with it. This means that if you want to manipulate fields, you
have to modify them in a unit-aware way. See :ref:`units`.
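For instance, a sketch of the sort of unit-aware manipulation this enables (the
dataset name is a placeholder):
.. code-block:: python

    import yt

    ds = yt.load("MyData")
    ad = ds.all_data()

    dens = ad["gas", "density"]          # a YTArray carrying its own units
    print(dens.units)
    print(dens.in_units("Msun/pc**3"))   # explicit unit conversion
    print(dens.in_cgs())                 # or convert to the CGS equivalent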
Parameter Files are Now Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Wherever possible, we have attempted to replace the term "parameter file"
(i.e., ``pf``) with the term "dataset." In yt-3.0, all of
the ``pf`` attributes of objects are now ``ds`` or ``dataset`` attributes.
Hierarchy is Now Index
^^^^^^^^^^^^^^^^^^^^^^
The hierarchy object (``pf.h``) is now referred to as an index (``ds.index``).
It is no longer necessary to directly refer to the ``index`` as often, since
data objects are now attached to the ``dataset`` object. Before, you
would say ``pf.h.sphere()``; now you can say ``ds.sphere()``.
New derived quantities interface
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Derived quantities can now be accessed via a function that hangs off of the
``quantities`` attribute of data objects. Instead of
``dd.quantities['TotalMass']()``, you can now use ``dd.quantities.total_mass()``
to do the same thing.
Any derived quantity that *always* returned a list (like ``Extrema``, which
would return a list even if you only asked for one field) now returns a
single result if you only ask for one field. Results for particle and mesh
fields will also be returned separately. See :ref:`derived-quantities` for more
information.
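A brief sketch of the new syntax (the dataset name is a placeholder):
.. code-block:: python

    import yt

    ds = yt.load("MyData")
    ad = ds.all_data()

    # Derived quantities are now methods on the ``quantities`` attribute
    masses = ad.quantities.total_mass()  # mesh and particle contributions returned separately

    # ``extrema`` returns a single (min, max) pair when only one field is requested
    dmin, dmax = ad.quantities.extrema(("gas", "density"))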
Field Info
^^^^^^^^^^
In previous versions of yt, the ``dataset`` object (what we used to call a
parameter file) had a ``field_info`` attribute which was a dictionary leading to
derived field definitions. At the present time, because of the field naming
changes (i.e., access-by-tuple) it is better to utilize the function
``_get_field_info`` than to directly access the ``field_info`` dictionary. For
example::
finfo = ds._get_field_info("gas", "density")
This function respects the special "field type" ``unknown`` and will search all
field types for the field name.
Projection Argument Order
^^^^^^^^^^^^^^^^^^^^^^^^^
Previously, the argument order for projections was inconsistent with the other data objects.
(The API for plot windows is unchanged.) The argument order is now ``field``
then ``axis``, as seen here:
:class:`~yt.data_objects.construction_data_containers.YTQuadTreeProj`.
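For example, a data-object projection is now created with the field first (the
dataset name is a placeholder):
.. code-block:: python

    import yt

    ds = yt.load("MyData")

    # field first, then axis
    prj = ds.proj(("gas", "density"), "z")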
Field Parameters
^^^^^^^^^^^^^^^^
All data objects now accept an explicit list of ``field_parameters`` rather
than accepting ``kwargs`` and supplying them to field parameters. See
:ref:`field_parameters`.
Object Renaming
^^^^^^^^^^^^^^^
Nearly all internal objects have been renamed. Typically this means either
removing ``AMR`` from the prefix or replacing it with ``YT``. All names of
objects remain the same for the purposes of selecting data and creating them;
i.e., ``sphere`` objects are still called ``sphere`` - you can access or create one
via ``ds.sphere``. For a detailed description and index see
:ref:`available-objects`.
Boolean Regions
^^^^^^^^^^^^^^^
Boolean regions are not yet implemented in yt 3.0.
.. _grid-chunking:
Grids
^^^^^
It used to be that one could get access to the grids that belonged to a data
object. Because we no longer have just grid-based data in yt, this attribute
does not make sense. If you need to determine which grids contribute to a
given object, you can either query the ``grid_indices`` field, or mandate
spatial chunking like so:
.. code-block:: python
for chunk in obj.chunks([], "spatial"):
for grid in chunk._current_chunk.objs:
print(grid)
This will "spatially" chunk the ``obj`` object and print out all the grids
included.
Halo Catalogs
^^^^^^^^^^^^^
The ``Halo Profiler`` infrastructure has been fundamentally rewritten and has been
replaced by the ``Halo Catalog`` framework. See :ref:`halo-analysis`.
Analysis Modules
^^^^^^^^^^^^^^^^
While we're trying to port over all of the old analysis modules, we have not
gotten all of them working in 3.0 yet. The documentation pages for modules that are
not yet functioning are clearly marked.
.. _yt4differences:
What's New and Different in yt 4.0?
===================================
If you are new to yt, welcome! If you're coming to yt 4.0 from an older
version, however, there may be a few things in this version that are different
than what you are used to. We have tried to build compatibility layers to
minimize disruption to existing scripts, but necessarily things will be
different in some ways.
.. contents::
:depth: 2
:local:
:backlinks: none
Updating to yt 4.0 from Old Versions (and going back)
-----------------------------------------------------
.. _transitioning-to-4.0:
Converting Old Scripts to Work with yt 4.0
------------------------------------------
After installing yt-4.0, you'll want to change your old scripts in a few key
ways. After accounting for the changes described in the list below, try
running your script. If it still fails, the Python tracebacks
should be fairly descriptive and it may be possible to deduce what remaining
changes are necessary. If you continue to have trouble, please don't hesitate
to :ref:`request help <asking-for-help>`.
The list below is arranged in order of most to least important changes; a short example illustrating the new field-naming conventions follows the list.
* **Fields should be specified as tuples not as strings**
In the past, you could specify fields as strings like ``"density"``, but
with the growth of yt and its many derived fields, there can sometimes
be overlapping field names (e.g., ``("gas", "density")`` and
``("PartType0", "density")``), where yt doesn't know which to use. To remove
any ambiguity, it is now strongly recommended to explicitly specify the full
tuple form of all fields. Just search for all field accesses in your scripts,
and replace strings with tuples (e.g. replace ``"a"`` with
``("gas", "a")``). There is a compatibility rule in yt-4.0 to allow strings
to continue to work until yt-4.1, but you may get unexpected behavior. Any
field specifications that are ambiguous will throw an error in future
versions of yt. See our :ref:`fields`, and :ref:`available field list
<available-fields>` documentation for more information.
* **Use Newer Versions of Python**
The yt-4.0 release will be the final release of yt to support Python 3.6.
Starting with yt-4.1, Python 3.6 will no longer be supported, so please
start using 3.7+ as soon as possible.
* **Particle-based datasets no longer accept n_ref and over_refine_factor**
One of the major upgrades in yt-4 is native treatment of particle-based
datasets. This is in contrast to previous yt behavior which loaded particle-based
datasets as octrees, which could then be treated like grid-based datasets.
In order to define the octrees, users were required to specify ``n_ref``
and ``over_refine_factor`` values at load time. Please remove
any reference to ``n_ref`` and ``over_refine_factor`` in your scripts.
* **Neutral ion fields changing format**
In previous versions, neutral ion fields were specified as
``ELEMENT_number_density`` (e.g., ``H_number_density`` to represent H I
number density). This led to a lot of confusion, because some people assumed
these fields were the total hydrogen density, not neutral hydrogen density.
In yt-4.0, we have resolved this issue by explicitly calling total hydrogen
number density ``H_nuclei_density`` and neutral hydrogen density
``H_p0_number_density`` (where ``p0`` refers to plus 0 charge). This syntax
follows the rule for other ions: H II = ``H_p1`` = ionized hydrogen. Change
your scripts accordingly. See :ref:`species-fields` for more information.
* **Change in energy and momentum field names**
Fields representing energy and momentum quantities are now given names which
reflect their dimensionality. For example, the ``("gas", "kinetic_energy")``
field was actually a field for kinetic energy density, and so it has been
renamed to ``("gas", "kinetic_energy_density")``. The old name still exists
as an alias as of yt v4.0.0, but it will be removed in yt v4.1.0. See
next item below for more information.
Other examples include ``("gas", "specific_thermal_energy")`` for thermal
energy per unit mass, and ``("gas", "momentum_density_x")`` for the x-axis
component of momentum density. See :ref:`efields` for more information.
* **Deprecated field names**
Certain field names are deprecated within yt v4.0.x and removed in
yt v4.1. For example, ``("gas", "kinetic_energy")`` has been renamed to
``("gas", "kinetic_energy_density")``, though the former name has been added
as an alias. Other fields, such as
``("gas", "cylindrical_tangential_velocity_absolute")``, are being removed
entirely. When the deprecated field names are used for the first time in a
session, a warning will be logged, so it is advisable to set
your logging level to ``WARNING`` (``yt.set_log_level("warning")``) at a
minimum to catch these. See :ref:`faq-log-level` for more information on
setting your log level and :ref:`available-fields` to see all available
fields.
* ``cmocean`` **colormaps need prefixing**
yt used to automatically load and register external colormaps from the
``cmocean`` package unprefixed (e.g., ``set_cmap(FIELD, "balance")``). This
became unsustainable with the 3.4 release of Matplotlib, in which colormaps
with colliding names raise errors. The fix is to explicitly import the
``cmocean`` module and prefix ``cmocean`` colormaps (like ``balance``) with
``cmo.`` (e.g., ``cmo.balance``). Note that this solution works with any
yt-supported version of Matplotlib, but is not backward compatible with
earlier versions of yt.
* Position and velocity fields now default to using linear scaling in profiles
and phase plots, whereas previously behavior was determined by whether the
dataset was particle- or grid-based. Efforts have been made to standardize
the treatment of other fields in profile and phase plots for particle and
grid datasets.
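Putting the field-naming items above together, a short sketch of the updated access
patterns; the dataset path is the sample dataset used elsewhere in this documentation,
and the species fields are only present if the dataset carries the corresponding
species information:
.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ad = ds.all_data()

    # explicit field tuples instead of bare strings
    dens = ad[("gas", "density")]

    # renamed energy fields: energy per unit volume vs. per unit mass
    ekin = ad[("gas", "kinetic_energy_density")]  # was ("gas", "kinetic_energy")
    eth = ad[("gas", "specific_thermal_energy")]  # thermal energy per unit mass

    # species fields (only if the dataset provides species information):
    # n_H_total = ad[("gas", "H_nuclei_density")]       # all hydrogen
    # n_H_neutral = ad[("gas", "H_p0_number_density")]  # neutral hydrogen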
Important New Aliases
^^^^^^^^^^^^^^^^^^^^^
With the advent of supporting SPH data at the particle level instead of smoothing
onto an octree (see below), a new alias for both gas particle masses and cell masses
has been created: ``("gas", "mass")``, which aliases to ``("gas", "cell_mass")`` for
grid-based frontends and to the gas particle mass for SPH frontends. In a number of
places in yt, code that used ``("gas", "cell_mass")`` has been replaced by
``("gas", "mass")``. Since the latter is an alias for the former, old scripts which
use ``("gas", "cell_mass")`` should not break.
Deprecations
^^^^^^^^^^^^
The following methods and method arguments are deprecated as of yt 4.0 and will be
removed in yt 4.1:
* :meth:`~yt.visualization.plot_window.PlotWindow.set_window_size` is deprecated
in favor of :meth:`~yt.visualization.plot_container.PlotContainer.set_figure_size`
* :meth:`~yt.visualization.eps_writer.return_cmap` is deprecated in favor of
:meth:`~yt.visualization.eps_writer.return_colormap`
* :meth:`~yt.data_objects.derived_quantities.WeightedVariance` is deprecated in favor
of :meth:`~yt.data_objects.derived_quantities.WeightedStandardDeviation`
* :meth:`~yt.visualization.plot_window.PWViewerMPL.annotate_clear` is deprecated in
favor of :meth:`~yt.visualization.plot_window.PWViewerMPL.clear_annotations`
* :meth:`~yt.visualization.color_maps.add_cmap` is deprecated in favor of
:meth:`~yt.visualization.color_maps.add_colormap`
* :meth:`~yt.loaders.simulation` is deprecated in favor of :meth:`~yt.loaders.load_simulation`
* :meth:`~yt.data_objects.index_subobjects.octree_subset.OctreeSubset.get_vertex_centered_data`
now takes a list of fields as input; passing a single field is deprecated
* manually updating the ``periodicity`` attribute of a :class:`~yt.data_objects.static_output.Dataset` object is deprecated. Use the
:meth:`~yt.data_objects.static_output.Dataset.force_periodicity` method if you need to force periodicity to ``True`` or ``False`` along all axes.
* the :meth:`~yt.data_objects.static_output.Dataset.add_smoothed_particle_field` method is deprecated and already has no effect in yt 4.0.
See :ref:`sph-data`
* the :meth:`~yt.data_objects.static_output.Dataset.add_gradient_fields` method used to accept an ``input_field`` keyword argument, now deprecated
in favor of ``fields``
* :meth:`~yt.data_objects.time_series.DatasetSeries.from_filenames` is deprecated because its functionality is now
included in the basic ``__init__`` method. Use :class:`~yt.data_objects.time_series.DatasetSeries` directly.
* the ``particle_type`` keyword argument of the ``yt.add_field()`` (:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`) and ``ds.add_field()`` (:meth:`~yt.data_objects.static_output.Dataset.add_field`) methods is now deprecated in favor of
the ``sampling_type`` keyword argument.
* the :meth:`~yt.fields.particle_fields.add_volume_weighted_smoothed_field` function is deprecated and already has no effect in yt 4.0.
See :ref:`sph-data`
* the :meth:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree.locate_brick` method is deprecated in favor of, and is now an alias for, :meth:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree.locate_node`
* the :class:`~yt.utilities.exceptions.YTOutputNotIdentified` error is a deprecated alias for :class:`~yt.utilities.exceptions.YTUnidentifiedDataType`
* the ``limits`` argument of :meth:`~yt.visualization.image_writer.write_projection` is deprecated in
favor of ``vmin`` and ``vmax``
* :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_cbar_minorticks` is a deprecated alias for :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_colorbar_minorticks`
* the ``axis`` argument from :meth:`yt.visualization.plot_window.SlicePlot` is a deprecated alias for the ``normal`` argument
* the old configuration file ``ytrc`` is deprecated in favor of the new ``yt.toml`` format. In yt 4.0,
you'll get a warning every time you import yt if you're still using the old configuration file,
which will instruct you to invoke the yt command line interface to convert automatically to the new format.
* the ``load_field_plugins`` parameter is deprecated from the configuration file (note that it is already not used as of yt 4.0)
Cool New Things
---------------
Changes for Working with SPH Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In yt-3.0 most user-facing operations on SPH data were produced by interpolating
SPH data onto a volume-filling octree mesh. Historically this was easier to
implement when support for SPH data was added to yt, as it allowed re-using a lot
of the existing infrastructure. This had some downsides: because the octree was a
single, global object, the memory and CPU overhead of smoothing SPH data onto
the octree could be prohibitive on particle datasets produced by large
simulations. Constructing the octree during the initial indexing phase also
required each particle (albeit only as a 64-bit integer) to be present in memory
simultaneously for a sorting operation, which was memory prohibitive.
Visualizations of slices and projections produced by yt using the default
settings were somewhat blocky, since by default a relatively coarse octree was used
to preserve memory.
In yt-4.0 this has all changed! Over the past two years, Nathan Goldbaum, Meagan
Lang and Matt Turk implemented a new approach for handling I/O of particle data,
based on storing compressed bitmaps containing Morton indices instead of an
in-memory octree. This new capability means that the global octree index is now
no longer necessary to enable I/O chunking and spatial indexing of particle data
in yt.
The new I/O method has opened up a new way of dealing with the particle data and
in particular, SPH data.
.. _sph-data:
Scatter and Gather approach for SPH data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned, previously operations such as slices, projections and arbitrary
grids would smooth the particle data onto the global octree. As this is no
longer used, a different approach was required to visualize the SPH data. Using
SPLASH as inspiration, SPH pixelization operations were created that perform the
smoothing via "scatter" and "gather" approaches. We estimate the
contributions of a particle to a single pixel by considering the point at the
center of the pixel and using the standard SPH smoothing formula. The heavy
lifting in these functions is undertaken by Cython functions.
It is now possible to generate slice plots, projection plots, covering grids and
arbitrary grids of smoothed quantities using these operations. The following
code demonstrates how this could be achieved. The following would use the scatter
method:
.. code-block:: python
import yt
ds = yt.load("snapshot_033/snap_033.0.hdf5")
plot = yt.SlicePlot(ds, 2, ("gas", "density"))
plot.save()
plot = yt.ProjectionPlot(ds, 2, ("gas", "density"))
plot.save()
arbitrary_grid = ds.arbitrary_grid([0.0, 0.0, 0.0], [25, 25, 25], dims=[16, 16, 16])
ag_density = arbitrary_grid[("gas", "density")]
covering_grid = ds.covering_grid(4, 0, 16)
cg_density = covering_grid[("gas", "density")]
In the above example the ``covering_grid`` and the ``arbitrary_grid`` will return
the same data. In fact, these containers are very similar but provide
slightly different APIs.
The above code can be modified to use the gather approach by changing a global
setting for the dataset. This can be achieved with
``ds.sph_smoothing_style = "gather"``; so far, the gather approach is not
supported for projections.
The default behavior for SPH interpolation is that the values are normalized
in line with Eq. 9 in `SPLASH, Price (2009) <https://arxiv.org/pdf/0709.0832.pdf>`_.
This can be disabled with ``ds.use_sph_normalization = False``. This will
disable the normalization for all future interpolations.
The gather approach requires finding nearest neighbors using the KDTree. The
first call will generate a KDTree for the entire dataset which will be stored in
a sidecar file. This will be loaded whenever necessary.
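For example, a brief sketch combining these settings (the snapshot path matches the
earlier example):
.. code-block:: python

    import yt

    ds = yt.load("snapshot_033/snap_033.0.hdf5")

    # switch from the default "scatter" style to "gather"
    # (gather is not currently supported for projections)
    ds.sph_smoothing_style = "gather"

    # disable the SPLASH-style normalization for all subsequent interpolations
    ds.use_sph_normalization = False

    plot = yt.SlicePlot(ds, 2, ("gas", "density"))
    plot.save()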
Off-Axis Projection for SPH Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``OffAxisProjectionPlot`` class now supports SPH projection plots.
The following is a code example:
.. code-block:: python
import yt
ds = yt.load("Data/GadgetDiskGalaxy/snapshot_200.hdf5")
smoothing_field = ("gas", "density")
_, center = ds.find_max(smoothing_field)
sp = ds.sphere(center, (10, "kpc"))
normal_vector = sp.quantities.angular_momentum_vector()
prj = yt.OffAxisProjectionPlot(ds, normal_vector, smoothing_field, center, (20, "kpc"))
prj.save()
Smoothing Data onto an Octree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Whilst the move away from the global octree is a promising one in terms of
performance and dealing with SPH data in a more intuitive manner, it does remove
a useful feature. We are aware that many users will have older scripts which take
advantage of the global octree.
As such, we have added support to smooth SPH data onto an octree when desired by
the users. The new octree is designed to give results consistent with those of
the previous octree, but the new octree takes advantage of the scatter and
gather machinery also added.
.. code-block:: python
import numpy as np
import yt
ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")
left = np.array([0, 0, 0], dtype="float64")
right = np.array([64000, 64000, 64000], dtype="float64")
# generate an octree
octree = ds.octree(left, right, n_ref=64)
# Scatter deposition is the default now, and thus this will print scatter
print(octree.sph_smoothing_style)
# the density will be calculated using SPH scatter
density = octree[("PartType0", "density")]
# this will return the x positions of the octs
x = octree[("index", "x")]
The above code can be modified to use the gather approach by using
``ds.sph_smoothing_style = 'gather'`` before any field access. The octree just
uses the smoothing style and number of neighbors defined by the dataset.
The octree implementation is very simple. It uses a recursive algorithm to build
the tree in a depth-first manner, which is consistent with the results from yt-3. Depth-first
search (DFS) means that the tree starts refining at the root node (this is the
largest node, which contains every particle) and refines as far as possible
along each branch before backtracking.
.. _yt-units-is-now-unyt:
``yt.units`` Is Now a Wrapper for ``unyt``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have extracted ``yt.units`` into ``unyt``, its own library that you can
install separately from yt from ``pypi`` and ``conda-forge``. You can find out
more about using ``unyt`` in `its documentation
<https://unyt.readthedocs.io/en/stable/>`_ and in `a paper in the Journal of
Open Source Software <http://joss.theoj.org/papers/10.21105/joss.00809>`_.
From the perspective of a user of yt, very little should change. While things in
``unyt`` have different names -- for example ``YTArray`` is now called
``unyt_array`` -- we have provided wrappers in ``yt.units`` so imports in your
old scripts should continue to work without issue. If you have any old scripts
that don't work due to issues with how yt is using ``unyt`` or units issues in
general please let us know by `filing an issue on GitHub
<https://github.com/yt-project/yt/issues/new>`_.
Moving ``unyt`` into its own library has made it much easier to add some cool
new features, which we detail below.
``ds.units``
~~~~~~~~~~~~
Each dataset now has a set of unit symbols and physical constants associated
with it, allowing easier customization and smoother interaction, especially in
workflows that need to use code units or cosmological units. The ``ds.units``
object has a large number of attributes corresponding to the names of units and
physical constants. All units known to the dataset will be available, including
custom units. In situations where you might have used ``ds.arr`` or ``ds.quan``
before, you can now safely use ``ds.units``:
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> u = ds.units
>>> ad = ds.all_data()
>>> data = ad['Enzo', 'Density']
>>> data + 12*u.code_mass/u.code_length**3
unyt_array([1.21784693e+01, 1.21789148e+01, 1.21788494e+01, ...,
4.08936836e+04, 5.78006836e+04, 3.97766906e+05], 'code_mass/code_length**3')
>>> data + .0001*u.mh/u.cm**3
unyt_array([6.07964513e+01, 6.07968968e+01, 6.07968314e+01, ...,
4.09423016e+04, 5.78493016e+04, 3.97815524e+05], 'code_mass/code_length**3')
Automatic Unit Simplification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Often an operation will result in a unit expression that can be
simplified by cancelling pairs of factors. Before yt 4.0, these pairs of factors
were only cancelled if the same unit appeared in both the numerator and
denominator of an expression. Now, all pairs of factors that have inverse
dimensions are cancelled, and the appropriate scaling factor is incorporated
into the result. For example, ``Hz`` and ``s`` will now appropriately be recognized
as inverses:
>>> from yt.units import Hz, s
>>> frequency = 60*Hz
>>> time = 60*s
>>> frequency*time
unyt_quantity(3600, '(dimensionless)')
Similar simplifications will happen even if units aren't reciprocals of each
other; for example, here ``hour`` and ``minute`` automatically cancel each other:
>>> from yt.units import erg, minute, hour
>>> power = [20, 40, 80] * erg / minute
>>> elapsed_time = 3*hour
>>> print(power*elapsed_time)
[ 3600. 7200. 14400.] erg
Alternate Unit Name Resolution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It's now possible to use a number of common alternate spellings for unit names
and if ``unyt`` knows about the alternate spelling it will automatically resolve
alternate spellings to a canonical name. For example, it's now possible to do
things like this:
>>> import yt.units as u
>>> d = 20*u.mile
>>> d.to('km')
unyt_quantity(32.18688, 'km')
>>> d.to('kilometer')
unyt_quantity(32.18688, 'km')
>>> d.to('kilometre')
unyt_quantity(32.18688, 'km')
You can also use alternate unit names in more complex algebraic unit expressions:
>>> v = d / (20*u.minute)
>>> v.to('kilometre/hour')
unyt_quantity(96.56064, 'km/hr')
In this example the common British spelling ``"kilometre"`` is resolved to
``"km"`` and ``"hour"`` is resolved to ``"hr"``.
Field-Specific Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can now set configuration values on a per-field basis. For instance, this
means that if you always want a particular colormap associated with a particular
field, you can do so!
This is documented under :ref:`per-field-plotconfig`, and was added in `PR
1931 <https://github.com/yt-project/yt/pull/1931>`_.
New Method for Accessing Sample Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There is now a function named ``load_sample()`` that allows the user to
automatically load sample data from the yt hub in a local yt session.
Previously, users would have to explicitly download these data directly from
`https://yt-project.org/data <https://yt-project.org/data>`_, unpack them,
and load them into a yt session, but now this occurs from within a Python
session. For more information see:
:ref:`Loading Sample Data <loading-sample-data>`
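For instance, a minimal sketch (``"IsolatedGalaxy"`` is one of the sample datasets
listed on the data page):
.. code-block:: python

    import yt

    # downloads and caches the sample dataset on first use, then loads it
    ds = yt.load_sample("IsolatedGalaxy")
    print(ds.domain_width)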
Some Widgets
^^^^^^^^^^^^
In yt, we now have some simple display wrappers for objects if you are running
in a Jupyter environment with the `ipywidgets
<https://ipywidgets.readthedocs.io/>`_ package installed. For instance, the
``ds.fields`` object will now display field information in an interactive
widget, and three-element unyt arrays (such as ``ds.domain_left_edge``) will be
displayed interactively as well.
The package `widgyts <https://widgyts.readthedocs.io>`_ provides interactive,
yt-specific visualization of slices, projections, and additional dataset display
information.
New External Packages
^^^^^^^^^^^^^^^^^^^^^
As noted above (:ref:`yt-units-is-now-unyt`), unyt has been extracted from
yt, and we now use it as an external library. In addition, other parts of yt
such as :ref:`interactive_data_visualization` have been extracted, and we are
working toward a more modular approach for things such as Jupyter widgets and other "value-added" integrations.
.. _astropy-integrations:
AstroPy Integrations
====================
yt enables a number of integrations with the AstroPy package. These
are listed below, with more detailed descriptions available at the
linked documentation pages.
Round-Trip Unit Conversions Between yt and AstroPy
--------------------------------------------------
AstroPy has a `symbolic units implementation <https://docs.astropy.org/en/stable/units/>`_
similar to that in yt. For this reason, we have implemented "round-trip"
conversions between :class:`~yt.units.yt_array.YTArray` objects
and AstroPy's :class:`~astropy.units.Quantity` objects. These are implemented
in the :meth:`~yt.units.yt_array.YTArray.from_astropy` and
:meth:`~yt.units.yt_array.YTArray.to_astropy` methods.
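A minimal sketch of the round trip (the array values are arbitrary):
.. code-block:: python

    import yt
    from astropy.units import Quantity

    q = Quantity([1.0, 2.0, 3.0], "km/s")

    a = yt.YTArray.from_astropy(q)  # AstroPy Quantity -> yt YTArray
    q2 = a.to_astropy()             # ... and back to an AstroPy Quantity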
FITS Image File Reading and Writing
-----------------------------------
Reading and writing FITS files is supported in yt using
`AstroPy's FITS file handling. <https://docs.astropy.org/en/stable/io/fits/>`_
yt has basic support for reading two and three-dimensional image data from FITS
files. Some limited ability to parse certain types of data (e.g., spectral cubes,
images with sky coordinates, images written using the
:class:`~yt.visualization.fits_image.FITSImageData` class described below) is
possible. See :ref:`loading-fits-data` for more information.
Fixed-resolution two-dimensional images generated from datasets using yt (such as
slices or projections) and fixed-resolution three-dimensional grids can be written
to FITS files using yt's :class:`~yt.visualization.fits_image.FITSImageData` class
and its subclasses. Multiple images can be combined into a single file, operations
can be performed on the images and their coordinates, etc. See :ref:`writing_fits_images`
for more information.
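As a quick sketch of writing a slice to a FITS file (assuming the ``FITSSlice``
helper from ``yt.visualization.fits_image`` and the sample dataset used elsewhere
in these docs):
.. code-block:: python

    import yt
    from yt.visualization.fits_image import FITSSlice

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # a fixed-resolution slice of the density field, written to a FITS file
    fits_slc = FITSSlice(ds, "z", ("gas", "density"), width=(500.0, "kpc"))
    fits_slc.writeto("galaxy_density_slice.fits", overwrite=True)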
Converting Field Container and 1D Profile Data to AstroPy Tables
----------------------------------------------------------------
Data in field containers, such as spheres, rectangular regions, rays,
cylinders, etc., are represented as 1D YTArrays. A set of these arrays
can then be exported to an
`AstroPy Table <http://docs.astropy.org/en/stable/table/>`_ object,
specifically a
`QTable <http://docs.astropy.org/en/stable/table/mixin_columns.html#quantity-and-qtable>`_.
``QTable`` is unit-aware, and can be manipulated in a number of ways
and written to disk in several formats, including ASCII text or FITS
files.
Similarly, 1D profile objects can also be exported to AstroPy
``QTable``, optionally writing all of the profile bins or only the ones
which are used. For more details, see :ref:`profile-astropy-export`.
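A rough sketch of what this export looks like, assuming the container method is
named ``to_astropy_table`` (check the linked documentation for the exact signature):
.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sp = ds.sphere("c", (100.0, "kpc"))

    # export selected fields from the sphere to a unit-aware AstroPy QTable
    t = sp.to_astropy_table([("gas", "density"), ("gas", "temperature")])
    t.write("sphere_fields.fits", overwrite=True)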
.. _fields:
Fields in yt
============
Fields are spatially-dependent quantities associated with a parent dataset.
Examples of fields are gas density, gas temperature, particle mass, etc.
The fundamental way to query data in yt is to access a field, either in its raw
form (by examining a data container) or a processed form (derived quantities,
projections, aggregations, and so on). "Field" is something of a loaded word,
as it can refer to quantities that are defined everywhere, which we refer to as
"mesh" or "fluid" fields, or discrete points that populate the domain,
traditionally thought of as "particle" fields. The word "particle" here is
gradually falling out of favor, as these discrete fields can be any type of
sparsely populated data.
If you are developing a frontend or need to customize what yt thinks of as the
fields for a given dataset, see both :ref:`per-field-plotconfig` and
:ref:`per-field-config` for information on how to change the display units,
on-disk units, display name, etc.
.. _what-are-fields:
What are fields?
----------------
Fields in yt are denoted by a two-element string tuple, of the form ``(field_type,
field_name)``. The first element, the "field type" is a category for a
field. Possible field types used in yt include ``'gas'`` (for fluid fields
defined on a mesh) or ``'io'`` (for fields defined at particle locations). Field
types can also correspond to distinct particle or fluid types in a single
simulation. For example, a plasma physics simulation using the Particle in Cell
method might have particle types corresponding to ``'electrons'`` and ``'ions'``. See
:ref:`known-field-types` below for more info about field types in yt.
The second element of field tuples, the ``field_name``, denotes the specific field
to select, given the field type. Possible field names include ``'density'``,
``'velocity_x'`` or ``'pressure'`` --- these three fields are examples of field names
that might be used for a fluid defined on a mesh. Examples of particle fields
include ``'particle_mass'``, ``'particle_position'`` or ``'particle_velocity_x'``. In
general, particle field names are prefixed by ``particle_``, which makes it easy
to distinguish between a particle field or a mesh field when no field type is
provided.
What fields are available?
--------------------------
We provide a full list of fields that yt recognizes by default at
:ref:`field-list`. If you want to create additional custom derived fields,
see :ref:`creating-derived-fields`.
Every dataset has an attribute, ``ds.fields``. This attribute possesses
attributes itself, each of which is a "field type," and each field type has as
its attributes the fields themselves. When one of these is printed, it returns
information about the field and things like units and so on. You can use this
for tab-completing as well as easier access to information.
Additionally, if you have `ipywidgets
<https://ipywidgets.readthedocs.io/en/stable/>`_ installed and are in a `Jupyter
environment <https://jupyter.org/>`_, you can view the rich representation of
the fields (including source code) by either typing ``ds.fields`` as the last
item in a cell or by calling ``display(ds.fields)``. The resulting output will
have tabs and source:
.. image:: _images/fields_ipywidget.png
:scale: 50%
As an example, you might browse the available fields like so:
.. code-block:: python
print(dir(ds.fields))
print(dir(ds.fields.gas))
print(ds.fields.gas.density)
On an Enzo dataset, the result from the final command would look something like
this::
Alias Field for ('enzo', 'Density') ('gas', 'density'): (units: 'g/cm**3')
You can use this to easily explore available fields, particularly through
tab-completion in Jupyter/IPython.
It's also possible to iterate over the list of fields associated with each
field type. For example, to print all of the ``'gas'`` fields, one might do:
.. code-block:: python
for field in ds.fields.gas:
print(field)
You can also check if a given field is associated with a field type using
standard python syntax:
.. code-block:: python
# these examples evaluate to True for a dataset that has ('gas', 'density')
"density" in ds.fields.gas
("gas", "density") in ds.fields.gas
ds.fields.gas.density in ds.fields.gas
For a more programmatic method of accessing fields, you can utilize the
``ds.field_list``, ``ds.derived_field_list`` and some accessor methods to gain
information about fields. The full list of fields available for a dataset can
be found as the attribute ``field_list`` for native, on-disk fields and
``derived_field_list`` for derived fields (``derived_field_list`` is a superset
of ``field_list``). You can view these lists by examining a dataset like this:
.. code-block:: python
ds = yt.load("my_data")
print(ds.field_list)
print(ds.derived_field_list)
By using the ``field_info`` container, one can access information about a given
field, like its default units or the source code for it.
.. code-block:: python
ds = yt.load("my_data")
ds.index
print(ds.field_info["gas", "pressure"].get_units())
print(ds.field_info["gas", "pressure"].get_source())
Using fields to access data
---------------------------
.. warning::
These *specific* operations will load the entire field -- which can be
extremely memory intensive with large datasets! If you are looking to
compute quantities, see :ref:`Data-objects` for methods for computing
aggregates, averages, subsets, regriddings, etc.
The primary *use* of fields in yt is to access data from a dataset. For example,
if I want to use a data object (see :ref:`Data-objects` for more detail about
data objects) to access the ``('gas', 'density')`` field, one can do any of the
following:
.. code-block:: python
ad = ds.all_data()
# just a field name
density = ad["density"]
# field tuple with no parentheses
density = ad["gas", "density"]
# full field tuple
density = ad[("gas", "density")]
# through the ds.fields object
density = ad[ds.fields.gas.density]
The first data access example is the simplest. In that example, the field type
is inferred from the name of the field. However, an error will be raised if there are multiple
field names that could be meant by this simple string access. The next two examples
use the field type explicitly; this might be necessary if there is more than one field
type with a ``'density'`` field defined in the same dataset. The third example is slightly
more verbose but is syntactically identical to the second example due to the way
indexing works in the Python language.
The final example uses the ``ds.fields`` object described above. This way of
accessing fields lends itself to interactive use, especially if you make heavy
use of IPython's tab completion features. Any of these ways of denoting the
``('gas', 'density')`` field can be used when supplying a field name to a yt
data object, analysis routines, or plotting and visualization function.
Accessing Fields without a Field Type
-------------------------------------
In previous versions of yt, there was a single mechanism of accessing fields on
a data container -- by their name, which was mandated to be a single string, and
which often varied between different code frontends. yt 3.0 allows for datasets
containing multiple different types of fluid fields, mesh fields, and particles
(with overlapping or disjoint lists of fields). However, to preserve backward
compatibility and make interactive use simpler, yt 4.1 and newer will still
accept field names given as a string *if and only if they match exactly one
existing field*.
As an example, we may be in a situation where we have multiple types of particles
which possess the ``'particle_position'`` field. In the case where a data
container, here called ``ad`` (short for "all data"), contains such a field, we can
specify which particular particle type we want to query:
.. code-block:: python
print(ad["dark_matter", "particle_position"])
print(ad["stars", "particle_position"])
print(ad["black_holes", "particle_position"])
Each of these three fields may have different sizes. In order to enable
falling back on asking only for a field by the name, yt will use the most
recently requested field type for subsequent queries. (By default, if no field
has been queried, it will look for the special field ``'all'``, which
concatenates all particle types.) For example, if I were to then query for the
velocity:
.. code-block:: python
print(ad["particle_velocity"])
it would select ``black_holes`` as the field type, since the last field accessed
used that field type.
The same operations work for fluid and mesh fields. As an example, in some
cosmology simulations, we may want to examine the mass of particles in a region
versus the mass of gas. We can do so by examining the special "deposit" field
types (described below) versus the gas fields:
.. code-block:: python
print(ad["deposit", "dark_matter_density"] / ad["gas", "density"])
The ``'deposit'`` field type is a mesh field, so it will have the same shape as
the gas density. If we weren't using ``'deposit'``, and instead directly
querying a particle field, this *wouldn't* work, as they are different shapes.
This is the primary difference, in practice, between mesh and particle fields
-- they will be different shapes and so cannot be directly compared without
translating one to the other, typically through a "deposition" or "smoothing"
step.
How are fields implemented?
---------------------------
There are two classes of fields in yt. The first are those fields that exist
external to yt, which are immutable and can be queried -- most commonly, these
are fields that exist on disk. These will often be returned in units that are
not in a known, external unit system (except possibly by design, on the part of
the code that wrote the data), and yt will take every effort possible to use
the names by which they are referred to by the data producer. The default
field type for mesh fields that are "on-disk" is the name of the code frontend.
(For example, ``'art'``, ``'enzo'``, ``'pyne'``, and so on.) The default name for
particle fields, if they do not have a particle type affiliated with them, is
``'io'``.
The second class of field is the "derived field." These are fields that are
functionally defined, either *ab initio* or as a transformation or combination
of other fields. For example, when dealing with simulation codes, often the
fields that are evolved and output to disk are not the fields that are the most
relevant to researchers. Rather than examining the internal gas energy, it is
more convenient to think of the temperature. By applying one or multiple
functions to on-disk quantities, yt can construct new derived fields from them.
Derived fields do not always have to relate to the data found on disk; special
fields such as ``'x'``, ``'y'``, ``'phi'`` and ``'dz'`` all relate exclusively to the
geometry of the mesh, and provide information about the mesh that can be used
elsewhere for further transformations.
For more information, see :ref:`creating-derived-fields`.
There is a third, borderline class of field in yt, as well. This is the
"alias" type, where a field on disk (for example, (``'*frontend*''``, ``'Density'``)) is
aliased into an internal yt-name (for example, (``'gas'``, ``'density'``)). The
aliasing process allows universally-defined derived fields to take advantage of
internal names, and it also provides an easy way to address what units something
should be returned in. If an aliased field is requested (and aliased fields
will always be lowercase, with underscores separating words) it will be returned
in the units specified by the unit system of the dataset, whereas if the
frontend-specific field is requested, it will not undergo any unit conversions
from its natural units. (This rule is occasionally violated for fields which
are mesh-dependent, specifically particle masses in some cosmology codes.)
.. _known-field-types:
Field types known to yt
-----------------------
Recall that fields are formally accessed in two parts: ``('*field type*',
'*field name*')``. Here we describe the different field types you will encounter:
* frontend-name -- Mesh or fluid fields that exist on-disk default to having
the name of the frontend as their type name (e.g., ``'enzo'``, ``'flash'``,
``'pyne'`` and so on). The units of these types are whatever units are
designated by the source frontend when it writes the data.
* ``'index'`` -- This field type refers to characteristics of the mesh, whether
that mesh is defined by the simulation or internally by an octree indexing
of particle data. A few handy fields are ``'x'``, ``'y'``, ``'z'``, ``'theta'``,
``'phi'``, ``'radius'``, ``'dx'``, ``'dy'``, ``'dz'`` and so on. Default units
are in CGS.
* ``'gas'`` -- This is the usual default for simulation frontends for fluid
types. These fields are typically aliased to the frontend-specific mesh
fields for grid-based codes or to the deposit fields for particle-based
codes. Default units are in the unit system of the dataset.
* particle type -- These are particle fields that exist on-disk as written
by individual frontends. If the frontend designates names for these particles
(i.e. particle type) those names are the field types.
Additionally, any particle unions or filters will be accessible as field
types. Examples of particle types are ``'Stars'``, ``'DM'``, ``'io'``, etc.
Like the front-end specific mesh or fluid fields, the units of these fields
are whatever was designated by the source frontend when written to disk.
* ``'io'`` -- If a data frontend does not have a set of multiple particle types,
this is the default for all particles.
* ``'all'`` and ``'nbody'`` -- These are special particle field types that represent a
concatenation of several particle field types using :ref:`particle-unions`.
``'all'`` contains every base particle type, while ``'nbody'`` contains only the ones
for which a ``'particle_mass'`` field is defined.
* ``'deposit'`` -- This field type refers to the deposition of particles
(discrete data) onto a mesh, typically to compute smoothing kernels, local
density estimates, counts, and the like. See :ref:`deposited-particle-fields`
for more information.
While it is best to be explicit and access fields by their full names
(i.e. ``('*field type*', '*field name*')``), yt provides an abbreviated
interface for accessing common fields (i.e. ``'*field name*'``). In the abbreviated
case, yt will assume you want the last *field type* accessed. If you
haven't previously accessed a *field type*, it will default to *field type* =
``'all'`` in the case of particle fields and *field type* = ``'gas'`` in the
case of mesh fields.
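For example, a minimal sketch of the two access styles (``"my_data"`` is a placeholder path; any dataset with fluid fields will do, and ``ds`` is assumed to be that loaded dataset):

.. code-block:: python

    import yt

    ds = yt.load("my_data")  # placeholder path
    ad = ds.all_data()

    # explicit access -- always unambiguous
    rho = ad["gas", "density"]

    # abbreviated access -- yt infers the field type, here resolving to ("gas", "density")
    rho_short = ad["density"]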
Field Plugins
-------------
Derived fields are organized via plugins. Inside yt are a number of field
plugins, which take information about fields in a dataset and then construct
derived fields on top of them. This allows them to take into account
variations in naming system, units, data representations, and most importantly,
allows only the fields that are relevant to be added. This system will be
expanded in future versions to enable much deeper semantic awareness of the
data types being analyzed by yt.
The field plugin system works in this order:
* Available, inherent fields are identified by yt
* The list of enabled field plugins is iterated over. Each is called, and new
derived fields are added as relevant.
* Any fields which are not available, or which throw errors, are discarded.
* Remaining fields are added to the list of derived fields available for a
dataset
* Dependencies for every derived field are identified, to enable data
preloading
Field plugins can be loaded dynamically, although at present this is not
particularly useful. Plans for extending field plugins to dynamically load, to
enable simple definition of common types (divergence, curl, etc), and to
more verbosely describe available fields, have been put in place for future
versions.
The field plugins currently available include:
* Angular momentum fields for particles and fluids
* Astrophysical fields, such as those related to cosmology
* Vector fields for fluid fields, such as gradients and divergences
* Particle vector fields
* Magnetic field-related fields
* Species fields, such as for chemistry species (yt can recognize the entire
periodic table in field names and construct ionization fields as need be)
Field Labeling
--------------
By default yt formats field labels nicely for plots. To adjust the chosen
format you can use the ``ds.set_field_label_format`` method like so:
.. code-block:: python
ds = yt.load("my_data")
ds.set_field_label_format("ionization_label", "plus_minus")
The first argument accepts a ``format_property``, or specific aspect of the labeling, and the
second sets the corresponding ``value``. Currently available format properties are
* ``ionization_label``: sets how the ionization states of ions are labeled. Available
options are ``"plus_minus"`` and ``"roman_numeral"``
.. _efields:
Energy and Momentum Fields
--------------------------
Fields in yt representing energy and momentum quantities follow a specific
naming convention (as of yt-4.x). In hydrodynamic simulations, the relevant
quantities are often energy per unit mass or volume, momentum, or momentum
density. To distinguish clearly between the different types of fields, the
following naming convention is adhered to:
* Energy per unit mass fields are named as ``'specific_*_energy'``
* Energy per unit volume fields are named as ``'*_energy_density'``
* Momentum fields should be named ``'momentum_density_*'`` for momentum per
unit volume, or ``'momentum_*'`` for momentum, where the ``*`` indicates
one of three coordinate axes in any supported coordinate system.
For example, in the case of kinetic energy, the fields should be
``'kinetic_energy_density'`` and ``'specific_kinetic_energy'``.
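As a quick sketch of what this looks like in practice (assuming a dataset ``ds`` has already been loaded and that these kinetic energy fields are defined for it):

.. code-block:: python

    ad = ds.all_data()

    # energy per unit volume
    print(ad["gas", "kinetic_energy_density"].units)

    # energy per unit mass
    print(ad["gas", "specific_kinetic_energy"].units)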
In versions of yt previous to v4.0.0, these conventions were not adopted, and so
energy fields in particular could be ambiguous with respect to units. For
example, the ``'kinetic_energy'`` field was actually kinetic energy per unit
volume, whereas the ``'thermal_energy'`` field, usually defined by various
frontends, was typically thermal energy per unit mass. The above scheme
rectifies these problems. In the yt v4.0.x series, the previous field names were still
mapped to the current field naming scheme with a deprecation warning; these
aliases were removed in yt v4.1.0.
.. _bfields:
Magnetic Fields
---------------
Magnetic fields require special handling, because their dimensions are different in
different systems of units, in particular between the CGS and MKS (SI) systems of units.
Superficially, it would appear that they are in the same dimensions, since the units
of the magnetic field in the CGS and MKS system are gauss (:math:`\rm{G}`) and tesla
(:math:`\rm{T}`), respectively, and numerically :math:`1~\rm{G} = 10^{-4}~\rm{T}`. However,
if we examine the base units, we find that they do indeed have different dimensions:
.. math::
\rm{1~G = 1~\frac{\sqrt{g}}{\sqrt{cm}\cdot{s}}} \\
\rm{1~T = 1~\frac{kg}{A\cdot{s^2}}}
It is easier to see the difference between the dimensionality of the magnetic field in the two
systems in terms of the definition of the magnetic pressure and the Alfvén speed:
.. math::
p_B = \frac{B^2}{8\pi}~\rm{(cgs)} \\
p_B = \frac{B^2}{2\mu_0}~\rm{(MKS)}
.. math::
v_A = \frac{B}{\sqrt{4\pi\rho}}~\rm{(cgs)} \\
v_A = \frac{B}{\sqrt{\mu_0\rho}}~\rm{(MKS)}
where :math:`\mu_0 = 4\pi \times 10^{-7}~\rm{N/A^2}` is the vacuum permeability. This
different normalization in the definition of the magnetic field may show up in other
relevant quantities as well.
For certain frontends, a third definition of the magnetic field and the magnetic
pressure may be useful. In many MHD simulations and in some physics areas (such
as particle physics/GR) it is more common to use the "Lorentz-Heaviside" convention,
which results in:
.. math::
p_B = \frac{B^2}{2} \\
v_A = \frac{B}{\sqrt{\rho}}
Using this convention is currently only available for :ref:`Athena<loading-athena-data>`
and :ref:`Athena++<loading-athena-pp-data>` datasets, though it will likely be available
for more datasets in the future.
yt automatically detects on a per-frontend basis what units the magnetic field should be in,
and allows conversion between magnetic field units in the different unit systems as well. To
determine how to set up special magnetic field handling when designing a new frontend, check out
:ref:`bfields-frontend`.
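As an illustrative sketch (assuming a dataset ``ds`` that defines magnetic fields; ``('gas', 'magnetic_field_strength')`` is the usual derived field name for the field magnitude), the data can be converted between gauss and tesla like any other unit conversion:

.. code-block:: python

    ad = ds.all_data()

    b = ad["gas", "magnetic_field_strength"]
    print(b.to("gauss"))
    print(b.to("T"))  # numerically 1 G = 1e-4 T; the dimensional bookkeeping is handled for you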
.. _species-fields:
Species Fields
--------------
For many types of data, yt is able to detect different chemical elements and molecules
within the dataset, as well as their abundances and ionization states. Examples include:
* CO (Carbon monoxide)
* Co (Cobalt)
* OVI (Oxygen ionized five times)
* H\ :math:`_2^{+}` (Molecular Hydrogen ionized once)
* H\ :math:`^{-}` (Hydrogen atom with an additional electron)
The naming scheme for the fields starts with prefixes in the form ``MM[_[mp][NN]]``. ``MM``
is the molecule, defined as a concatenation of atomic symbols and numbers, with no spaces or
underscores. The second sequence is only required if ionization states are present in the
dataset, and is of the form ``p`` and ``m`` to indicate "plus" or "minus" respectively,
followed by the number. If a given species has no ionization states given, the prefix is
simply ``MM``.
For the examples above, the prefixes would be:
* ``CO``
* ``Co``
* ``O_p5``
* ``H2_p1``
* ``H_m1``
The name ``El`` is used for electron fields, as it is unambiguous and will not be
utilized elsewhere. Neutral ionic species (e.g. H I, O I) are represented as ``MM_p0``.
Additionally, the isotope of :math:`^2`H will be included as ``D``.
Finally, in those frontends which are single-fluid, the following fields are defined
for each species:
* ``MM[_[mp][NN]]_fraction``
* ``MM[_[mp][NN]]_number_density``
* ``MM[_[mp][NN]]_density``
* ``MM[_[mp][NN]]_mass``
To refer to the number density of the entirety of a single atom or molecule (regardless
of its ionization state), please use the ``MM_nuclei_density`` fields.
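For example, a short sketch of querying a few of these fields (assuming a loaded dataset ``ds`` in which yt detects hydrogen and oxygen species):

.. code-block:: python

    ad = ds.all_data()

    print(ad["gas", "H_p1_number_density"])  # ionized hydrogen (H II)
    print(ad["gas", "O_p5_number_density"])  # O VI
    print(ad["gas", "H_nuclei_density"])  # all hydrogen, regardless of ionization state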
Many datasets do not have species defined, but there may be an underlying assumption
of primordial abundances of H and He which are either fully ionized or fully neutral.
This will also determine the value of the mean molecular weight of the gas, which
will determine the value of the temperature if derived from another quantity like the
pressure or thermal energy. To allow for these possibilities, there is a keyword
argument ``default_species_fields`` which can be passed to :func:`~yt.loaders.load`:
.. code-block:: python
import yt
ds = yt.load(
"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150", default_species_fields="ionized"
)
By default, the value of this optional argument is ``None``, which will not initialize
any default species fields. If the ``default_species_fields`` argument is not set to
``None``, then the following fields are defined:
* ``H_nuclei_density``
* ``He_nuclei_density``
More specifically, if ``default_species_fields="ionized"``, then these
additional fields are defined:
* ``H_p1_number_density`` (Ionized hydrogen: equal to the value of ``H_nuclei_density``)
* ``He_p2_number_density`` (Doubly ionized helium: equal to the value of ``He_nuclei_density``)
* ``El_number_density`` (Free electrons: assuming full ionization)
Whereas if ``default_species_fields="neutral"``, then these additional
fields are defined:
* ``H_p0_number_density`` (Neutral hydrogen: equal to the value of ``H_nuclei_density``)
* ``He_p0_number_density`` (Neutral helium: equal to the value of ``He_nuclei_density``)
In this latter case, because the gas is neutral, ``El_number_density`` is not defined.
The ``mean_molecular_weight`` field will be constructed from the abundances of the elements
in the dataset. If no element or molecule fields are defined, the value of this field
is determined by the value of ``default_species_fields``. If it is set to ``None`` or
``"ionized"``, the ``mean_molecular_weight`` field is set to :math:`\mu \approx 0.6`,
whereas if ``default_species_fields`` is set to ``"neutral"``, then the
``mean_molecular_weight`` field is set to :math:`\mu \approx 1.14`. Some frontends do
not directly store the gas temperature in their datasets, in which case it must be
computed from the pressure and/or thermal energy as well as the mean molecular weight,
so check this carefully!
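Continuing the example above (a sketch; the exact values depend on the dataset and on ``default_species_fields``), one can check the resulting fields directly:

.. code-block:: python

    ad = ds.all_data()

    print(ad["gas", "mean_molecular_weight"])  # approximately 0.6 for fully ionized primordial gas
    print(ad["gas", "El_number_density"])  # defined because default_species_fields="ionized"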
Particle Fields
---------------
Naturally, particle fields contain properties of particles rather than
grid cells. By examining the particle field in detail, you can see that
each element of the field array represents a single particle, whereas in mesh
fields each element represents a single mesh cell. This means that for the
most part, operations cannot operate on both particle fields and mesh fields
simultaneously in the same way, like filters (see :ref:`filtering-data`).
However, many of the particle fields have corresponding mesh fields that
can be populated by "depositing" the particle values onto a yt grid as
described below.
.. _field_parameters:
Field Parameters
----------------
Certain fields require external information in order to be calculated. For
example, the radius field has to be defined based on some point of reference
and the radial velocity field needs to know the bulk velocity of the data object
so that it can be subtracted. This information is passed into a field function
by setting field parameters, which are user-specified data that can be associated
with a data object. The
:meth:`~yt.data_objects.data_containers.YTDataContainer.set_field_parameter`
and
:meth:`~yt.data_objects.data_containers.YTDataContainer.get_field_parameter`
functions are
used to set and retrieve field parameter values for a given data object. In the
cases above, the field parameters are ``center`` and ``bulk_velocity`` respectively --
the two most commonly used field parameters.
.. code-block:: python
ds = yt.load("my_data")
ad = ds.all_data()
ad.set_field_parameter("wickets", 13)
print(ad.get_field_parameter("wickets"))
If a field parameter is not set, ``get_field_parameter`` will return None.
Within a field function, these can then be retrieved and used in the same way.
.. code-block:: python
def _wicket_density(field, data):
n_wickets = data.get_field_parameter("wickets")
if n_wickets is None:
# use a default if unset
n_wickets = 88
return data["gas", "density"] * n_wickets
For a practical application of this, see :ref:`cookbook-radial-velocity`.
.. _gradient_fields:
Gradient Fields
---------------
yt provides a way to compute gradients of spatial fields using the
:meth:`~yt.data_objects.static_output.Dataset.add_gradient_fields`
method. If you have a spatially-based field such as density or temperature,
and want to calculate the gradient of that field, you can do it like so:
.. code-block:: python
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150")
grad_fields = ds.add_gradient_fields(("gas", "temperature"))
where ``grad_fields`` will be a list of new field names that can be used
in calculations, representing the 3 different components of the field and the magnitude
of the gradient, e.g., ``"temperature_gradient_x"``, ``"temperature_gradient_y"``,
``"temperature_gradient_z"``, and ``"temperature_gradient_magnitude"``. To see an example
of how to create and use these fields, see :ref:`cookbook-complicated-derived-fields`.
.. _relative_fields:
Relative Vector Fields
----------------------
yt makes use of "relative" fields for certain vector fields, which are fields
which have been defined relative to a particular origin in the space of that
field. For example, relative particle positions can be specified relative to
a center coordinate, and relative velocities can be specified relative to a
bulk velocity. These origin points are specified by setting field parameters
as detailed below (see :ref:`field_parameters` for more information).
The relative fields which are currently supported for gas fields are:
* ``('gas', 'relative_velocity_x')``, defined by setting the
``'bulk_velocity'`` field parameter
* ``('gas', 'relative_magnetic_field_x')``, defined by setting the
``'bulk_magnetic_field'`` field parameter
Note that the ``'_x'`` fields shown above are representative; the corresponding
``'_y'`` and ``'_z'`` components are defined as well.
For particle fields, for a given particle type ``ptype``, the relative
fields which are supported are:
* ``(*ptype*, 'relative_particle_position')``, defined by setting the
``'center'`` field parameter
* ``(*ptype*, 'relative_particle_velocity')``, defined by setting the
``'bulk_velocity'`` field parameter
* ``(*ptype*, 'relative_particle_position_x')``, defined by setting the
``'center'`` field parameter
* ``(*ptype*, 'relative_particle_velocity_x')``, defined by setting the
``'bulk_velocity'`` field parameter
These fields are used when defining magnitude fields, line-of-sight fields,
etc. The ``'bulk_*'`` field parameters are ``[0.0, 0.0, 0.0]`` by default,
and the ``'center'`` field parameter depends on the data container in use.
There is currently no mechanism to create new relative fields, but one may be
added at a later time.
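As an illustrative sketch (assuming a loaded dataset ``ds``; the sphere radius is an arbitrary choice), a bulk velocity can be computed, set as a field parameter, and the relative field queried afterwards:

.. code-block:: python

    sp = ds.sphere("c", (100.0, "kpc"))

    # subtract the mean velocity of the sphere from the velocity field
    bulk = sp.quantities.bulk_velocity()
    sp.set_field_parameter("bulk_velocity", bulk)

    print(sp["gas", "relative_velocity_x"])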
.. _los_fields:
Line of Sight Fields
--------------------
In astrophysics applications, one often wants to know the component of a vector
field along a given line of sight. If you are doing a projection of a vector
field along an axis, or just want to obtain the values of a vector field
component along an axis, you can use a line-of-sight field. For projections,
this will be handled automatically:
.. code-block:: python
prj = yt.ProjectionPlot(
ds,
"z",
fields=("gas", "velocity_los"),
weight_field=("gas", "density"),
)
Because the axis is ``'z'``, this will give you the same result as if you had
projected the ``'velocity_z'`` field. This also works for off-axis projections,
using an arbitrary normal vector:
.. code-block:: python
prj = yt.ProjectionPlot(
ds,
[0.1, -0.2, 0.3],
fields=("gas", "velocity_los"),
weight_field=("gas", "density"),
)
This shows that the projection axis can be along a principal axis of the domain
or an arbitrary off-axis 3-vector (which will be automatically normalized). If
you want to examine a line-of-sight vector within a 3-D data object, set the
``'axis'`` field parameter:
.. code-block:: python
dd = ds.all_data()
# Set to one of [0, 1, 2] for ["x", "y", "z"] axes
dd.set_field_parameter("axis", 1)
print(dd["gas", "magnetic_field_los"])
# Set to a three-vector for an off-axis component
dd.set_field_parameter("axis", [0.3, 0.4, -0.7])
print(dd["gas", "velocity_los"])
.. warning::
If you need to change the axis of the line of sight on the *same* data container
(sphere, box, cylinder, or whatever), you will need to delete the field using
``del dd['velocity_los']`` and re-generate it.
At this time, this functionality is enabled for the velocity and magnetic vector
fields, ``('gas', 'velocity_los')`` and ``('gas', 'magnetic_field_los')``. The
following fields built into yt make use of these line-of-sight fields:
* ``('gas', 'sz_kinetic')`` uses ``('gas', 'velocity_los')``
* ``('gas', 'rotation_measure')`` uses ``('gas', 'magnetic_field_los')``
General Particle Fields
-----------------------
Every particle will have both a ``'particle_position'`` and a ``'particle_velocity'``
field that track its position and velocity (respectively) in code units.
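A short sketch of accessing these fields and converting them to physical units (assuming a loaded dataset ``ds`` that contains particles):

.. code-block:: python

    ad = ds.all_data()

    pos = ad["all", "particle_position"]  # (N, 3) array in code units
    vel = ad["all", "particle_velocity"]  # (N, 3) array in code units

    print(pos.to("kpc"))
    print(vel.to("km/s"))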
.. _deposited-particle-fields:
Deposited Particle Fields
-------------------------
In order to turn particle (discrete) fields into fields that are deposited in
some regular, space-filling way (even if that space is empty, it is defined
everywhere) yt provides mechanisms for depositing particles onto a mesh. These
are in the special field-type space ``'deposit'``, and are typically of the form
``('deposit', 'particletype_depositiontype')`` where ``depositiontype`` is the
mechanism by which the field is deposited, and ``particletype`` is the particle
type of the particles being deposited. If you are attempting to examine the
cloud-in-cell (``cic``) deposition of the ``all`` particle type, you would
access the field ``('deposit', 'all_cic')``.
yt defines a few particular types of deposition internally, and creating new
ones can be done by modifying the files ``yt/geometry/particle_deposit.pyx``
and ``yt/fields/particle_fields.py``, although that is an advanced topic
somewhat outside the scope of this section. The default deposition types
available are:
* ``count`` - this field counts the total number of particles of a given type
in a given mesh zone. Note that because, in general, the mesh for particle
datasets is defined by the number of particles in a region, this may not be
the most useful metric. This may be made more useful by depositing particle
data onto an :ref:`arbitrary-grid`.
* ``density`` - this field takes the total sum of ``particle_mass`` in a given
mesh field and divides by the volume.
* ``mass`` - this field takes the total sum of ``particle_mass`` in each mesh
zone.
* ``cic`` - this field performs cloud-in-cell interpolation (see `Section 2.2
<http://ta.twi.tudelft.nl/dv/users/lemmens/MThesis.TTH/chapter4.html>`_ for more
information) of the density of particles in a given mesh zone.
* ``smoothed`` - this is a special deposition type. See discussion below for
more information, in :ref:`sph-fields`.
You can also directly use the
:meth:`~yt.data_objects.static_output.Dataset.add_deposited_particle_field` function
defined on each dataset to deposit any particle field onto the mesh like so:
.. code-block:: python
import yt
ds = yt.load("output_00080/info_00080.txt")
fname = ds.add_deposited_particle_field(
("all", "particle_velocity_x"), method="nearest"
)
print(f"The velocity of the particles are (stored in {fname}")
print(ds.r[fname])
.. note::
In this example, we are using the returned field name as our input. You
*could* also access it directly, but it might take a slightly different form
than you expect -- in this particular case, the field name will be
``("deposit", "all_nn_velocity_x")``, which has removed the prefix
``particle_`` from the deposited name!
Possible deposition methods are:
* ``'simple_smooth'`` - perform an SPH-like deposition of the field onto the mesh
optionally accepting a ``kernel_name``.
* ``'sum'`` - sums the value of the particle field for all particles found in
each cell.
* ``'std'`` - computes the standard deviation of the value of the particle field
for all particles found in each cell.
* ``'cic'`` - performs cloud-in-cell interpolation (see `Section 2.2
<http://ta.twi.tudelft.nl/dv/users/lemmens/MThesis.TTH/chapter4.html>`_ for more
information) of the particle field on a given mesh zone.
* ``'weighted_mean'`` - computes the mean of the particle field, weighted by
the field passed into ``weight_field`` (by default, it uses the particle
mass).
* ``'count'`` - counts the number of particles in each cell.
* ``'nearest'`` - assign to each cell the value of the closest particle.
In addition, the :meth:`~yt.data_objects.static_output.Dataset.add_deposited_particle_field` function
returns the name of the newly created field.
Deposited particle fields can be useful for visualizing particle data, including
particles without defined smoothing lengths. See :ref:`particle-plotting-workarounds`
for more information.
.. _mesh-sampling-particle-fields:
Mesh Sampling Particle Fields
-----------------------------
In order to turn mesh fields into discrete particle fields, yt provides
a mechanism to sample mesh fields at particle locations. This operation is
the inverse operation of :ref:`deposited-particle-fields`: for each
particle the cell containing the particle is found and the value of
the field in the cell is assigned to the particle. This is for
example useful when using tracer particles to have access to the
Eulerian information for Lagrangian particles.
The particle fields are named ``('*ptype*', 'cell_*ftype*_*fname*')`` where
``ptype`` is the particle type at whose positions the mesh fields are sampled,
``ftype`` is the mesh field type (e.g. ``'gas'``) and ``fname`` is the
field (e.g. ``'temperature'``, ``'density'``, ...). You can directly use
the :meth:`~yt.data_objects.static_output.Dataset.add_mesh_sampling_particle_field`
function defined on each dataset to sample a mesh field at the particle positions like so:
.. code-block:: python
import yt
ds = yt.load("output_00080/info_00080.txt")
ds.add_mesh_sampling_particle_field(("gas", "temperature"), ptype="all")
print("The temperature at the location of the particles is")
print(ds.r["all", "cell_gas_temperature"])
For octree codes (e.g. RAMSES), you can trigger the build of an index so
that subsequent sampling operations will be much faster:
.. code-block:: python
import yt
ds = yt.load("output_00080/info_00080.txt")
ds.add_mesh_sampling_particle_field(("gas", "temperature"), ptype="all")
ad = ds.all_data()
ad[
"all", "cell_index"
] # Trigger the build of the index of the cell containing the particles
ad["all", "cell_gas_temperature"] # This is now much faster
.. _sph-fields:
SPH Fields
----------
See :ref:`yt4differences`.
In previous versions of yt, there were ways of computing the distance to the
N-th nearest neighbor of a particle, as well as computing the nearest particle
value on a mesh. Unfortunately, because of changes to the way that particles
are regarded in yt, these are not currently available. We hope that this will
be rectified in future versions and are tracking this in `Issue 3301
<https://github.com/yt-project/yt/issues/3301>`_. You can read a bit more
about the way yt now handles particles in the section :ref:`demeshening`.
**But!** It is possible to compute the smoothed values from SPH particles on
grids. For example, one can construct a covering grid that extends over the
entire domain of a simulation, with resolution 256x256x256, and compute the gas
density with this reasonably terse command:
.. code-block:: python
import yt
ds = yt.load("snapshot_033/snap_033.0.hdf5")
cg = ds.r[::256j, ::256j, ::256j]
smoothed_values = cg["gas", "density"]
This will work for any smoothed field; any field that is under the ``'gas'``
field type will be a smoothed field in an SPH-based simulation. Here we have
used the ``ds.r[]`` notation, as described in :ref:`quickly-selecting-data` for
creating what's called an "arbitrary grid"
(:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`). You
can, of course, also supply left and right edges to make the grid take up a
much smaller portion of the domain, by supplying the arguments as
detailed in :ref:`arbitrary-grid-selection` and giving the bounds as the
first and second elements in each element of the slice.
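For instance, a sketch of restricting the grid to a sub-volume (the bounds and resolution below are arbitrary illustrative choices, given in ``'unitary'`` units):

.. code-block:: python

    import yt

    ds = yt.load("snapshot_033/snap_033.0.hdf5")

    # a 128^3 arbitrary grid covering only the central half of the domain
    cg = ds.r[
        (0.25, "unitary"):(0.75, "unitary"):128j,
        (0.25, "unitary"):(0.75, "unitary"):128j,
        (0.25, "unitary"):(0.75, "unitary"):128j,
    ]
    smoothed_density = cg["gas", "density"]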
.. _analyzing:
General Data Analysis
=====================
This documentation describes much of the yt infrastructure for manipulating
one's data to extract the relevant information. Fields, data objects, and
units are at the heart of how yt represents data. Beyond this, we provide
a full description for how to filter your datasets based on specific criteria,
how to analyze chronological datasets from the same underlying simulation or
source (i.e. time series analysis), and how to run yt in parallel on
multiple processors to accomplish tasks faster.
.. toctree::
:maxdepth: 2
fields
../developing/creating_derived_fields
objects
units
filtering
generating_processed_data
saving_data
time_series_analysis
particle_trajectories
parallel_computation
astropy_integrations
Let us demonstrate this with an example using the same dataset as we used with the boolean masks.
```
import yt
ds = yt.load("Enzo_64/DD0042/data0042")
```
The only argument to a cut region is a conditional on field output from a data object. The only catch is that you *must* denote the data object in the conditional as "obj" regardless of the actual object's name.
Here we create three new data objects which are copies of the all_data object (a region object covering the entire spatial domain of the simulation), but we've filtered on just "hot" material, the "dense" material, and the "overpressure and fast" material.
```
ad = ds.all_data()
hot_ad = ad.cut_region(['obj["gas", "temperature"] > 1e6'])
dense_ad = ad.cut_region(['obj["gas", "density"] > 5e-30'])
# you can chain cut regions in two ways:
dense_and_cool_ad = dense_ad.cut_region(['obj["gas", "temperature"] < 1e5'])
overpressure_and_fast_ad = ad.cut_region(
[
'(obj["gas", "pressure"] > 1e-14) & (obj["gas", "velocity_magnitude"].in_units("km/s") > 1e2)'
]
)
```
You can also construct a `cut_region` using the `include_*` and `exclude_*` convenience methods.
```
ad = ds.all_data()
hot_ad = ad.include_above(("gas", "temperature"), 1e6)
dense_ad = ad.include_above(("gas", "density"), 5e-30)
# These can be chained as well
dense_and_cool_ad = dense_ad.include_below(("gas", "temperature"), 1e5)
overpressure_and_fast_ad = ad.include_above(("gas", "pressure"), 1e-14)
overpressure_and_fast_ad = overpressure_and_fast_ad.include_above(
("gas", "velocity_magnitude"), 1e2, "km/s"
)
```
Upon inspection of our "hot_ad" object, we can still get the same results as we got with the boolean masks example above:
```
print(
"Temperature of all cells:\n ad['temperature'] = \n%s\n" % ad["gas", "temperature"]
)
print(
"Temperatures of all \"hot\" cells:\n hot_ad['temperature'] = \n%s"
% hot_ad["gas", "temperature"]
)
print(
"Density of dense, cool material:\n dense_and_cool_ad['density'] = \n%s\n"
% dense_and_cool_ad["gas", "density"]
)
print(
"Temperature of dense, cool material:\n dense_and_cool_ad['temperature'] = \n%s"
% dense_and_cool_ad["gas", "temperature"]
)
```
Now that we've constructed a `cut_region`, we can use it as a data source for further analysis. To create a plot based on a `cut_region`, use the `data_source` keyword argument provided by yt's plotting objects.
Here's an example using projections:
```
proj1 = yt.ProjectionPlot(ds, "x", ("gas", "density"), weight_field=("gas", "density"))
proj1.annotate_title("No Cuts")
proj1.set_figure_size(5)
proj1.show()
proj2 = yt.ProjectionPlot(
ds, "x", ("gas", "density"), weight_field=("gas", "density"), data_source=hot_ad
)
proj2.annotate_title("Hot Gas")
proj2.set_zlim(("gas", "density"), 3e-31, 3e-27)
proj2.set_figure_size(5)
proj2.show()
```
The `data_source` keyword argument is also accepted by `SlicePlot`, `ProfilePlot` and `PhasePlot`:
```
slc1 = yt.SlicePlot(ds, "x", ("gas", "density"), center="m")
slc1.set_zlim(("gas", "density"), 3e-31, 3e-27)
slc1.annotate_title("No Cuts")
slc1.set_figure_size(5)
slc1.show()
slc2 = yt.SlicePlot(ds, "x", ("gas", "density"), center="m", data_source=dense_ad)
slc2.set_zlim(("gas", "density"), 3e-31, 3e-27)
slc2.annotate_title("Dense Gas")
slc2.set_figure_size(5)
slc2.show()
ph1 = yt.PhasePlot(
ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None
)
ph1.set_xlim(3e-31, 3e-27)
ph1.annotate_title("No Cuts")
ph1.set_figure_size(5)
ph1.show()
ph1 = yt.PhasePlot(
dense_ad,
("gas", "density"),
("gas", "temperature"),
("gas", "mass"),
weight_field=None,
)
ph1.set_xlim(3e-31, 3e-27)
ph1.annotate_title("Dense Gas")
ph1.set_figure_size(5)
ph1.show()
```
.. _units:
Symbolic Units
==============
This section describes yt's symbolic unit capabilities. This is provided as
quick introduction for those who are already familiar with yt but want to learn
more about the unit system. Please see :ref:`analyzing` and :ref:`visualizing`
for more detail about querying, analyzing, and visualizing data in yt.
Originally the unit system was a part of yt proper but since the yt 4.0 release,
the unit system has been split off into `its own library
<https://github.com/yt-project/unyt>`_, ``unyt``.
For a detailed discussion of how to use ``unyt``, we suggest taking a look at
the unyt documentation available at https://unyt.readthedocs.io/. However, yt
adds capabilities above and beyond what is provided by ``unyt``
alone; we describe those capabilities below.
Selecting data from a data object
---------------------------------
The data returned by yt will have units attached to it. For example, let's query
a data object for the ``('gas', 'density')`` field:
>>> import yt
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> dd = ds.all_data()
>>> dd['gas', 'density']
unyt_array([4.92775113e-31, 4.94005233e-31, 4.93824694e-31, ...,
1.12879234e-25, 1.59561490e-25, 1.09824903e-24], 'g/cm**3')
We can see how we get back a ``unyt_array`` instance. A ``unyt_array`` is a
subclass of NumPy's NDarray type that has units attached to it:
>>> dd['gas', 'density'].units
g/cm**3
It is straightforward to convert data to different units:
>>> dd['gas', 'density'].to('Msun/kpc**3')
unyt_array([7.28103608e+00, 7.29921182e+00, 7.29654424e+00, ...,
1.66785569e+06, 2.35761291e+06, 1.62272618e+07], 'Msun/kpc**3')
For more details about working with ``unyt_array``, see the `the documentation
<https://unyt.readthedocs.io>`__ for ``unyt``.
Applying Units to Data
----------------------
A ``unyt_array`` can be created from a list, tuple, or NumPy array using
multiplication with a ``Unit`` object. For convenience, each yt dataset has a
``units`` attribute one can use to obtain unit objects for this purpose:
>>> data = np.random.random((100, 100))
>>> data_with_units = data * ds.units.gram
All units known to the dataset will be available via ``ds.units``, including
code units and comoving units.
Derived Field Units
-------------------
Special care often needs to be taken to ensure the result of a derived field
will come out in the correct units. The yt unit system will double-check for you
to make sure you are not accidentally making a unit conversion mistake. To see
what that means in practice, let's define a derived field corresponding to the
square root of the gas density:
>>> import yt
>>> import numpy as np
>>> def root_density(field, data):
... return np.sqrt(data['gas', 'density'])
>>> ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
>>> ds.add_field(("gas", "root_density"), units="(g/cm**3)**(1/2)",
... function=root_density, sampling_type='cell')
>>> ad = ds.all_data()
>>> ad['gas', 'root_density']
unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ...,
3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)')
No special unit logic needs to happen inside of the function: the result of
``np.sqrt`` will have the correct units:
>>> np.sqrt(ad['gas', 'density'])
unyt_array([7.01979425e-16, 7.02855059e-16, 7.02726614e-16, ...,
3.35975050e-13, 3.99451486e-13, 1.04797377e-12], 'sqrt(g)/cm**(3/2)')
One could also specify any other units that have dimensions of square root of
density and yt would automatically convert the return value of the field
function to the specified units. An error would be raised if the units are not
dimensionally equivalent to the return value of the field function.
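For instance, a sketch of requesting the same derived field in a different but dimensionally equivalent set of units (the field name here is an illustrative choice; yt converts the function's return value to the requested units):

>>> ds.add_field(("gas", "root_density_solar"), units="(Msun/kpc**3)**(1/2)",
...              function=root_density, sampling_type='cell')
>>> rho_root = ad['gas', 'root_density_solar']  # returned in (Msun/kpc**3)**(1/2)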
Code Units
----------
All yt datasets are associated with a "code" unit system that corresponds to
whatever unit system the data is represented in on-disk. Let's take a look at
the data in an Enzo simulation, specifically the ``("enzo", "Density")`` field:
>>> import yt
>>> ds = yt.load('Enzo_64/DD0043/data0043')
>>> ad = ds.all_data()
>>> ad["enzo", "Density"]
unyt_array([6.74992726e-02, 6.12111635e-02, 8.92988636e-02, ...,
9.09875931e+01, 5.66932465e+01, 4.27780263e+01], 'code_mass/code_length**3')
we see we get back data from yt in units of ``code_mass/code_length**3``. This
is the density unit formed out of the base units of mass and length in the
internal unit system in the simulation. We can see the values of these units by
looking at the ``length_unit`` and ``mass_unit`` attributes of the dataset
object:
>>> ds.length_unit
unyt_quantity(128, 'Mpccm/h')
>>> ds.mass_unit
unyt_quantity(4.89045159e+50, 'g')
And we can see that both of these have values of 1 in the code unit system.
>>> ds.length_unit.to('code_length')
unyt_quantity(1., 'code_length')
>>> ds.mass_unit.to('code_mass')
unyt_quantity(1., 'code_mass')
In addition to ``length_unit`` and ``mass_unit``, there are also ``time_unit``,
``velocity_unit``, and ``magnetic_unit`` attributes for this dataset. Some
frontends also define a ``density_unit``, ``pressure_unit``,
``temperature_unit``, and ``specific_energy_unit`` attribute. If these are not defined,
then the corresponding unit is calculated from the base length, mass, and time units.
Each of these attributes corresponds to a unit in the code unit system:
>>> [un for un in dir(ds.units) if un.startswith('code')]
['code_density',
'code_length',
'code_magnetic',
'code_mass',
'code_metallicity',
'code_pressure',
'code_specific_energy',
'code_temperature',
'code_time',
'code_velocity']
You can use these unit names to convert arbitrary data into a dataset's code
unit system:
>>> u = ds.units
>>> data = 10**-30 * u.g / u.cm**3
>>> data.to('code_density')
unyt_quantity(0.36217187, 'code_density')
Note how in this example we used ``ds.units`` instead of the top-level ``unyt``
namespace or ``yt.units``. This is because the units from ``ds.units`` know
about the dataset's code unit system and can convert data into it. Unit objects
from ``unyt`` or ``yt.units`` will not know about any particular dataset's unit
system.
.. _cosmological-units:
Comoving units for Cosmological Simulations
-------------------------------------------
The length unit of the dataset used above is a cosmological unit:
>>> print(ds.length_unit)
128 Mpccm/h
In English, this says that the length unit is 128 megaparsecs in the comoving
frame, scaled as if the hubble constant were 100 km/s/Mpc. Although :math:`h`
isn't really a unit, yt treats it as one for the purposes of the unit system.
As an aside, `Darren Croton's research note <https://arxiv.org/abs/1308.4150>`_
on the history, use, and interpretation of :math:`h` as it appears in the
astronomical literature is pretty much required reading for anyone who has to
deal with factors of :math:`h` every now and then.
In yt, comoving length unit symbols are named following the pattern ``< length
unit >cm``, i.e. ``pccm`` for comoving parsec or ``mcm`` for a comoving
meter. A comoving length unit is different from the normal length unit by a
factor of :math:`(1+z)`:
>>> u = ds.units
>>> print((1*u.Mpccm)/(1*u.Mpc))
0.9986088499304777 dimensionless
>>> 1 / (1 + ds.current_redshift)
0.9986088499304776
As we saw before, h is treated like any other unit symbol. It has dimensionless
units, just like a scalar:
>>> (1*u.Mpc)/(1*u.Mpc/u.h)
unyt_quantity(0.71, '(dimensionless)')
>>> ds.hubble_constant
0.71
Using parsec as an example,
* ``pc``
Proper parsecs, :math:`\rm{pc}`.
* ``pccm``
Comoving parsecs, :math:`\rm{pc}/(1+z)`.
* ``pccm/h``
Comoving parsecs normalized by the scaled hubble constant, :math:`\rm{pc}/h/(1+z)`.
* ``pc/h``
Proper parsecs, normalized by the scaled hubble constant, :math:`\rm{pc}/h`.
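As a small sketch of how these symbols relate (no outputs are shown since the numerical values depend on the dataset's redshift and Hubble parameter):

>>> u = ds.units
>>> (1 * u.pccm).to('pc')      # equal to 1/(1+z) proper parsecs
>>> (1 * u.pc / u.h).to('pc')  # equal to 1/h proper parsecs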
Overriding Code Unit Definitions
--------------------------------
On occasion, you might have a dataset for a supported frontend that does not
have the conversions to code units accessible or you may want to change them
outright. ``yt`` provides a mechanism so that one may provide their own code
unit definitions to ``yt.load``, which override the default rules for a given
frontend for defining code units.
This is provided through the ``units_override`` argument to ``yt.load``. We'll
use an example of an Athena dataset. First, a call to ``yt.load`` without
``units_override``:
>>> ds = yt.load("MHDSloshing/virgo_low_res.0054.vtk")
>>> ds.length_unit
unyt_quantity(1., 'cm')
>>> ds.mass_unit
unyt_quantity(1., 'g')
>>> ds.time_unit
unyt_quantity(1., 's')
>>> sp1 = ds.sphere("c", (0.1, "unitary"))
>>> print(sp1["gas", "density"])
[0.05134981 0.05134912 0.05109047 ... 0.14608461 0.14489453 0.14385277] g/cm**3
This particular simulation is of a galaxy cluster merger so these density values
are way, way too high. This is happening because Athena does not encode any
information about the unit system used in the simulation or the output data, so
yt cannot infer that information and must make an educated guess. In this case
it incorrectly assumes the data are in CGS units.
However, we know *a priori* what the unit system *should* be, and we can supply
a ``units_override`` dictionary to ``yt.load`` to override the incorrect
assumptions yt is making about this dataset. Let's define:
>>> units_override = {"length_unit": (1.0, "Mpc"),
... "time_unit": (1.0, "Myr"),
... "mass_unit": (1.0e14, "Msun")}
The ``units_override`` dictionary can take the following keys:
* ``length_unit``
* ``time_unit``
* ``mass_unit``
* ``magnetic_unit``
* ``temperature_unit``
and the associated values can be ``(value, "unit")`` tuples, ``unyt_quantity``
instances, or floats (in the latter case they are assumed to have the
corresponding cgs unit). Now let's reload the dataset using our
``units_override`` dict:
>>> ds = yt.load("MHDSloshing/virgo_low_res.0054.vtk",
... units_override=units_override)
>>> sp = ds.sphere("c",(0.1,"unitary"))
>>> print(sp["gas", "density"])
[3.47531683e-28 3.47527018e-28 3.45776515e-28 ... 9.88689766e-28
9.80635384e-28 9.73584863e-28] g/cm**3
and we see how the data now have much more sensible values for a galaxy cluster
merger simulation.
Comparing Units From Different Simulations
------------------------------------------
The code units from different simulations will have different conversions to
physical coordinates. This can get confusing when working with data from more
than one simulation or from a single simulation where the units change with
time.
As an example, let's load up two enzo datasets from different redshifts in the
same cosmology simulation, one from high redshift:
>>> ds1 = yt.load('Enzo_64/DD0002/data0002')
>>> ds1.current_redshift
7.8843748886903
>>> ds1.length_unit
unyt_quantity(128, 'Mpccm/h')
>>> ds1.length_unit.in_cgs()
unyt_quantity(6.26145538e+25, 'cm')
And another from low redshift:
>>> ds2 = yt.load('Enzo_64/DD0043/data0043')
>>> ds2.current_redshift
0.0013930880640796
>>> ds2.length_unit
unyt_quantity(128, 'Mpccm/h')
>>> ds2.length_unit.in_cgs()
unyt_quantity(5.55517285e+26, 'cm')
Now despite the fact that ``'Mpccm/h'`` means different things for the two
datasets, it's still a well-defined operation to take the ratio of the two
length units:
>>> ds2.length_unit / ds1.length_unit
unyt_quantity(8.87201539, '(dimensionless)')
Because code units and comoving units are defined relative to a physical unit
system, ``unyt`` is able to give the correct answer here. So long as the result
comes out dimensionless or in a physical unit then the answer will be
well-defined. However, if we want the answer to come out in the internal units
of one particular dataset, additional care must be taken. For an example where
this might be an issue, let's try to compute the sum of two comoving distances
from each simulation:
>>> d1 = 12 * ds1.units.Mpccm
>>> d2 = 12 * ds2.units.Mpccm
>>> d1 + d2
unyt_quantity(118.46418468, 'Mpccm')
>>> d2 + d1
unyt_quantity(13.35256754, 'Mpccm')
So this is definitely weird - addition appears to no longer be commutative!
However, both answers are correct, the confusion is arising because ``"Mpccm"``
is ambiguous in these expressions. In situations like this, ``unyt`` will use
the definition for units from the leftmost term in an expression, so the first
example is returning data in high-redshift comoving megaparsecs, while the
second example returns data in low-redshift comoving megaparsecs.
Wherever possible it's best to do calculations in physical units when working
with more than one dataset. If you need to use comoving units or code units then
extra care must be taken in your code to avoid ambiguity.
Let us go through a full worked example. Here we have a Tipsy SPH dataset. By general
inspection, we see that there are stars present in the dataset, since
there are fields with field type: `Stars` in the `ds.field_list`. Let's look
at the `derived_field_list` for all of the `Stars` fields.
```
import numpy as np
import yt
ds = yt.load("TipsyGalaxy/galaxy.00300")
for field in ds.derived_field_list:
if field[0] == "Stars":
print(field)
```
We will filter these into young stars and old stars by masking on the ('Stars', 'creation_time') field.
In order to do this, we first make a function which applies our desired cut. This function must accept two arguments: `pfilter` and `data`. The first argument is a `ParticleFilter` object that contains metadata about the filter itself. The second argument is a yt data container.
Let's call "young" stars only those stars with ages less 5 million years. Since Tipsy assigns a very large `creation_time` for stars in the initial conditions, we need to also exclude stars with negative ages.
Conversely, let's define "old" stars as those stars formed dynamically in the simulation with ages greater than 5 Myr. We also include stars with negative ages, since these stars were included in the simulation initial conditions.
We make use of `pfilter.filtered_type` so that the filter definition will use the same particle type as the one specified in the call to `add_particle_filter` below. This makes the filter definition usable for arbitrary particle types. Since we're only filtering the `"Stars"` particle type in this example, we could have also replaced `pfilter.filtered_type` with `"Stars"` and gotten the same result.
```
def young_stars(pfilter, data):
age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
filter = np.logical_and(age.in_units("Myr") <= 5, age >= 0)
return filter
def old_stars(pfilter, data):
age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
filter = np.logical_or(age.in_units("Myr") >= 5, age < 0)
return filter
```
Now we define these as particle filters within the yt universe with the
`add_particle_filter()` function.
```
yt.add_particle_filter(
"young_stars",
function=young_stars,
filtered_type="Stars",
requires=["creation_time"],
)
yt.add_particle_filter(
"old_stars", function=old_stars, filtered_type="Stars", requires=["creation_time"]
)
```
Let us now apply these filters specifically to our dataset.
Let's double check that it worked by looking at the derived_field_list for any new fields created by our filter.
```
ds.add_particle_filter("young_stars")
ds.add_particle_filter("old_stars")
for field in ds.derived_field_list:
if "young_stars" in field or "young_stars" in field[1]:
print(field)
```
We see all of the new `young_stars` fields as well as the 4 deposit fields. These deposit fields are `mesh` fields generated by depositing particle fields on the grid. Let's generate a couple of projections of where the young and old stars reside in this simulation by accessing some of these new fields.
```
p = yt.ProjectionPlot(
ds,
"z",
[("deposit", "young_stars_cic"), ("deposit", "old_stars_cic")],
width=(40, "kpc"),
center="m",
)
p.set_figure_size(5)
p.show()
```
We see that young stars are concentrated in regions of active star formation, while old stars are more spatially extended.
One can create particle trajectories from a `DatasetSeries` object for a specified list of particles identified by their unique indices using the `particle_trajectories` method.
```
%matplotlib inline
import glob
from os.path import join
import yt
from yt.config import ytcfg
path = ytcfg.get("yt", "test_data_dir")
import matplotlib.pyplot as plt
```
First, let's start off with a FLASH dataset containing only two particles in a mutual circular orbit. We can get the list of filenames this way:
```
my_fns = glob.glob(join(path, "Orbit", "orbit_hdf5_chk_00[0-9][0-9]"))
my_fns.sort()
```
And let's define a list of fields that we want to include in the trajectories. The position fields will be included by default, so let's just ask for the velocity fields:
```
fields = ["particle_velocity_x", "particle_velocity_y", "particle_velocity_z"]
```
There are only two particles, but for consistency's sake let's grab their indices from the dataset itself:
```
ds = yt.load(my_fns[0])
dd = ds.all_data()
indices = dd["all", "particle_index"].astype("int")
print(indices)
```
which is what we expected them to be. Now we're ready to create a `DatasetSeries` object and use it to create particle trajectories:
```
ts = yt.DatasetSeries(my_fns)
# suppress_logging=True cuts down on a lot of noise
trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)
```
The `ParticleTrajectories` object `trajs` is essentially a dictionary-like container for the particle fields along the trajectory, and can be accessed as such:
```
print(trajs["all", "particle_position_x"])
print(trajs["all", "particle_position_x"].shape)
```
Note that each field is a 2D NumPy array with the different particle indices along the first dimension and the times along the second dimension. As such, we can access them individually by indexing the field:
```
plt.figure(figsize=(6, 6))
plt.plot(trajs["all", "particle_position_x"][0], trajs["all", "particle_position_y"][0])
plt.plot(trajs["all", "particle_position_x"][1], trajs["all", "particle_position_y"][1])
```
And we can plot the velocity fields as well:
```
plt.figure(figsize=(6, 6))
plt.plot(trajs["all", "particle_velocity_x"][0], trajs["all", "particle_velocity_y"][0])
plt.plot(trajs["all", "particle_velocity_x"][1], trajs["all", "particle_velocity_y"][1])
```
If we want to access the time along the trajectory, we use the key `"particle_time"`:
```
plt.figure(figsize=(6, 6))
plt.plot(trajs["particle_time"], trajs["particle_velocity_x"][1])
plt.plot(trajs["particle_time"], trajs["particle_velocity_y"][1])
```
Alternatively, if we know the particle index we'd like to examine, we can get an individual trajectory corresponding to that index:
```
particle1 = trajs.trajectory_from_index(1)
plt.figure(figsize=(6, 6))
plt.plot(particle1["all", "particle_time"], particle1["all", "particle_position_x"])
plt.plot(particle1["all", "particle_time"], particle1["all", "particle_position_y"])
```
Now let's look at a more complicated (and fun!) example. We'll use an Enzo cosmology dataset. First, we'll find the maximum density in the domain, and obtain the indices of the particles within some radius of the center. First, let's have a look at what we're getting:
```
ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
slc = yt.SlicePlot(
ds,
"x",
[("gas", "density"), ("gas", "dark_matter_density")],
center="max",
width=(3.0, "Mpc"),
)
slc.show()
```
So far, so good--it looks like we've centered on a galaxy cluster. Let's grab all of the dark matter particles within a sphere of 0.5 Mpc (identified by `"particle_type == 1"`):
```
sp = ds.sphere("max", (0.5, "Mpc"))
indices = sp["all", "particle_index"][sp["all", "particle_type"] == 1]
```
Next we'll get the list of datasets we want, and create trajectories for these particles:
```
my_fns = glob.glob(join(path, "enzo_tiny_cosmology/DD*/*.hierarchy"))
my_fns.sort()
ts = yt.DatasetSeries(my_fns)
trajs = ts.particle_trajectories(indices, fields=fields, suppress_logging=True)
```
Matplotlib can make 3D plots, so let's pick three particle trajectories at random and look at them in the volume:
```
fig = plt.figure(figsize=(8.0, 8.0))
ax = fig.add_subplot(111, projection="3d")
ax.plot(
trajs["all", "particle_position_x"][100],
trajs["all", "particle_position_y"][100],
trajs["all", "particle_position_z"][100],
)
ax.plot(
trajs["all", "particle_position_x"][8],
trajs["all", "particle_position_y"][8],
trajs["all", "particle_position_z"][8],
)
ax.plot(
trajs["all", "particle_position_x"][25],
trajs["all", "particle_position_y"][25],
trajs["all", "particle_position_z"][25],
)
```
It looks like these three different particles fell into the cluster along different filaments. We can also look at their x-positions only as a function of time:
```
plt.figure(figsize=(6, 6))
plt.plot(trajs["all", "particle_time"], trajs["all", "particle_position_x"][100])
plt.plot(trajs["all", "particle_time"], trajs["all", "particle_position_x"][8])
plt.plot(trajs["all", "particle_time"], trajs["all", "particle_position_x"][25])
```
Suppose we wanted to know the gas density along the particle trajectory, but there wasn't a particle field corresponding to that in our dataset. Never fear! If the field exists as a grid field, yt will interpolate this field to the particle positions and add the interpolated field to the trajectory. To add such a field (or any field, including additional particle fields) we can call the `add_fields` method:
```
trajs.add_fields([("gas", "density")])
```
We also could have included `"density"` in our original field list. Now, plot up the gas density for each particle as a function of time:
```
plt.figure(figsize=(6, 6))
plt.plot(trajs["all", "particle_time"], trajs["gas", "density"][100])
plt.plot(trajs["all", "particle_time"], trajs["gas", "density"][8])
plt.plot(trajs["all", "particle_time"], trajs["gas", "density"][25])
plt.yscale("log")
```
Finally, the particle trajectories can be written to disk. Two options are provided: ASCII text files with a column for each field and the time, and HDF5 files:
```
trajs.write_out(
"halo_trajectories"
) # This will write a separate file for each trajectory
trajs.write_out_h5(
"halo_trajectories.h5"
) # This will write all trajectories to a single file
```
.. _generating-processed-data:
Generating Processed Data
=========================
Although yt provides a number of built-in visualization methods that can
process data and construct plots from it, it is often useful to generate the
data by hand and construct plots which can then be combined with other plots,
modified in some way, or even (gasp) created and modified in some other tool or
program.
.. _exporting-container-data:
Exporting Container Data
------------------------
Fields from data containers such as regions, spheres, cylinders, etc. can be exported
in tabular format using either a :class:`~pandas.DataFrame` or an :class:`~astropy.table.QTable`.
To export to a :class:`~pandas.DataFrame`, use
:meth:`~yt.data_objects.data_containers.YTDataContainer.to_dataframe`:
.. code-block:: python
sp = ds.sphere("c", (0.2, "unitary"))
df2 = sp.to_dataframe([("gas", "density"), ("gas", "temperature")])
To export to a :class:`~astropy.table.QTable`, use
:meth:`~yt.data_objects.data_containers.YTDataContainer.to_astropy_table`:
.. code-block:: python
sp = ds.sphere("c", (0.2, "unitary"))
at2 = sp.to_astropy_table(fields=[("gas", "density"), ("gas", "temperature")])
For exports to :class:`~pandas.DataFrame` objects, the unit information is lost, but for
exports to :class:`~astropy.table.QTable` objects, the :class:`~yt.units.yt_array.YTArray`
objects are converted to :class:`~astropy.units.Quantity` objects.
.. _generating-2d-image-arrays:
2D Image Arrays
---------------
When making a slice, a projection or an oblique slice in yt, the resultant
:class:`~yt.data_objects.data_containers.YTSelectionContainer2D` object is created and
contains flattened arrays of the finest available data. This means a set of
arrays for the x, y, (possibly z), dx, dy, (possibly dz) and data values, for
every point that constitutes the object.
This presents something of a challenge for visualization, as it will require
the transformation of a variable mesh of points consisting of positions and
sizes into a fixed-size array that appears like an image. This process is that
of pixelization, which yt handles transparently internally. You can access
this functionality by constructing a
:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` and supplying
to it your :class:`~yt.data_objects.data_containers.YTSelectionContainer2D`
object, as well as some information about how you want the final image to look.
You can specify both the bounds of the image (in the appropriate x-y plane) and
the resolution of the output image. You can then have yt pixelize any field
you like.
.. note:: In previous versions of yt, there was a special class of
FixedResolutionBuffer for off-axis slices. This is no longer
necessary.
To create :class:`~yt.data_objects.data_containers.YTSelectionContainer2D` objects, you can
access them as described in :ref:`data-objects`, specifically the section
:ref:`available-objects`. Here is an example of how to window into a slice
at a resolution of (512, 512) with bounds of (0.3, 0.5) in x and (0.6, 0.8) in y. The next
step is to generate the actual 2D image array, which is accomplished by
accessing the desired field.
.. code-block:: python
from yt.visualization.fixed_resolution import FixedResolutionBuffer

sl = ds.slice(0, 0.5)
frb = FixedResolutionBuffer(sl, (0.3, 0.5, 0.6, 0.8), (512, 512))
my_image = frb["gas", "density"]
This image may then be used in a hand-constructed Matplotlib image, for instance using
:func:`~matplotlib.pyplot.imshow`.
The buffer arrays can be saved out to disk in either HDF5 or FITS format:
.. code-block:: python
frb.save_as_dataset("my_images.h5", fields=[("gas", "density"), ("gas", "temperature")])
frb.export_fits(
"my_images.fits",
fields=[("gas", "density"), ("gas", "temperature")],
clobber=True,
units="kpc",
)
In the HDF5 case, the created file can be reloaded just like a regular dataset with
``yt.load`` and will, itself, be a first-class dataset. For more information on
this, see :ref:`saving-grid-data-containers`.
In the FITS case, there is an option for setting the ``units`` of the coordinate system in
the file. If you want to overwrite a file with the same name, set ``clobber=True``.
The :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` can even be exported
as a 2D dataset itself, which may be operated on in the same way as any other dataset in yt:
.. code-block:: python
ds_frb = frb.export_dataset(
fields=[("gas", "density"), ("gas", "temperature")], nprocs=8
)
sp = ds_frb.sphere("c", (100.0, "kpc"))
where the ``nprocs`` parameter can be used to decompose the image into ``nprocs`` number of grids.
.. _generating-profiles-and-histograms:
Profiles and Histograms
-----------------------
Profiles and histograms can also be generated using the
:class:`~yt.visualization.profile_plotter.ProfilePlot` and
:class:`~yt.visualization.profile_plotter.PhasePlot` functions
(described in :ref:`how-to-make-1d-profiles` and
:ref:`how-to-make-2d-profiles`). These generate profiles transparently, but the
objects they handle and create can be handled manually, as well, for more
control and access. The :func:`~yt.data_objects.profiles.create_profile` function
can be used to generate 1, 2, and 3D profiles.
Profile objects can be created from any data object (see :ref:`data-objects`,
specifically the section :ref:`available-objects` for more information) and are
best thought of as distribution calculations. They can either sum up or average
one quantity with respect to one or more other quantities, and they do this over
all the data contained in their source object. When calculating average values,
the standard deviation will also be calculated.
To generate a profile, one need only specify the binning fields and the field
to be profiled. The binning fields are given together in a list. The
:func:`~yt.data_objects.profiles.create_profile` function will guess the
dimensionality of the profile based on the number of fields given. For example,
a one-dimensional profile of the mass-weighted average temperature as a function of
density within a sphere can be created in the following way:
.. code-block:: python
import yt
ds = yt.load("galaxy0030/galaxy0030")
source = ds.sphere("c", (10, "kpc"))
profile = source.profile(
[("gas", "density")], # the bin field
[
("gas", "temperature"), # profile field
("gas", "radial_velocity"),
], # profile field
weight_field=("gas", "mass"),
)
The binning, weight, and profile data can now be accessed as:
.. code-block:: python
print(profile.x) # bin field
print(profile.weight) # weight field
print(profile["gas", "temperature"]) # profile field
print(profile["gas", "radial_velocity"]) # profile field
The ``profile.used`` attribute gives a boolean array of the bins which actually
have data.
.. code-block:: python
print(profile.used)
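For example, the boolean array can be used to index the bin and profile arrays so that only the populated bins are inspected:
.. code-block:: python
print(profile.x[profile.used])
print(profile["gas", "temperature"][profile.used])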
If a weight field was given, the profile data will represent the weighted mean
of a field. In this case, the weighted standard deviation will be calculated
automatically and can be accessed via the ``profile.standard_deviation``
attribute.
.. code-block:: python
print(profile.standard_deviation["gas", "temperature"])
A two-dimensional profile of the total gas mass in bins of density and
temperature can be created as follows:
.. code-block:: python
profile2d = source.profile(
[
("gas", "density"),  # the x bin field
("gas", "temperature"),  # the y bin field
],
[("gas", "mass")], # the profile field
weight_field=None,
)
Accessing the x, y, and profile fields works just as with one-dimensional profiles:
.. code-block:: python
print(profile2d.x)
print(profile2d.y)
print(profile2d["gas", "mass"])
One of the more interesting things that is enabled with this approach is
the generation of 1D profiles that correspond to 2D profiles. For instance, one
can make a phase plot that shows the distribution of mass in the
density-temperature plane, with the average temperature overplotted. The
:func:`~matplotlib.pyplot.pcolormesh` function can be used to manually plot
the 2D profile. If you want to generate a default profile plot, you can simply
call::
profile.plot()
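Below is a minimal hand-rolled sketch of that idea, using the ``profile`` and ``profile2d`` objects created above; the ``x_bins`` and ``y_bins`` attributes hold the bin edges, and the plotting choices are illustrative:
.. code-block:: python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LogNorm
fig, ax = plt.subplots()
mass = np.array(profile2d["gas", "mass"])
# pcolormesh expects the array transposed relative to the (x, y) bin ordering
pcm = ax.pcolormesh(
    np.array(profile2d.x_bins), np.array(profile2d.y_bins), mass.T, norm=LogNorm()
)
# overplot the 1D mass-weighted average temperature as a function of density
ax.plot(np.array(profile.x), np.array(profile["gas", "temperature"]), color="w")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("density")
ax.set_ylabel("temperature")
fig.colorbar(pcm, ax=ax, label="mass")
fig.savefig("phase_with_average.png")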
Three-dimensional profiles can be generated and accessed following
the same procedures. Additional keyword arguments are available to control
the following for each of the bin fields: the number of bins, min and max, units,
whether to use a log or linear scale, and whether or not to do accumulation to
create a cumulative distribution function. For more information, see the API
documentation on the :func:`~yt.data_objects.profiles.create_profile` function.
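As an illustrative sketch of combining several of those keywords (the bin count, limits, and units below are made up for the example):
.. code-block:: python
prof = yt.create_profile(
    source,
    [("gas", "density")],
    [("gas", "temperature")],
    weight_field=("gas", "mass"),
    n_bins=64,
    units={("gas", "density"): "g/cm**3"},
    logs={("gas", "density"): True},
    extrema={("gas", "density"): (1e-30, 1e-24)},
    accumulation=False,
)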
For custom bins the other keyword arguments can be overridden using the
``override_bins`` keyword argument. This accepts a dictionary with an array
for each bin field or ``None`` to use the default settings.
.. code-block:: python
import numpy as np
custom_bins = np.array([1e-27, 1e-25, 2e-25, 5e-25, 1e-23])
profile2d = source.profile(
[("gas", "density"), ("gas", "temperature")],
[("gas", "mass")],
override_bins={("gas", "density"): custom_bins, ("gas", "temperature"): None},
)
.. _profile-dataframe-export:
Exporting Profiles to DataFrame
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
One-dimensional profile data can be exported to a :class:`~pandas.DataFrame` object
using the :meth:`yt.data_objects.profiles.Profile1D.to_dataframe` method. Bins which
do not have data will have their fields filled with ``NaN``, except for the bin field
itself. If you only want to export the bins which are used, set ``only_used=True``,
and if you want to export the standard deviation of the profile as well, set
``include_std=True``:
.. code-block:: python
# Adds all of the data to the DataFrame, but non-used bins are filled with NaNs
df = profile.to_dataframe()
# Only adds the used bins to the DataFrame
df_used = profile.to_dataframe(only_used=True)
# Only adds the density and temperature fields
df2 = profile.to_dataframe(fields=[("gas", "density"), ("gas", "temperature")])
# Include standard deviation
df3 = profile.to_dataframe(include_std=True)
The :class:`~pandas.DataFrame` can then be analyzed and/or written to disk using pandas
methods. Note that unit information is lost in this export.
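For instance, the used-bins DataFrame from above could be written to a CSV file (the filename is arbitrary):
.. code-block:: python
df_used.to_csv("density_profile.csv", index=False)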
.. _profile-astropy-export:
Exporting Profiles to QTable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
One-dimensional profile data also can be exported to an AstroPy :class:`~astropy.table.QTable`
object. This table can then be written to disk in a number of formats, such as ASCII text
or FITS files, and manipulated in a number of ways. Bins which do not have data
will have their mask values set to ``False``. If you only want to export the bins
which are used, set ``only_used=True``. If you want to include the standard deviation
of the field in the export, set ``include_std=True``. Units are preserved in the table
by converting each :class:`~yt.units.yt_array.YTArray` to an :class:`~astropy.units.Quantity`.
To export the 1D profile to a Table object, simply call
:meth:`yt.data_objects.profiles.Profile1D.to_astropy_table`:
.. code-block:: python
# Adds all of the data to the Table, but non-used bins are masked
t = profile.to_astropy_table()
# Only adds the used bins to the Table
t_used = profile.to_astropy_table(only_used=True)
# Only adds the density and temperature fields
t2 = profile.to_astropy_table(fields=[("gas", "density"), ("gas", "temperature")])
# Export the standard deviation
t3 = profile.to_astropy_table(include_std=True)
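The table can then be written to disk with AstroPy's I/O machinery (the filenames and formats below are just examples):
.. code-block:: python
t_used.write("density_profile.ecsv", format="ascii.ecsv", overwrite=True)
t_used.write("density_profile.fits", overwrite=True)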
.. _generating-line-queries:
Line Queries and Planar Integrals
---------------------------------
To calculate the values along a line connecting two points in a simulation, you
can use the object :class:`~yt.data_objects.selection_data_containers.YTRay`,
accessible as the ``ray`` property on an index. (See :ref:`data-objects`
for more information on this.) To do so, you can supply two points and access
fields within the returned object. For instance, this code will generate a ray
between the points (0.3, 0.5, 0.9) and (0.1, 0.8, 0.5) and examine the density
along that ray:
.. code-block:: python
ray = ds.ray((0.3, 0.5, 0.9), (0.1, 0.8, 0.5))
print(ray["gas", "density"])
The points are not ordered, so you may need to sort the data (see the
example in the
:class:`~yt.data_objects.selection_data_containers.YTRay` docs). Also
note, the ray is traversing cells of varying length, as well as
taking a varying distance to cross each cell. To determine the
distance traveled by the ray within each cell (for instance, for
integration) the field ``dts`` is available; this field will sum to
1.0, as the ray's path will be normalized to 1.0, independent of how
far it travels through the domain. To determine the value of ``t`` at
which the ray enters each cell, the field ``t`` is available. For
instance:
.. code-block:: python
print(ray["dts"].sum())
print(ray["t"])
These can be used as inputs to, for instance, the Matplotlib function
:func:`~matplotlib.pyplot.plot`, or they can be saved to disk.
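As a quick sketch, the ray can be sorted by ``t`` and the density plotted along it (the plotting choices are illustrative):
.. code-block:: python
import matplotlib.pyplot as plt
import numpy as np
order = np.argsort(ray["t"])
plt.plot(np.array(ray["t"][order]), np.array(ray["gas", "density"][order]))
plt.yscale("log")
plt.xlabel("t (fraction of ray length)")
plt.ylabel("density")
plt.savefig("ray_density.png")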
The volume rendering functionality in yt can also be used to calculate
off-axis plane integrals, using the
:class:`~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction`
in a manner similar to that described in :ref:`volume_rendering`.
.. _generating-xarray:
Regular Grids to xarray
-----------------------
Objects that subclass from
:class:`~yt.data_objects.construction_data_containers.YTCoveringGrid` are able
to export to `xarray <https://xarray.pydata.org/>`_. This enables
interoperability with anything that can take xarray data. The classes that can do this are
:class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`,
:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`, and
:class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`. For example, you can:
.. code-block:: python
grid = ds.r[::256j, ::256j, ::256j]
obj = grid.to_xarray(fields=[("gas", "density"), ("gas", "temperature")])
The returned object, ``obj``, will now have the correct labelled axes and so forth.
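A brief sketch of inspecting the exported object with xarray's own interface (the exact variable names depend on how yt labels the fields):
.. code-block:: python
print(obj.data_vars)
print(obj.coords)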
.. _parallel-computation:
Parallel Computation With yt
============================
yt has been instrumented with the ability to compute many -- most, even --
quantities in parallel. This utilizes the package
`mpi4py <https://bitbucket.org/mpi4py/mpi4py>`_ to parallelize using the Message
Passing Interface, typically installed on clusters.
.. _capabilities:
Capabilities
------------
Currently, yt is able to perform the following actions in parallel:
* Projections (:ref:`projection-plots`)
* Slices (:ref:`slice-plots`)
* Cutting planes (oblique slices) (:ref:`off-axis-slices`)
* Covering grids (:ref:`examining-grid-data-in-a-fixed-resolution-array`)
* Derived Quantities (total mass, angular momentum, etc)
* 1-, 2-, and 3-D profiles (:ref:`generating-profiles-and-histograms`)
* Halo analysis (:ref:`halo-analysis`)
* Volume rendering (:ref:`volume_rendering`)
* Isocontours & flux calculations (:ref:`extracting-isocontour-information`)
This list covers just about every action yt can take! Additionally, almost all
scripts will benefit from parallelization with minimal modification. The goal
of Parallel-yt has been to retain API compatibility and abstract all
parallelism.
Setting Up Parallel yt
--------------------------
To run scripts in parallel, you must first install `mpi4py
<https://bitbucket.org/mpi4py/mpi4py>`_ as well as an MPI library, if one is not
already available on your system. Instructions for doing so are provided on the
mpi4py website, but you may have luck by just running:
.. code-block:: bash
$ python -m pip install mpi4py
If you have an Anaconda installation of yt and there is no MPI library on the
system you are using, try:
.. code-block:: bash
$ conda install mpi4py
This will install `MPICH2 <https://www.mpich.org/>`_, which may interfere with
other MPI libraries that are already installed. For that reason, the ``pip``
installation method is generally preferable.
Once mpi4py has been installed, you're all done! You just need to launch your
scripts with ``mpirun`` (or equivalent) and signal to yt that you want to
run them in parallel by invoking the ``yt.enable_parallelism()`` function in
your script. In general, that's all it takes to get a speed benefit on a
multi-core machine. Here is an example on an 8-core desktop:
.. code-block:: bash
$ mpirun -np 8 python script.py
Throughout its normal operation, yt keeps you aware of what is happening with
regular messages to stderr, usually prefaced with:
.. code-block:: bash
yt : [INFO ] YYYY-MM-DD HH:MM:SS
However, when operating in parallel mode, yt outputs information from each
of your processors to this log mode, as in:
.. code-block:: bash
P000 yt : [INFO ] YYYY-MM-DD HH:MM:SS
P001 yt : [INFO ] YYYY-MM-DD HH:MM:SS
in the case of two cores being used.
It's important to note that all of the processes listed in :ref:`capabilities`
work in parallel -- and no additional work is necessary to parallelize those
processes.
Running a yt Script in Parallel
-------------------------------
Many basic yt operations will run in parallel if yt's parallelism is enabled at
startup. For example, the following script finds the maximum density location
in the simulation and then makes a plot of the projected density:
.. code-block:: python
import yt
yt.enable_parallelism()
ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max(("gas", "density"))
print(v, c)
p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
p.save()
If this script is run in parallel, two of the most expensive operations --
finding the maximum density and computing the projection -- will be calculated in
parallel. If we save the script as ``my_script.py``, we would run it on 16 MPI
processes using the following Bash command:
.. code-block:: bash
$ mpirun -np 16 python my_script.py
.. note::
If you run into problems, you can use :ref:`remote-debugging` to examine
what went wrong.
How do I run my yt job on a subset of available processes
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set the ``communicator`` keyword in the
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism`
call to a specific MPI communicator to specify a subset of available MPI
processes. If none is specified, it defaults to ``COMM_WORLD``.
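A hedged sketch of what that might look like, splitting ``COMM_WORLD`` with mpi4py and handing one half to yt (the even/odd split criterion is arbitrary):
.. code-block:: python
import yt
from mpi4py import MPI
world = MPI.COMM_WORLD
# put even-ranked processes in one communicator, odd-ranked in another
color = world.rank % 2
sub_comm = world.Split(color=color, key=world.rank)
if color == 0:
    yt.enable_parallelism(communicator=sub_comm)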
Creating Parallel and Serial Sections in a Script
+++++++++++++++++++++++++++++++++++++++++++++++++
Many yt operations will automatically run in parallel (see the next section for
a full enumeration), however some operations, particularly ones that print
output or save data to the filesystem, will be run by all processors in a
parallel script. For example, in the script above the lines ``print(v, c)`` and
``p.save()`` will be run on all 16 processors. This means that your terminal
output will contain 16 repetitions of the output of the print statement and the
plot will be saved to disk 16 times (overwritten each time).
yt provides two convenience functions that make it easier to run most of a
script in parallel but run some subset of the script on only one processor. The
first, :func:`~yt.funcs.is_root`, returns ``True`` if run on the 'root'
processor (the processor with MPI rank 0) and ``False`` otherwise. One could
rewrite the above script to take advantage of :func:`~yt.funcs.is_root` like
so:
.. code-block:: python
import yt
yt.enable_parallelism()
ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max(("gas", "density"))
p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
if yt.is_root():
print(v, c)
p.save()
The second function, :func:`~yt.funcs.only_on_root` accepts the name of a
function as well as a set of parameters and keyword arguments to pass to the
function. This is useful when the serial component of your parallel script
would clutter the script or if you like writing your scripts as a series of
isolated function calls. I can rewrite the example from the beginning of this
section once more using :func:`~yt.funcs.only_on_root` to give you the flavor of
how to use it:
.. code-block:: python
import yt
yt.enable_parallelism()
def print_and_save_plot(v, c, plot, verbose=True):
if verbose:
print(v, c)
plot.save()
ds = yt.load("RD0035/RedshiftOutput0035")
v, c = ds.find_max(("gas", "density"))
p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
yt.only_on_root(print_and_save_plot, v, c, p, verbose=True)
Types of Parallelism
--------------------
In order to divide up the work, yt will attempt to send different tasks to
different processors. However, to minimize inter-process communication, yt
will decompose the information in different ways based on the task.
Spatial Decomposition
+++++++++++++++++++++
During this process, the index will be decomposed along either all three
axes or along an image plane, if the process is that of projection. This type
of parallelism is generally less efficient than grid-based parallelism, but it
has been shown to obtain good results overall.
The following operations use spatial decomposition:
* :ref:`halo-analysis`
* :ref:`volume_rendering`
Grid Decomposition
++++++++++++++++++
The alternative to spatial decomposition is a simple round-robin of data chunks,
which could be grids, octs, or whatever chunking mechanism is used by the code
frontend. This process allows yt to pool data access to a given
data file, which ultimately results in faster read times and better parallelism.
The following operations use chunk decomposition:
* Projections (see :ref:`available-objects`)
* Slices (see :ref:`available-objects`)
* Cutting planes (see :ref:`available-objects`)
* Covering grids (see :ref:`construction-objects`)
* Derived Quantities (see :ref:`derived-quantities`)
* 1-, 2-, and 3-D profiles (see :ref:`generating-profiles-and-histograms`)
* Isocontours & flux calculations (see :ref:`surfaces`)
Parallelization over Multiple Objects and Datasets
++++++++++++++++++++++++++++++++++++++++++++++++++
If you have a set of computational steps that need to apply identically and
independently to several different objects or datasets, a so-called
`embarrassingly parallel <https://en.wikipedia.org/wiki/Embarrassingly_parallel>`_
task, yt can do that easily. See the sections below on
:ref:`parallelizing-your-analysis` and :ref:`parallel-time-series-analysis`.
Use of ``piter()``
^^^^^^^^^^^^^^^^^^
If you use parallelism over objects or datasets, you will encounter
the :func:`~yt.data_objects.time_series.DatasetSeries.piter` function.
:func:`~yt.data_objects.time_series.DatasetSeries.piter` is a parallel iterator,
which effectively doles out each item of a DatasetSeries object to a different
processor. In serial processing, you might iterate over a DatasetSeries by:
.. code-block:: python
for dataset in dataset_series:
... # process
But in parallel, you can use ``piter()`` to force each dataset to go to
a different processor:
.. code-block:: python
yt.enable_parallelism()
for dataset in dataset_series.piter():
... # process
In order to store information from the parallel processing step in
a data structure that exists on all of the processors operating in parallel,
we offer the ``storage`` keyword in the
:func:`~yt.data_objects.time_series.DatasetSeries.piter` function.
You may define an empty dictionary and include it as the keyword argument
``storage`` to :func:`~yt.data_objects.time_series.DatasetSeries.piter`.
Then, during the processing step, you can access
this dictionary as the ``sto`` object. After the
loop is finished, the dictionary is re-aggregated from all of the processors,
and you can access the contents:
.. code-block:: python
yt.enable_parallelism()
my_dictionary = {}
for sto, dataset in dataset_series.piter(storage=my_dictionary):
... # process
sto.result = ... # some information processed for this dataset
sto.result_id = ... # some identifier for this dataset
print(my_dictionary)
By default, the dataset series will be divided as equally as possible
among the cores. Often some datasets will require more work than
others. We offer the ``dynamic`` keyword in the
:func:`~yt.data_objects.time_series.DatasetSeries.piter` function to
enable dynamic load balancing with a task queue. Dynamic load
balancing works best with more cores and a variable workload. Here
one process will act as a server to assign the next available dataset
to any free client. For example, a 16 core job will have 15 cores
analyzing the data with 1 core acting as the task manager.
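For example, a brief sketch enabling the task queue on the storage-based loop from above:
.. code-block:: python
yt.enable_parallelism()
my_dictionary = {}
for sto, dataset in dataset_series.piter(storage=my_dictionary, dynamic=True):
    sto.result_id = str(dataset)
    sto.result = dataset.current_time
print(my_dictionary)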
.. _parallelizing-your-analysis:
Parallelizing over Multiple Objects
-----------------------------------
It is easy within yt to parallelize a list of tasks, as long as those tasks
are independent of one another. Using object-based parallelism, the function
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
will automatically split up a list of tasks over the specified number of
processors (or cores). Please see this heavily-commented example:
.. code-block:: python
# As always...
import yt
yt.enable_parallelism()
import glob
# The number 4, below, is the number of processes to parallelize over, which
# is generally equal to the number of MPI tasks the job is launched with.
# If num_procs is set to zero or a negative number, the for loop below
# will be run such that each iteration of the loop is done by a single MPI
# task. Put another way, setting it to zero means that no matter how many
# MPI tasks the job is run with, num_procs will default to the number of
# MPI tasks automatically.
num_procs = 4
# fns is a list of all the simulation data files in the current directory.
fns = glob.glob("./plot*")
fns.sort()
# This dict will store information collected in the loop, below.
# Inside the loop each task will have a local copy of the dict, but
# the dict will be combined once the loop finishes.
my_storage = {}
# In this example, because the storage option is used in the
# parallel_objects function, the loop yields a tuple, which gets used
# as (sto, fn) inside the loop.
# In the loop, sto is essentially my_storage, but a local copy of it.
# If data does not need to be combined after the loop is done, the line
# would look like:
# for fn in parallel_objects(fns, num_procs):
for sto, fn in yt.parallel_objects(fns, num_procs, storage=my_storage):
# Open a data file, remembering that fn is different on each task.
ds = yt.load(fn)
dd = ds.all_data()
# This copies fn and the min/max of density to the local copy of
# my_storage
sto.result_id = fn
sto.result = dd.quantities.extrema(("gas", "density"))
# Makes and saves a plot of the gas density.
p = yt.ProjectionPlot(ds, "x", ("gas", "density"))
p.save()
# At this point, as the loop exits, the local copies of my_storage are
# combined such that all tasks now have an identical and full version of
# my_storage. Until this point, each task is unaware of what the other
# tasks have produced.
# Below, the values in my_storage are printed by only one task. The other
# tasks do nothing.
if yt.is_root():
for fn, vals in sorted(my_storage.items()):
print(fn, vals)
The example above can be modified to loop over anything that can be saved to
a Python list: halos, data files, arrays, and more.
.. _parallel-time-series-analysis:
Parallelization over Multiple Datasets (including Time Series)
--------------------------------------------------------------
The same ``parallel_objects`` machinery discussed above is turned on by
default when using a :class:`~yt.data_objects.time_series.DatasetSeries` object
(see :ref:`time-series-analysis`) to iterate over simulation outputs. The
syntax for this is very simple. As an example, we can use the following script
to find the angular momentum vector in a 1 pc sphere centered on the maximum
density cell in a large number of simulation outputs:
.. code-block:: python
import yt
yt.enable_parallelism()
# Load all of the DD*/output_* files into a DatasetSeries object
# in this case it is a Time Series
ts = yt.load("DD*/output_*")
# Define an empty storage dictionary for collecting information
# in parallel through processing
storage = {}
# Use piter() to iterate over the time series, one proc per dataset
# and store the resulting information from each dataset in
# the storage dictionary
for sto, ds in ts.piter(storage=storage):
sphere = ds.sphere("max", (1.0, "pc"))
sto.result = sphere.quantities.angular_momentum_vector()
sto.result_id = str(ds)
# Print out the angular momentum vector for all of the datasets
for L in sorted(storage.items()):
print(L)
Note that this script can be run in serial or parallel with an arbitrary number
of processors. When running in parallel, each output is given to a different
processor.
You can also request a fixed number of processors to calculate each
angular momentum vector. For example, the following script will calculate each
angular momentum vector using 4 workgroups, splitting up the pool of available
processors. Note that ``parallel=1`` implies that the analysis will be run using
1 workgroup, whereas ``parallel=True`` will run with ``Nprocs`` workgroups.
.. code-block:: python
import yt
yt.enable_parallelism()
ts = yt.DatasetSeries("DD*/output_*", parallel=4)
for ds in ts.piter():
sphere = ds.sphere("max", (1.0, "pc"))
L_vecs = sphere.quantities.angular_momentum_vector()
If you do not want to use ``parallel_objects`` parallelism when using a
DatasetSeries object, set ``parallel = False``. When running python in parallel,
this will use all of the available processors to evaluate the requested
operation on each simulation output. Some care and possibly trial and error
might be necessary to estimate the correct settings for your simulation
outputs.
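For instance, a sketch of the same analysis with ``parallel_objects`` parallelism disabled:
.. code-block:: python
import yt
yt.enable_parallelism()
ts = yt.DatasetSeries("DD*/output_*", parallel=False)
for ds in ts.piter():
    # each output is analyzed in turn by all available processors
    sphere = ds.sphere("max", (1.0, "pc"))
    L_vecs = sphere.quantities.angular_momentum_vector()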
Note, when iterating over several large datasets, running out of memory may
become an issue as the internal data structures associated with each dataset
may not be properly de-allocated at the end of an iteration. If memory use
becomes a problem, it may be necessary to manually delete some of the larger
data structures.
.. code-block:: python
import yt
yt.enable_parallelism()
ts = yt.DatasetSeries("DD*/output_*", parallel=4)
for ds in ts.piter():
# do analysis here
ds.index.clear_all_data()
Multi-level Parallelism
-----------------------
By default, the
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
and :func:`~yt.data_objects.time_series.DatasetSeries.piter` functions will allocate a
single processor to each iteration of the parallelized loop. However, there may be
situations in which it is advantageous to have multiple processors working together
on each loop iteration. Like with any traditional for loop, nested loops with multiple
calls to :func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
can be used to parallelize the functionality within a given loop iteration.
In the example below, we will create projections along the x, y, and z axis of the
density and temperature fields. We will assume a total of 6 processors are available,
allowing us to allocate two processors to each axis and project each field with a
separate processor.
.. code-block:: python
import yt
yt.enable_parallelism()
# load a dataset to project from; the filename here is illustrative
ds = yt.load("RD0035/RedshiftOutput0035")
# assume 6 total cores
# allocate 3 work groups of 2 cores each
for ax in yt.parallel_objects("xyz", njobs=3):
# project each field with one of the two cores in the workgroup
for field in yt.parallel_objects([("gas", "density"), ("gas", "temperature")]):
p = yt.ProjectionPlot(ds, ax, field, weight_field=("gas", "density"))
p.save("figures/")
Note, in the above example, if the inner
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
call were removed from the loop, the two-processor work group would work together to
project each of the density and temperature fields. This is because the projection
functionality itself is parallelized internally.
The :func:`~yt.data_objects.time_series.DatasetSeries.piter` function can also be used
in the above manner with nested
:func:`~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects`
loops to allocate multiple processors to work on each dataset. As discussed above in
:ref:`parallel-time-series-analysis`, the ``parallel`` keyword is used to control
the number of workgroups created for iterating over multiple datasets.
Parallel Performance, Resources, and Tuning
-------------------------------------------
Optimizing parallel jobs in yt is difficult; there are many parameters that
affect how well and quickly the job runs. In many cases, the only way to find
out what the minimum (or optimal) number of processors is, or amount of memory
needed, is through trial and error. However, this section will attempt to
provide some insight into what are good starting values for a given parallel
task.
Chunk Decomposition
+++++++++++++++++++
In general, these types of parallel calculations scale very well with number of
processors. They are also fairly memory-conservative. The two limiting factors
are therefore the number of chunks in the dataset and the speed of the disk the
data is stored on. There is no point in running a parallel job of this kind
with more processors than chunks, because the extra processors will do absolutely
nothing, and will in fact probably just serve to slow down the whole calculation
due to the extra overhead. The speed of the disk is also a consideration - if
it is not a high-end parallel file system, adding more tasks will not speed up
the calculation if the disk is already swamped with activity.
The best advice for these sort of calculations is to run with just a few
processors and go from there, seeing if the runtime improves noticeably.
**Projections, Slices, Cutting Planes and Covering Grids**
Projections, slices and cutting planes are the most common methods of creating
two-dimensional representations of data. All three have been parallelized in a
chunk-based fashion.
* **Projections**: projections are parallelized utilizing a quad-tree approach.
Data is loaded for each processor, typically by a process that consolidates
open/close/read operations, and each grid is then iterated over and cells are
deposited into a data structure that stores values corresponding to positions
in the two-dimensional plane. This provides excellent load balancing, and in
serial is quite fast. However, the operation by which quadtrees are joined
across processors scales poorly; while memory consumption scales well, the
time to completion does not. As such, projections can often be done very
fast when operating only on a single processor! The quadtree algorithm can
be used inline (and, indeed, it is for this reason that it is slow.) It is
recommended that you attempt to project in serial before projecting in
parallel; even for the very largest datasets (Enzo 1024^3 root grid with 7
levels of refinement) in the absence of IO the quadtree algorithm takes only
three minutes or so on a decent processor.
* **Slices**: to generate a slice, chunks that intersect a given slice are iterated
over and their finest-resolution cells are deposited. The chunks are
decomposed via standard load balancing. While this operation is parallel,
**it is almost never necessary to slice a dataset in parallel**, as all data is
loaded on demand anyway. The slice operation has been parallelized so as to
enable slicing when running *in situ*.
* **Cutting planes**: cutting planes are parallelized exactly as slices are.
However, in contrast to slices, because the data-selection operation can be
much more time consuming, cutting planes often benefit from parallelism.
* **Covering Grids**: covering grids are parallelized exactly as slices are.
Object-Based
++++++++++++
Like chunk decomposition, it does not help to run with more processors than the
number of objects to be iterated over.
There is also the matter of the kind of work being done on each object, and
whether it is disk-intensive, cpu-intensive, or memory-intensive.
It is up to the user to figure out what limits the performance of their script,
and to use the correct amount of resources accordingly.
Disk-intensive jobs are limited by the speed of the file system, as above,
and extra processors beyond its capability are likely counter-productive.
It may require some testing or research (e.g. supercomputer documentation)
to find out what the file system is capable of.
If it is cpu-intensive, it's best to use as many processors as possible
and practical.
For a memory-intensive job, each processor needs to be able to allocate enough
memory, which may mean using fewer than the maximum number of tasks per compute
node, and increasing the number of nodes.
The memory used per processor should be calculated, compared to the memory
on each compute node, which dictates how many tasks per node.
After that, the number of processors used overall is dictated by the
disk system or CPU-intensity of the job.
Domain Decomposition
++++++++++++++++++++
The various types of analysis that utilize domain decomposition use it in
different enough ways that they are discussed separately.
**Halo-Finding**
Halo finding, along with the merger tree that uses halo finding, operates on the
particles in the volume, and is therefore mostly chunk-agnostic. Generally, the
biggest concern for halo finding is the amount of memory needed. There is
a subtle art in estimating the amount of memory needed for halo finding, but a
rule of thumb is that the HOP halo finder (:func:`HaloFinder`) is the most
memory intensive, while Friends of Friends (:func:`FOFHaloFinder`) is the
most memory-conservative. For more information, see :ref:`halo-analysis`.
**Volume Rendering**
The simplest way to think about volume rendering is that it load-balances over
the i/o chunks in the dataset. Each processor is given roughly the same sized
volume to operate on. In practice, there are just a few things to keep in mind
when doing volume rendering. First, it only uses a power of two number of
processors. If the job is run with 100 processors, only 64 of them will
actually do anything. Second, the absolute maximum number of processors is the
number of chunks. In order to keep work distributed evenly, typically the
number of processors should be no greater than one-eighth or one-quarter the
number of processors that were used to produce the dataset.
For more information, see :ref:`volume_rendering`.
Additional Tips
---------------
* Don't be afraid to change how a parallel job is run. Change the
number of processors, or memory allocated, and see if things work better
or worse. After all, it's just a computer, it doesn't pass moral judgment!
* Similarly, human time is more valuable than computer time. Try increasing
the number of processors, and see if the runtime drops significantly.
There will be a sweet spot between speed of run and the waiting time in
the job scheduler queue; it may be worth trying to find it.
* If you are using object-based parallelism but doing CPU-intensive computations
on each object, you may find that setting ``num_procs`` equal to the
number of processors per compute node can lead to significant speedups.
By default, most mpi implementations will assign tasks to processors on a
'by-slot' basis, so this setting will tell yt to do computations on a single
object using only the processors on a single compute node. A nice application
for this type of parallelism is calculating a list of derived quantities for
a large number of simulation outputs.
* It is impossible to tune a parallel operation without understanding what's
going on. Read the documentation, look at the underlying code, or talk to
other yt users. Get informed!
* Sometimes it is difficult to know if a job is cpu, memory, or disk
intensive, especially if the parallel job utilizes several of the kinds of
parallelism discussed above. In this case, it may be worthwhile to put
some simple timers in your script (as below) around different parts.
.. code-block:: python
import time
import yt
yt.enable_parallelism()
ds = yt.load("DD0152")
t0 = time.time()
bigstuff, hugestuff = StuffFinder(ds)
BigHugeStuffParallelFunction(ds, bigstuff, hugestuff)
t1 = time.time()
for i in range(1000000):
tinystuff, ministuff = GetTinyMiniStuffOffDisk("in%06d.txt" % i)
array = TinyTeensyParallelFunction(ds, tinystuff, ministuff)
SaveTinyMiniStuffToDisk("out%06d.txt" % i, array)
t2 = time.time()
if yt.is_root():
print(
"BigStuff took {:.5e} sec, TinyStuff took {:.5e} sec".format(t1 - t0, t2 - t1)
)
* Remember that if the script handles disk IO explicitly, and does not use
a built-in yt function to write data to disk,
care must be taken to
avoid `race-conditions <https://en.wikipedia.org/wiki/Race_conditions>`_.
Be explicit about which MPI task writes to disk using a construction
something like this:
.. code-block:: python
if yt.is_root():
file = open("out.txt", "w")
file.write(stuff)
file.close()
* Many supercomputers allow users to ssh into the nodes that their job is
running on.
Many job schedulers send the names of the nodes that are
used in the notification emails, or a command like ``qstat -f NNNN``, where
``NNNN`` is the job ID, will also show this information.
By ssh-ing into nodes, the memory usage of each task can be viewed in
real-time as the job runs (using ``top``, for example),
and can give valuable feedback about the
resources the task requires.
An Advanced Worked Example
--------------------------
Below is a script used to calculate the redshift of first 99.9% ionization in a
simulation. This script was designed to analyze a set of 100 outputs on
Gordon, running on 128 processors. This script goes through five phases:
#. Define a new derived field, which calculates the fraction of ionized
hydrogen as a function only of the total hydrogen density.
#. Load a time series up, specifying ``parallel = 8``. This means that it
will decompose into 8 jobs. So if we ran on 128 processors, we would have
16 processors assigned to each output in the time series.
#. Create a big cube that will hold our results for this set of processors.
Note that this will be only for each output considered by this processor,
and this cube will not necessarily be filled in every cell.
#. For each output, distribute the grids to each of the sixteen processors
working on that output. Each of these takes the max of the ionized
redshift in their zone versus the accumulation cube.
#. Iterate over slabs and find the maximum redshift in each slab of our
accumulation cube.
At the end, the root processor (of the global calculation) writes out an
ionization cube that contains the redshift of first reionization for each zone
across all outputs.
.. literalinclude:: ionization_cube.py
.. _Data-objects:
Data Objects
============
What are Data Objects in yt?
----------------------------
Data objects (also called *Data Containers*) are used in yt as convenience
structures for grouping data in logical ways that make sense in the context
of the dataset as a whole. Some of the data objects are geometrical groupings
of data (e.g. sphere, box, cylinder, etc.). Others represent
data products derived from your dataset (e.g. slices, streamlines, surfaces).
Still other data objects group multiple objects together or filter them
(e.g. data collection, cut region).
To generate standard plots, objects rarely need to be directly constructed.
However, for detailed data inspection as well as hand-crafted derived data,
objects can be exceptionally useful and even necessary.
How to Create and Use an Object
-------------------------------
To create an object, you usually only need a loaded dataset, the name of
the object type, and the relevant parameters for your object. Here is a common
example for creating a ``Region`` object that covers all of your data volume.
.. code-block:: python
import yt
ds = yt.load("RedshiftOutput0005")
ad = ds.all_data()
Alternatively, we could create a sphere object of radius 1 kpc on location
[0.5, 0.5, 0.5]:
.. code-block:: python
import yt
ds = yt.load("RedshiftOutput0005")
sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))
After an object has been created, it can be used as a data_source to certain
tasks like ``ProjectionPlot`` (see
:class:`~yt.visualization.plot_window.ProjectionPlot`), one can compute the
bulk quantities associated with that object (see :ref:`derived-quantities`),
or the data can be examined directly. For example, if you want to figure out
the temperature at all indexed locations in the central sphere of your
dataset you could:
.. code-block:: python
import yt
ds = yt.load("RedshiftOutput0005")
sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))
# Show all temperature values
print(sp["gas", "temperature"])
# Print things in a more human-friendly manner: one temperature at a time
print("(x, y, z) Temperature")
print("-----------------------")
for i in range(sp["gas", "temperature"].size):
print(
"(%f, %f, %f) %f"
% (
sp["gas", "x"][i],
sp["gas", "y"][i],
sp["gas", "z"][i],
sp["gas", "temperature"][i],
)
)
Data objects can also be cloned; for instance:
.. code-block:: python
import yt
ds = yt.load("RedshiftOutput0005")
sp = ds.sphere([0.5, 0.5, 0.5], (1, "kpc"))
sp_copy = sp.clone()
This can be useful for when manually chunking data or exploring different field
parameters.
.. _quickly-selecting-data:
Slicing Syntax for Selecting Data
---------------------------------
yt provides a mechanism for easily selecting data while doing interactive work
on the command line. This allows for region selection based on the full domain
of the object. Selecting in this manner is exposed through a slice-like
syntax. All of these attributes are exposed through the ``RegionExpression``
object, which is an attribute of a ``Dataset`` object, called ``r``.
Getting All The Data
^^^^^^^^^^^^^^^^^^^^
The ``.r`` attribute serves as a persistent means of accessing the full data
from a dataset. You can access this shorthand operation by querying any field
on the ``.r`` object, like so:
.. code-block:: python
ds = yt.load("RedshiftOutput0005")
rho = ds.r["gas", "density"]
This will return a *flattened* array of data. The region expression object
(``r``) doesn't have any derived quantities on it. This is completely
equivalent to this set of statements:
.. code-block:: python
ds = yt.load("RedshiftOutput0005")
dd = ds.all_data()
rho = dd["gas", "density"]
.. warning::
One thing to keep in mind with accessing data in this way is that it is
*persistent*. It is loaded into memory, and then retained until the dataset
is deleted or garbage collected.
Selecting Multiresolution Regions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To select rectilinear regions, where the data is selected the same way that it
is selected in a :ref:`region-reference`, you can utilize slice-like syntax,
supplying start and stop, but not supplying a step argument. This requires
that all three components of the slice be specified. These take a start and a
stop, and are for the three axes in simulation order (if your data is ordered
z, y, x for instance, this would be in z, y, x order).
The slices can have both position and, optionally, unit values. These define
the value with respect to the ``domain_left_edge`` of the dataset. So for
instance, you could specify it like so:
.. code-block:: python
ds.r[(100, "kpc"):(200, "kpc"), :, :]
This would return a region that included everything between 100 kpc from the
left edge of the dataset to 200 kpc from the left edge of the dataset in the
first dimension, and which spans the entire dataset in the second and third
dimensions. By default, if the units are unspecified, they are in the "native"
code units of the dataset.
This works in all types of datasets, as well. For instance, if you have a
geographic dataset (which is usually ordered latitude, longitude, altitude) you
can easily select, for instance, one hemisphere with a region selection:
.. code-block:: python
ds.r[:, -180:0, :]
If you specify a single slice, it will be repeated along all three dimensions.
For instance, this will give all data:
.. code-block:: python
ds.r[:]
And this will select a box running from 0.4 to 0.6 along all three
dimensions:
.. code-block:: python
ds.r[0.4:0.6]
.. _arbitrary-grid-selection:
Selecting Fixed Resolution Regions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt also provides functionality for selecting regions that have been turned into
voxels. This returns an :ref:`arbitrary-grid` object. It can be created by
specifying a complex slice "step", where the start and stop follow the same
rules as above. This is similar to how the numpy ``mgrid`` operation works.
For instance, this code block will generate a grid covering the full domain,
but converted to being 21x35x100 dimensions:
.. code-block:: python
region = ds.r[::21j, ::35j, ::100j]
The left and right edges, as above, can be specified to provide bounds as well.
For instance, to select a 10 meter cube, with 24 cells in each dimension, we
could supply:
.. code-block:: python
region = ds.r[(20, "m"):(30, "m"):24j, (30, "m"):(40, "m"):24j, (7, "m"):(17, "m"):24j]
This can select both particles and mesh fields. Mesh fields will be 3D arrays,
and generated through volume-weighted overlap calculations.
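For example, a quick check of the shape of a mesh field sampled onto such a grid:
.. code-block:: python
region = ds.r[::21j, ::35j, ::100j]
print(region["gas", "density"].shape)  # expected: (21, 35, 100)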
Selecting Slices
^^^^^^^^^^^^^^^^
If one dimension is specified as a single value, that will be the dimension
along which a slice is made. This provides a simple means of generating a
slice from a subset of the data. For instance, to create a slice of a dataset,
you can very simply specify the full domain along two axes:
.. code-block:: python
sl = ds.r[:, :, 0.25]
This can also be very easily plotted:
.. code-block:: python
sl = ds.r[:, :, 0.25]
sl.plot()
This accepts arguments the same way:
.. code-block:: python
sl = ds.r[(20.1, "km"):(31.0, "km"), (504.143, "m"):(1000.0, "m"), (900.1, "m")]
sl.plot()
Making Image Buffers
^^^^^^^^^^^^^^^^^^^^
Using the slicing syntax above for choosing a slice, if you also provide an
imaginary step value you can obtain a
:class:`~yt.visualization.api.FixedResolutionBuffer` of the chosen resolution.
For instance, to obtain a 1024 by 1024 buffer covering the entire
domain but centered at 0.5 in code units, you can do:
.. code-block:: python
frb = ds.r[0.5, ::1024j, ::1024j]
This ``frb`` object then can be queried like a normal fixed resolution buffer,
and it will return arrays of shape (1024, 1024).
Making Rays
^^^^^^^^^^^
The slicing syntax can also be used to select 1D rays of points, whether along
an axis or off-axis. To create a ray along an axis:
.. code-block:: python
ortho_ray = ds.r[(500.0, "kpc"), (200, "kpc"):(300.0, "kpc"), (-2.0, "Mpc")]
To create a ray off-axis, use a single slice between the start and end points
of the ray:
.. code-block:: python
start = [0.1, 0.2, 0.3] # interpreted in code_length
end = [0.4, 0.5, 0.6] # interpreted in code_length
ray = ds.r[start:end]
As for the other slicing options, combinations of unitful quantities with even
different units can be used. Here's a somewhat convoluted (yet working) example:
.. code-block:: python
start = ((500.0, "kpc"), (0.2, "Mpc"), (100.0, "kpc"))
end = ((1.0, "Mpc"), (300.0, "kpc"), (0.0, "kpc"))
ray = ds.r[start:end]
Making Fixed-Resolution Rays
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rays can also be constructed to have fixed resolution if an imaginary step value
is provided, similar to the 2 and 3-dimensional cases described above. This
works for rays directed along an axis:
.. code-block:: python
ortho_ray = ds.r[0.1:0.6:500j, 0.3, 0.2]
or off-axis rays as well:
.. code-block:: python
start = [0.1, 0.2, 0.3] # interpreted in code_length
end = [0.4, 0.5, 0.6] # interpreted in code_length
ray = ds.r[start:end:100j]
Selecting Points
^^^^^^^^^^^^^^^^
Finally, you can quickly select a single point within the domain by providing
a single coordinate for every axis:
.. code-block:: python
pt = ds.r[(10.0, "km"), (200, "m"), (1.0, "km")]
Querying this object for fields will give you the value of the field at that
point.
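For example (the field name here is just for illustration):
.. code-block:: python
pt = ds.r[(10.0, "km"), (200, "m"), (1.0, "km")]
print(pt["gas", "density"])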
.. _available-objects:
Available Objects
-----------------
As noted above, there are numerous types of objects. Here we group them
into:
* *Geometric Objects*
Data is selected based on spatial shapes in the dataset
* *Filtering Objects*
Data is selected based on other field criteria
* *Collection Objects*
Multiple objects grouped together
* *Construction Objects*
Objects represent some sort of data product constructed by additional analysis
If you want to create your own custom data object type, see
:ref:`creating-objects`.
.. _geometric-objects:
Geometric Objects
^^^^^^^^^^^^^^^^^
For 0D, 1D, and 2D geometric objects, if the extent of the object
intersects a grid cell, then the cell is included in the object; however,
for 3D objects the *center* of the cell must be within the object in order
for the grid cell to be incorporated.
0D Objects
""""""""""
**Point**
| Class :class:`~yt.data_objects.selection_data_containers.YTPoint`
| Usage: ``point(coord, ds=None, field_parameters=None, data_source=None)``
| A point defined by a single cell at specified coordinates.
1D Objects
""""""""""
**Ray (Axis-Aligned)**
| Class :class:`~yt.data_objects.selection_data_containers.YTOrthoRay`
| Usage: ``ortho_ray(axis, coord, ds=None, field_parameters=None, data_source=None)``
| A line (of data cells) stretching through the full domain
aligned with one of the x,y,z axes. Defined by an axis and a point
to be intersected. Please see this
:ref:`note about ray data value ordering <ray-data-ordering>`.
**Ray (Arbitrarily-Aligned)**
| Class :class:`~yt.data_objects.selection_data_containers.YTRay`
| Usage: ``ray(start_coord, end_coord, ds=None, field_parameters=None, data_source=None)``
| A line (of data cells) defined by arbitrary start and end coordinates.
Please see this
:ref:`note about ray data value ordering <ray-data-ordering>`.
2D Objects
""""""""""
**Slice (Axis-Aligned)**
| Class :class:`~yt.data_objects.selection_data_containers.YTSlice`
| Usage: ``slice(axis, coord, center=None, ds=None, field_parameters=None, data_source=None)``
| A plane normal to one of the axes and intersecting a particular
coordinate.
**Slice (Arbitrarily-Aligned)**
| Class :class:`~yt.data_objects.selection_data_containers.YTCuttingPlane`
| Usage: ``cutting(normal, coord, north_vector=None, ds=None, field_parameters=None, data_source=None)``
| A plane normal to a specified vector and intersecting a particular
coordinate.
.. _region-reference:
3D Objects
""""""""""
**All Data**
| Function :meth:`~yt.data_objects.static_output.Dataset.all_data`
| Usage: ``all_data(find_max=False)``
| ``all_data()`` is a wrapper on the Box Region class which defaults to
creating a Region covering the entire dataset domain. It is effectively
``ds.region(ds.domain_center, ds.domain_left_edge, ds.domain_right_edge)``.
**Box Region**
| Class :class:`~yt.data_objects.selection_data_containers.YTRegion`
| Usage: ``region(center, left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
| Alternatively: ``box(left_edge, right_edge, fields=None, ds=None, field_parameters=None, data_source=None)``
| A box-like region aligned with the grid axis orientation. It is
defined by a left_edge, a right_edge, and a center. The left_edge
and right_edge are the minimum and maximum bounds in the three axes
respectively. The center is arbitrary and must only be contained within
the left_edge and right_edge. By using the ``box`` wrapper, the center
is assumed to be the midpoint between the left and right edges.
**Disk/Cylinder**
| Class: :class:`~yt.data_objects.selection_data_containers.YTDisk`
| Usage: ``disk(center, normal, radius, height, fields=None, ds=None, field_parameters=None, data_source=None)``
| A cylinder defined by a point at the center of one of the circular bases,
a normal vector to it defining the orientation of the length of the
cylinder, and radius and height values for the cylinder's dimensions.
Note: ``height`` is the distance from midplane to the top or bottom of the
cylinder, i.e., ``height`` is half that of the cylinder object that is
created.
**Ellipsoid**
| Class :class:`~yt.data_objects.selection_data_containers.YTEllipsoid`
| Usage: ``ellipsoid(center, semi_major_axis_length, semi_medium_axis_length, semi_minor_axis_length, semi_major_vector, tilt, fields=None, ds=None, field_parameters=None, data_source=None)``
| An ellipsoid with axis magnitudes set by ``semi_major_axis_length``,
``semi_medium_axis_length``, and ``semi_minor_axis_length``. ``semi_major_vector``
sets the direction of the ``semi_major_axis``. ``tilt`` defines the orientation
of the semi-medium and semi-minor axes.
**Sphere**
| Class :class:`~yt.data_objects.selection_data_containers.YTSphere`
| Usage: ``sphere(center, radius, ds=None, field_parameters=None, data_source=None)``
| A sphere defined by a central coordinate and a radius.
**Minimal Bounding Sphere**
| Class :class:`~yt.data_objects.selection_data_containers.YTMinimalSphere`
| Usage: ``minimal_sphere(points, ds=None, field_parameters=None, data_source=None)``
| A sphere that contains all the points passed as argument.
.. _collection-objects:
Filtering and Collection Objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See also the section on :ref:`filtering-data`.
**Intersecting Regions**
| Most Region objects provide a data_source parameter, which allows you to subselect
| one region from another (in the coordinate system of the DataSet). Note, this can
| easily lead to empty data for non-intersecting regions.
| Usage: ``slice(axis, coord, ds, data_source=sph)``
**Union Regions**
| Usage: ``union()``
| See :ref:`boolean_data_objects`.
**Intersection Regions**
| Usage: ``intersection()``
| See :ref:`boolean_data_objects`.
**Filter**
| Class :class:`~yt.data_objects.selection_data_containers.YTCutRegion`
| Usage: ``cut_region(base_object, conditionals, ds=None, field_parameters=None)``
| A ``cut_region`` is a filter which can be applied to any other data
object. The filter is defined by the conditionals present, which
apply cuts to the data in the object. A ``cut_region`` will work
for either particle fields or mesh fields, but not on both simultaneously.
For more detailed information and examples, see :ref:`cut-regions`.
**Collection of Data Objects**
| Class :class:`~yt.data_objects.selection_data_containers.YTDataCollection`
| Usage: ``data_collection(center, obj_list, ds=None, field_parameters=None)``
| A ``data_collection`` is a list of data objects that can be
sampled and processed as a whole in a single data object.
.. _construction-objects:
Construction Objects
^^^^^^^^^^^^^^^^^^^^
**Fixed-Resolution Region**
| Class :class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`
| Usage: ``covering_grid(level, left_edge, dimensions, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)``
| A 3D region with all data extracted to a single, specified resolution.
See :ref:`examining-grid-data-in-a-fixed-resolution-array`.
**Fixed-Resolution Region with Smoothing**
| Class :class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`
| Usage: ``smoothed_covering_grid(level, left_edge, dimensions, fields=None, ds=None, num_ghost_zones=0, use_pbar=True, field_parameters=None)``
| A 3D region with all data extracted and interpolated to a single,
specified resolution. Identical to covering_grid, except that it
interpolates as necessary from coarse regions to fine. See
:ref:`examining-grid-data-in-a-fixed-resolution-array`.
**Fixed-Resolution Region**
| Class :class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
| Usage: ``arbitrary_grid(left_edge, right_edge, dimensions, ds=None, field_parameters=None)``
| When particles are deposited on to mesh fields, they use the existing
mesh structure, but this may have too much or too little resolution
relative to the particle locations (or it may not exist at all!). An
``arbitrary_grid`` provides a means for generating a new independent mesh
structure for particle deposition and simple mesh field interpolation.
See :ref:`arbitrary-grid` for more information.
**Projection**
| Class :class:`~yt.data_objects.construction_data_containers.YTQuadTreeProj`
| Usage: ``proj(field, axis, weight_field=None, center=None, ds=None, data_source=None, method="integrate", field_parameters=None)``
| A 2D projection of a 3D volume along one of the axis directions.
By default, this is a line integral through the entire simulation volume
(although it can be a subset of that volume specified by a data object
with the ``data_source`` keyword). Alternatively, one can specify
a weight_field and different ``method`` values to change the nature
of the projection outcome. See :ref:`projection-types` for more information.
**Streamline**
| Class :class:`~yt.data_objects.construction_data_containers.YTStreamline`
| Usage: ``streamline(coord_list, length, fields=None, ds=None, field_parameters=None)``
| A ``streamline`` can be traced out by identifying a starting coordinate (or
list of coordinates) and allowing it to trace a vector field, like gas
velocity. See :ref:`streamlines` for more information.
**Surface**
| Class :class:`~yt.data_objects.construction_data_containers.YTSurface`
| Usage: ``surface(data_source, field, field_value)``
| The surface defined by an isocontour in any mesh field. An existing
data object must be provided as the source, as well as a mesh field
and the value of the field at which you desire the isocontour. See
:ref:`extracting-isocontour-information`.
.. _derived-quantities:
Processing Objects: Derived Quantities
--------------------------------------
Derived quantities are a way of calculating some bulk quantities associated
with all of the grid cells contained in a data object.
Derived quantities can be accessed via the ``quantities`` interface.
Here is an example of how to get the angular momentum vector calculated from
all the cells contained in a sphere at the center of our dataset.
.. code-block:: python
import yt
ds = yt.load("my_data")
sp = ds.sphere("c", (10, "kpc"))
print(sp.quantities.angular_momentum_vector())
Some quantities can be calculated for a specific particle type only. For example, to
get the center of mass of only the stars within the sphere:
.. code-block:: python
import yt
ds = yt.load("my_data")
sp = ds.sphere("c", (10, "kpc"))
print(
sp.quantities.center_of_mass(
use_gas=False, use_particles=True, particle_type="star"
)
)
Quickly Processing Data
^^^^^^^^^^^^^^^^^^^^^^^
Most data objects now have multiple numpy-like methods that allow you to
quickly process data. More of these methods will be added over time and added
to this list. Most, if not all, of these map to other yt operations and are
designed as syntactic sugar to slightly simplify otherwise somewhat obtuse
pipelines.
These operations are parallelized.
You can compute the extrema of a field by using the ``max`` or ``min``
functions. This will cache the extrema in between, so calling ``min`` right
after ``max`` will be considerably faster. Here is an example.
.. code-block:: python
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
reg = ds.r[0.3:0.6, 0.2:0.4, 0.9:0.95]
min_rho = reg.min(("gas", "density"))
max_rho = reg.max(("gas", "density"))
This is equivalent to:
.. code-block:: python
min_rho, max_rho = reg.quantities.extrema(("gas", "density"))
The ``max`` operation can also compute the maximum intensity projection:
.. code-block:: python
proj = reg.max(("gas", "density"), axis="x")
proj.plot()
This is equivalent to:
.. code-block:: python
proj = ds.proj(("gas", "density"), "x", data_source=reg, method="max")
proj.plot()
The same can be done with the ``min`` operation, computing a minimum
intensity projection:
.. code-block:: python
proj = reg.min(("gas", "density"), axis="x")
proj.plot()
This is equivalent to:
.. code-block:: python
proj = ds.proj(("gas", "density"), "x", data_source=reg, method="min")
proj.plot()
You can also compute the ``mean`` value, which accepts a field, axis, and weight
function. If the axis is not specified, it will return the average value of
the specified field, weighted by the weight argument. The weight argument
defaults to ``ones``, which performs an arithmetic average. For instance:
.. code-block:: python
mean_rho = reg.mean(("gas", "density"))
rho_by_vol = reg.mean(("gas", "density"), weight=("gas", "cell_volume"))
This is equivalent to:
.. code-block:: python
mean_rho = reg.quantities.weighted_average_quantity(
    ("gas", "density"), weight=("index", "ones")
)
rho_by_vol = reg.quantities.weighted_average_quantity(
    ("gas", "density"), weight=("gas", "cell_volume")
)
If an axis is provided, it will project along that axis and return it to you:
.. code-block:: python
rho_proj = reg.mean(("gas", "temperature"), axis="y", weight=("gas", "density"))
rho_proj.plot()
You can also compute the ``std`` (standard deviation), which accepts a field,
axis, and weight function. If the axis is not specified, it will
return the standard deviation of the specified field, weighted by the weight
argument. The weight argument defaults to ``ones``. For instance:
.. code-block:: python
std_rho = reg.std(("gas", "density"))
std_rho_by_vol = reg.std(("gas", "density"), weight=("gas", "cell_volume"))
This is equivalent to:
.. code-block:: python
std_rho = reg.quantities.weighted_standard_deviation(
    ("gas", "density"), weight=("index", "ones")
)
std_rho_by_vol = reg.quantities.weighted_standard_deviation(
    ("gas", "density"), weight=("gas", "cell_volume")
)
If an axis is provided, it will project along that axis and return it to you:
.. code-block:: python
vy_std = reg.std(("gas", "velocity_y"), axis="y", weight=("gas", "density"))
vy_std.plot()
The ``sum`` function will add all the values in the data object. It accepts a
field and, optionally, an axis. If the axis is left unspecified, it will sum
the values in the object:
.. code-block:: python
vol = reg.sum(("gas", "cell_volume"))
If the axis is specified, it will compute a projection using the method ``sum``
(which does *not* take into account varying path length!) and return that to
you.
.. code-block:: python
cell_count = reg.sum(("index", "ones"), axis="z")
cell_count.plot()
To compute a projection where the path length *is* taken into account, you can
use the ``integrate`` function:
.. code-block:: python
proj = reg.integrate(("gas", "density"), "x")
All of these projections supply the data object as their base input.
Often, it can be useful to sample a field at the minimum and maximum of a
different field. You can use the ``argmax`` and ``argmin`` operations to do
this.
.. code-block:: python
reg.argmin(("gas", "density"), axis=("gas", "temperature"))
This will return the temperature at the minimum density.
If you don't specify an ``axis``, it will return the spatial position of
the minimum (or, for ``argmax``, the maximum) value of the queried field. Here is an example::
x, y, z = reg.argmin(("gas", "density"))
Available Derived Quantities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Angular Momentum Vector**
| Class :class:`~yt.data_objects.derived_quantities.AngularMomentumVector`
| Usage: ``angular_momentum_vector(use_gas=True, use_particles=True, particle_type='all')``
| The mass-weighted average angular momentum vector of the particles, gas,
or both. The quantity can be calculated for all particles or a given
particle_type only.
**Bulk Velocity**
| Class :class:`~yt.data_objects.derived_quantities.BulkVelocity`
| Usage: ``bulk_velocity(use_gas=True, use_particles=True, particle_type='all')``
| The mass-weighted average velocity of the particles, gas, or both.
The quantity can be calculated for all particles or a given
particle_type only.
**Center of Mass**
| Class :class:`~yt.data_objects.derived_quantities.CenterOfMass`
| Usage: ``center_of_mass(use_cells=True, use_particles=False, particle_type='all')``
| The location of the center of mass. By default, it is computed for
  the *non-particle* data in the object, but it can be calculated for
  particles, gas, or both. The quantity can be
  calculated for all particles or a given particle_type only.
**Extrema**
| Class :class:`~yt.data_objects.derived_quantities.Extrema`
| Usage: ``extrema(fields, non_zero=False)``
| The extrema of a field or list of fields.
**Maximum Location Sampling**
| Class :class:`~yt.data_objects.derived_quantities.SampleAtMaxFieldValues`
| Usage: ``sample_at_max_field_values(fields, sample_fields)``
| The value of sample_fields at the maximum value in fields.
**Minimum Location Sampling**
| Class :class:`~yt.data_objects.derived_quantities.SampleAtMinFieldValues`
| Usage: ``sample_at_min_field_values(fields, sample_fields)``
| The value of sample_fields at the minimum value in fields.
**Minimum Location**
| Class :class:`~yt.data_objects.derived_quantities.MinLocation`
| Usage: ``min_location(fields)``
| The minimum of a field or list of fields as well
as the x,y,z location of that minimum.
**Maximum Location**
| Class :class:`~yt.data_objects.derived_quantities.MaxLocation`
| Usage: ``max_location(fields)``
| The maximum of a field or list of fields as well
as the x,y,z location of that maximum.
**Spin Parameter**
| Class :class:`~yt.data_objects.derived_quantities.SpinParameter`
| Usage: ``spin_parameter(use_gas=True, use_particles=True, particle_type='all')``
| The spin parameter for the baryons using the particles, gas, or both. The
quantity can be calculated for all particles or a given particle_type only.
**Total Mass**
| Class :class:`~yt.data_objects.derived_quantities.TotalMass`
| Usage: ``total_mass()``
| The total mass of the object as a tuple of (total gas, total particle)
mass.
**Total of a Field**
| Class :class:`~yt.data_objects.derived_quantities.TotalQuantity`
| Usage: ``total_quantity(fields)``
| The sum of a given field (or list of fields) over the entire object.
**Weighted Average of a Field**
| Class :class:`~yt.data_objects.derived_quantities.WeightedAverageQuantity`
| Usage: ``weighted_average_quantity(fields, weight)``
| The weighted average of a field (or list of fields)
over an entire data object. If you want an unweighted average,
then set your weight to be the field: ``ones``.
**Weighted Standard Deviation of a Field**
| Class :class:`~yt.data_objects.derived_quantities.WeightedStandardDeviation`
| Usage: ``weighted_standard_deviation(fields, weight)``
| The weighted standard deviation of a field (or list of fields)
over an entire data object and the weighted mean.
If you want an unweighted standard deviation, then
set your weight to be the field: ``ones``.
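As a brief illustration, several of these can be called directly from the
``quantities`` interface of any data object (the sphere here is just an
example):

.. code-block:: python

    sp = ds.sphere("c", (10, "kpc"))
    # Total mass is returned as a (gas, particle) pair.
    gas_mass, particle_mass = sp.quantities.total_mass()
    mi, ma = sp.quantities.extrema(("gas", "temperature"))
    bv = sp.quantities.bulk_velocity(use_gas=True, use_particles=False)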
.. _arbitrary-grid:
Arbitrary Grids Objects
-----------------------
The covering grid and smoothed covering grid objects mandate that they be
exactly aligned with the mesh. This is a
holdover from the time when yt was used exclusively for data that came in
regularly structured grid patches, and does not necessarily work as well for
data that is composed of discrete objects like particles. To augment this, the
:class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid` object
was created, which enables construction of meshes (onto which particles can be
deposited or smoothed) in arbitrary regions. This eliminates any assumptions
on yt's part about how the data is organized, and will allow for more
fine-grained control over visualizations.
As an example, one can construct an arbitrary grid and then query the
deposited particle density, like so:
.. code-block:: python
import yt
ds = yt.load("snapshot_010.hdf5")
obj = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.99, 0.99, 0.99], dims=[128, 128, 128])
print(obj["deposit", "all_density"])
While these cannot yet be used as input to projections or slices, slices and
projections can be taken of the data in them and visualized by hand.
These objects, as of yt 3.3, are now also able to "voxelize" mesh fields. This
means that you can query the "density" field and it will return the density
field as deposited, identically to how it would be deposited in a fixed
resolution buffer. Note that this means that contributions from misaligned or
partially-overlapping cells are added in a volume-weighted way, which makes it
inappropriate for some types of analysis.
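For example, a minimal sketch of querying a voxelized mesh field (assuming a
grid-based dataset ``ds``; the bounds and dimensions are arbitrary):

.. code-block:: python

    ag = ds.arbitrary_grid([0.0, 0.0, 0.0], [0.25, 0.25, 0.25], dims=[64, 64, 64])
    # The gas density is deposited onto the new 64^3 mesh in a
    # volume-weighted way, just as in a fixed resolution buffer.
    print(ag["gas", "density"].shape)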
.. _boolean_data_objects:
Combining Objects: Boolean Data Objects
---------------------------------------
A special type of data object is the *boolean* data object, which works with
data selection objects of any dimension. It is built by relating already existing
data objects with the bitwise operators for AND, OR and XOR, as well as the
subtraction operator. These are created by using the operators ``&`` for an
intersection ("AND"), ``|`` for a union ("OR"), ``^`` for an exclusive or
("XOR"), and ``+`` and ``-`` for addition ("OR") and subtraction ("NEG").
Here are some examples:
.. code-block:: python
import yt
ds = yt.load("snapshot_010.hdf5")
sp1 = ds.sphere("c", (0.1, "unitary"))
sp2 = ds.sphere(sp1.center + 2.0 * sp1.radius, (0.2, "unitary"))
sp3 = ds.sphere("c", (0.05, "unitary"))
new_obj = sp1 + sp2
cutout = sp1 - sp3
sp4 = sp1 ^ sp2
sp5 = sp1 & sp2
Note that the ``+`` operation and the ``|`` operation are identical. When
multiple objects are to be combined in an intersection or a union, the
``intersection`` and ``union`` data objects can be used instead; these
will yield slightly higher performance than a sequence of calls to ``+`` or
``&``. For instance:
.. code-block:: python
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sp1 = ds.sphere((0.1, 0.2, 0.3), (0.05, "unitary"))
sp2 = ds.sphere((0.2, 0.2, 0.3), (0.10, "unitary"))
sp3 = ds.sphere((0.3, 0.2, 0.3), (0.15, "unitary"))
isp = ds.intersection([sp1, sp2, sp3])
usp = ds.union([sp1, sp2, sp3])
The ``isp`` and ``usp`` objects will act the same as a set of chained ``&`` and
``|`` operations (respectively) but are somewhat easier to construct.
.. _extracting-connected-sets:
Connected Sets and Clump Finding
--------------------------------
The underlying machinery used in :ref:`clump_finding` is accessible from any
data object. This includes the ability to obtain and examine topologically
connected sets. These sets are identified by examining cells between two
threshold values and connecting them. What is returned to the user is a list
of the intervals of values found, and extracted regions that contain only those
cells that are connected.
To use this, call
:meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.extract_connected_sets` on
any 3D data object. This requires a field, the number of levels of level sets to
extract, the min and the max value between which sets will be identified, and
whether or not to conduct it in log space.
.. code-block:: python
sp = ds.sphere("max", (1.0, "pc"))
contour_values, connected_sets = sp.extract_connected_sets(
("gas", "density"), 3, 1e-30, 1e-20
)
The first item, ``contour_values``, will be an array of the min value for each
set of level sets. The second (``connected_sets``) will be a dict of dicts.
The key for the first (outer) dict is the level of the contour, corresponding
to ``contour_values``. The inner dict returned is keyed by the contour ID. It
contains :class:`~yt.data_objects.selection_data_containers.YTCutRegion`
objects. These can be queried just as any other data object. The clump finder
(:ref:`clump_finding`) differs from the above method in that the contour
identification is performed recursively within each individual structure, and
structures can be kept or remerged later based on additional criteria, such as
gravitational boundedness.
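A short sketch of walking the returned structure (the field choice is
illustrative):

.. code-block:: python

    for level, contours in connected_sets.items():
        for contour_id, region in contours.items():
            # Each region is a YTCutRegion and can be queried like any
            # other data object.
            print(level, contour_id, region["gas", "mass"].sum())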
.. _object-serialization:
Storing and Loading Objects
---------------------------
Often, when operating interactively or via the scripting interface, it is
convenient to save an object to disk and then restart the calculation later or
transfer the data from a container to another filesystem. This can be
particularly useful when working with extremely large datasets. Field data
can be saved to disk in a format that allows for it to be reloaded just like
a regular dataset. For information on how to do this, see
:ref:`saving-data-containers`.
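As a minimal sketch (assuming a sphere ``sp`` like the ones above and a
``("gas", "density")`` field), saving and reloading might look like:

.. code-block:: python

    fn = sp.save_as_dataset(fields=[("gas", "density")])
    sp_ds = yt.load(fn)
    # The saved container is available through the ``data`` attribute.
    print(sp_ds.data["gas", "density"])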
.. _time-series-analysis:
Time Series Analysis
====================
Often, one wants to analyze a continuous set of outputs from a simulation in a
uniform manner. A simple example would be to calculate the peak density in a
set of outputs that were written out. The problem with time series analysis in
yt is generally one of verbosity and clunkiness. Typically, one sets up a
loop:
.. code-block:: python
import yt

for dsi in range(30):
    fn = "DD%04i/DD%04i" % (dsi, dsi)
    ds = yt.load(fn)
    process_output(ds)
But this is not really very nice. This ends up requiring a lot of maintenance.
The :class:`~yt.data_objects.time_series.DatasetSeries` object has been
designed to remove some of this clunkiness and present an easier, more unified
approach to analyzing sets of data. Even better,
:class:`~yt.data_objects.time_series.DatasetSeries` works in parallel by
default (see :ref:`parallel-computation`), so you can use a ``DatasetSeries``
object to quickly and easily parallelize your analysis. Since doing the same
analysis task on many simulation outputs is 'embarrassingly' parallel, this
naturally allows for almost arbitrary speedup - limited only by the number of
available processors and the number of simulation outputs.
The idea behind the current implementation of time series analysis is that
the underlying data and the operators that act on that data can and should be
distinct. There are several operators provided, as well as facilities for
creating your own, and these operators can be applied either to datasets on the
whole or to subregions of individual datasets.
The simplest mechanism for creating a ``DatasetSeries`` object is to pass a glob
pattern to the ``yt.load`` function.
.. code-block:: python
import yt
ts = yt.load("DD????/DD????")
This will create a new time series, populated with all datasets that match the
pattern "DD" followed by four digits. This object, here called ``ts``, can now
be analyzed in bulk. Alternately, you can specify an already formatted list of
filenames directly to the :class:`~yt.data_objects.time_series.DatasetSeries`
initializer:
.. code-block:: python
import yt
ts = yt.DatasetSeries(["DD0030/DD0030", "DD0040/DD0040"])
Analyzing Each Dataset In Sequence
----------------------------------
The :class:`~yt.data_objects.time_series.DatasetSeries` object has two primary
methods of iteration. The first is a very simple iteration, where each object
is returned for iteration:
.. code-block:: python
import yt
ts = yt.load("*/*.index")
for ds in ts:
print(ds.current_time)
This can also operate in parallel, using
:meth:`~yt.data_objects.time_series.DatasetSeries.piter`. For more examples,
see:
* :ref:`parallel-time-series-analysis`
* The cookbook recipe for :ref:`cookbook-time-series-analysis`
* :class:`~yt.data_objects.time_series.DatasetSeries`
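For instance, a minimal sketch of the same kind of loop run in parallel with
``piter`` (the field choice is illustrative):

.. code-block:: python

    import yt

    ts = yt.load("DD????/DD????")
    for ds in ts.piter():
        ad = ds.all_data()
        print(ds.current_time, ad.quantities.extrema(("gas", "density")))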
.. _analyzing-an-entire-simulation:
Analyzing an Entire Simulation
------------------------------
.. note:: Implemented for the Enzo, Gadget, OWLS, and Exodus II frontends.
The parameter file used to run a simulation contains all the information
necessary to know what datasets should be available. The ``simulation``
convenience function allows one to create a ``DatasetSeries`` object of all
or a subset of all data created by a single simulation.
To instantiate, give the parameter file and the simulation type.
.. code-block:: python
import yt
my_sim = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
Then, create a ``DatasetSeries`` object with the
:meth:`frontends.enzo.simulation_handling.EnzoSimulation.get_time_series`
function. With no additional keywords, the time series will include every
dataset. If the ``find_outputs`` keyword is set to ``True``, a search of the
simulation directory will be performed looking for potential datasets. These
datasets will be temporarily loaded in order to figure out the time and
redshift associated with them. This can be used when simulation data was
created in a non-standard way, making it difficult to guess the corresponding
time and redshift information.
.. code-block:: python
my_sim.get_time_series()
After this, time series analysis can be done normally.
.. code-block:: python
for ds in my_sim.piter():
all_data = ds.all_data()
print(all_data.quantities.extrema(("gas", "density")))
Additional keywords can be given to
:meth:`frontends.enzo.simulation_handling.EnzoSimulation.get_time_series`
to select a subset of the total data:
* ``time_data`` (*bool*): Whether or not to include time outputs when
gathering datasets for time series. Default: True. (Enzo only)
* ``redshift_data`` (*bool*): Whether or not to include redshift outputs
when gathering datasets for time series. Default: True. (Enzo only)
* ``initial_time`` (*float*): The earliest time for outputs to be included.
If None, the initial time of the simulation is used. This can be used in
combination with either ``final_time`` or ``final_redshift``. Default: None.
* ``final_time`` (*float*): The latest time for outputs to be included. If
None, the final time of the simulation is used. This can be used in
combination with either ``initial_time`` or ``initial_redshift``. Default: None.
* ``times`` (*list*): A list of times for which outputs will be found.
Default: None.
* ``initial_redshift`` (*float*): The earliest redshift for outputs to be
included. If None, the initial redshift of the simulation is used. This
can be used in combination with either ``final_time`` or ``final_redshift``.
Default: None.
* ``final_redshift`` (*float*): The latest redshift for outputs to be included.
If None, the final redshift of the simulation is used. This can be used
in combination with either ``initial_time`` or ``initial_redshift``.
Default: None.
* ``redshifts`` (*list*): A list of redshifts for which outputs will be found.
Default: None.
* ``initial_cycle`` (*float*): The earliest cycle for outputs to be
included. If None, the initial cycle of the simulation is used. This can
only be used with final_cycle. Default: None. (Enzo only)
* ``final_cycle`` (*float*): The latest cycle for outputs to be included.
If None, the final cycle of the simulation is used. This can only be used
in combination with initial_cycle. Default: None. (Enzo only)
* ``tolerance`` (*float*): Used in combination with ``times`` or ``redshifts``
keywords, this is the tolerance within which outputs are accepted given
the requested times or redshifts. If None, the nearest output is always
taken. Default: None.
* ``parallel`` (*bool*/*int*): If True, the generated ``DatasetSeries`` will
divide the work such that a single processor works on each dataset. If an
integer is supplied, the work will be divided into that number of jobs.
Default: True.
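For example, a hedged sketch that restricts the series to a redshift range,
skips time-based outputs, and divides the work into eight jobs:

.. code-block:: python

    my_sim.get_time_series(
        time_data=False,
        initial_redshift=3.0,
        final_redshift=0.0,
        parallel=8,
    )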
.. _domain-analysis:
Domain-Specific Analysis
========================
yt powers a number of modules that provide specialized analysis tools
relevant to one or a few domains. Some of these are internal to yt,
but many exist as external packages, either maintained by the yt
project or independently.
Internal Analysis Modules
-------------------------
These modules exist within yt itself.
.. note::
As of yt version 3.5, most of the astrophysical analysis tools
have been moved to the :ref:`yt-astro` and :ref:`attic`
packages. See below for more information.
.. toctree::
:maxdepth: 2
cosmology_calculator
clump_finding
xray_emission_fields
xray_data_README
External Analysis Modules
-------------------------
These are external packages maintained by the yt project.
.. _yt-astro:
yt Astro Analysis
^^^^^^^^^^^^^^^^^
Source: https://github.com/yt-project/yt_astro_analysis
Documentation: https://yt-astro-analysis.readthedocs.io/
The ``yt_astro_analysis`` package houses most of the astrophysical
analysis tools that were formerly in the ``yt.analysis_modules``
import. These include halo finding, custom halo analysis, synthetic
observations, and exports to radiative transfer codes. See
:ref:`yt_astro_analysis:modules` for a list of available
functionality.
.. _attic:
yt Attic
^^^^^^^^
Source: https://github.com/yt-project/yt_attic
Documentation: https://yt-attic.readthedocs.io/
The ``yt_attic`` repository contains former yt analysis modules that have
fallen by the wayside. These may have small bugs or were simply
not kept up to date as yt evolved. Tools in here are looking for
a new owner and a new home. If you find something in here that
you'd like to bring back to life, either by adding it to
:ref:`yt-astro` or as part of your own package, you are welcome
to it! If you'd like any help, let us know! See
:ref:`yt_attic:attic-modules` for an inventory of the attic.
Extensions
----------
There are a number of independent, yt-related packages for things
like visual effects, interactive widgets, synthetic absorption
spectra, X-ray observations, and merger-trees. See the
`yt Extensions <http://yt-project.org/extensions.html>`_ page for
a list of available extension packages.
.. _clump_finding:
Clump Finding
=============
The clump finder uses a contouring algorithm to identify topologically
disconnected structures within a dataset. This works by first creating a
single contour over the full range of the contouring field, then continually
increasing the lower value of the contour until it reaches the maximum value
of the field. As disconnected structures are identified as separate contours,
the routine continues recursively through each object, creating a hierarchy of
clumps. Individual clumps can be kept or removed from the hierarchy based on
the result of user-specified functions, such as checking for gravitational
boundedness. A sample recipe can be found in :ref:`cookbook-find_clumps`.
Setting up the Clump Finder
---------------------------
The clump finder requires a data object (see :ref:`data-objects`) and a field
over which the contouring is to be performed. The data object is then used
to create the initial
:class:`~yt.data_objects.level_sets.clump_handling.Clump` object that
acts as the base for clump finding.
.. code:: python
import yt
from yt.data_objects.level_sets.api import *
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc"))
master_clump = Clump(data_source, ("gas", "density"))
Clump Validators
----------------
At this point, every isolated contour will be considered a clump,
whether this is physical or not. Validator functions can be added to
determine if an individual contour should be considered a real clump.
These functions are specified with the
:func:`~yt.data_objects.level_sets.clump_handling.Clump.add_validator`
function. Currently, two validators exist: a minimum number of cells and gravitational
boundedness.
.. code:: python
master_clump.add_validator("min_cells", 20)
master_clump.add_validator("gravitationally_bound", use_particles=False)
As many validators as desired can be added, and a clump is only kept if all
return True. If not, a clump is remerged into its parent. Custom validators
can easily be added. A validator function need only accept a ``Clump`` object
and return True or False.
.. code:: python
def _minimum_gas_mass(clump, min_mass):
return clump["gas", "mass"].sum() >= min_mass
add_validator("minimum_gas_mass", _minimum_gas_mass)
The :func:`~yt.data_objects.level_sets.clump_validators.add_validator`
function adds the validator to a registry that can
be accessed by the clump finder. Then, the validator can be added to the
clump finding just like the others.
.. code:: python
master_clump.add_validator("minimum_gas_mass", ds.quan(1.0, "Msun"))
Running the Clump Finder
------------------------
Clump finding then proceeds by calling the
:func:`~yt.data_objects.level_sets.clump_handling.find_clumps` function.
This function accepts the
:class:`~yt.data_objects.level_sets.clump_handling.Clump` object, the initial
minimum and maximum of the contouring field, and the step size. The lower value
of the contour finder will be continually multiplied by the step size.
.. code:: python
c_min = data_source["gas", "density"].min()
c_max = data_source["gas", "density"].max()
step = 2.0
find_clumps(master_clump, c_min, c_max, step)
Calculating Clump Quantities
----------------------------
By default, a number of quantities will be calculated for each clump when the
clump finding process has finished. The default quantities are: ``total_cells``,
``mass``, ``mass_weighted_jeans_mass``, ``volume_weighted_jeans_mass``,
``max_grid_level``, ``min_number_density``, and ``max_number_density``.
Additional items can be added with the
:func:`~yt.data_objects.level_sets.clump_handling.Clump.add_info_item`
function.
.. code:: python
master_clump.add_info_item("total_cells")
Just like the validators, custom info items can be added by defining functions
that minimally accept a
:class:`~yt.data_objects.level_sets.clump_handling.Clump` object and return
a format string to be printed and the value. These are then added to the list
of available info items by calling
:func:`~yt.data_objects.level_sets.clump_info_items.add_clump_info`:
.. code:: python
def _mass_weighted_jeans_mass(clump):
jeans_mass = clump.data.quantities.weighted_average_quantity(
"jeans_mass", ("gas", "mass")
).in_units("Msun")
return "Jeans Mass (mass-weighted): %.6e Msolar." % jeans_mass
add_clump_info("mass_weighted_jeans_mass", _mass_weighted_jeans_mass)
Then, add it to the list:
.. code:: python
master_clump.add_info_item("mass_weighted_jeans_mass")
Once you have run the clump finder, you should be able to access the data for
the info item you have defined via the ``info`` attribute of a ``Clump`` object:
.. code:: python
clump = leaf_clumps[0]
print(clump.info["mass_weighted_jeans_mass"])
Besides the quantities calculated by default, the following are available:
``center_of_mass`` and ``distance_to_main_clump``.
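These can be requested in the same way:

.. code:: python

   master_clump.add_info_item("center_of_mass")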
Working with Clumps
-------------------
After the clump finding has finished, the master clump will represent the top
of a hierarchy of clumps. The ``children`` attribute within a
:class:`~yt.data_objects.level_sets.clump_handling.Clump` object
contains a list of all sub-clumps. Each sub-clump is also a
:class:`~yt.data_objects.level_sets.clump_handling.Clump` object
with its own ``children`` attribute, and so on.
.. code:: python
print(master_clump["gas", "density"])
print(master_clump.children)
print(master_clump.children[0]["gas", "density"])
The entire clump tree can be traversed with a simple loop:
.. code:: python
for clump in master_clump:
print(clump.clump_id)
The ``leaves`` attribute of a ``Clump`` object will return a list of the
individual clumps that have no children of their own (the leaf clumps).
.. code:: python
# Get a list of just the leaf nodes.
leaf_clumps = master_clump.leaves
print(leaf_clumps[0]["gas", "density"])
print(leaf_clumps[0]["all", "particle_mass"])
print(leaf_clumps[0].quantities.total_mass())
Visualizing Clumps
------------------
Clumps can be visualized using the ``annotate_clumps`` callback.
.. code:: python
prj = yt.ProjectionPlot(ds, 2, ("gas", "density"), center="c", width=(20, "kpc"))
prj.annotate_clumps(leaf_clumps)
prj.save("clumps")
Saving and Reloading Clump Data
-------------------------------
The clump tree can be saved as a reloadable dataset with the
:func:`~yt.data_objects.level_sets.clump_handling.Clump.save_as_dataset`
function. This will save all info items that have been calculated as well as
any field values specified with the *fields* keyword. This function
can be called for any clump in the tree, saving that clump and all those
below it.
.. code:: python
fn = master_clump.save_as_dataset(fields=["density", "particle_mass"])
The clump tree can then be reloaded as a regular dataset. The ``tree`` attribute
associated with the dataset provides access to the clump tree. The tree can be
iterated over in the same fashion as the original tree.
.. code:: python
ds_clumps = yt.load(fn)
for clump in ds_clumps.tree:
print(clump.clump_id)
The ``leaves`` attribute returns a list of all leaf clumps.
.. code:: python
print(ds_clumps.leaves)
Info items for each clump can be accessed with the ``"clump"`` field type. Gas
or grid fields should be accessed using the ``"grid"`` field type and particle
fields should be accessed using the specific particle type.
.. code:: python
my_clump = ds_clumps.leaves[0]
print(my_clump["clump", "mass"])
print(my_clump["grid", "density"])
print(my_clump["all", "particle_mass"])
> Note: If you came here trying to figure out how to create simulated X-ray photons and observations,
you should go [here](http://hea-www.cfa.harvard.edu/~jzuhone/pyxsim/) instead.
This functionality provides the ability to create metallicity-dependent X-ray luminosity, emissivity, and photon emissivity fields for a given photon energy range. This works by interpolating from emission tables created from the photoionization code [Cloudy](https://www.nublado.org/) or the collisional ionization database [AtomDB](http://www.atomdb.org). These can be downloaded from https://yt-project.org/data from the command line like so:
`# Put the data in a directory you specify`
`yt download cloudy_emissivity_v2.h5 /path/to/data`
`# Put the data in the location set by "supp_data_dir"`
`yt download apec_emissivity_v3.h5 supp_data_dir`
The data path can be a directory on disk, or it can be "supp_data_dir", which will download the data to the directory specified by the `"supp_data_dir"` yt configuration entry. It is easiest to put these files in the directory from which you will be running yt or `"supp_data_dir"`, but see the note below about putting them in alternate locations.
Emission fields can be made for any energy interval between 0.1 keV and 100 keV, and will always be created for luminosity $(\rm{erg~s^{-1}})$, emissivity $\rm{(erg~s^{-1}~cm^{-3})}$, and photon emissivity $\rm{(photons~s^{-1}~cm^{-3})}$. The only required arguments are the
dataset object, and the minimum and maximum energies of the energy band. However, typically one needs to decide what will be used for the metallicity. This can either be a floating-point value representing a spatially constant metallicity, or a prescription for a metallicity field, e.g. `("gas", "metallicity")`. For this first example, where the dataset has no metallicity field, we'll just assume $Z = 0.3~Z_\odot$ everywhere:
```
import yt
ds = yt.load(
"GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150", default_species_fields="ionized"
)
xray_fields = yt.add_xray_emissivity_field(
ds, 0.5, 7.0, table_type="apec", metallicity=0.3
)
```
> Note: If you place the HDF5 emissivity tables in a location other than the current working directory or the location
specified by the "supp_data_dir" configuration value, you will need to specify it in the call to
`add_xray_emissivity_field`:
`xray_fields = yt.add_xray_emissivity_field(ds, 0.5, 7.0, data_dir="/path/to/data", table_type='apec', metallicity=0.3)`
Having made the fields, one can see which fields were made:
```
print(xray_fields)
```
The luminosity field is useful for summing up in regions like this:
```
sp = ds.sphere("c", (2.0, "Mpc"))
print(sp.quantities.total_quantity(("gas", "xray_luminosity_0.5_7.0_keV")))
```
Whereas the emissivity fields may be useful in derived fields or for plotting:
```
slc = yt.SlicePlot(
ds,
"z",
[
("gas", "xray_emissivity_0.5_7.0_keV"),
("gas", "xray_photon_emissivity_0.5_7.0_keV"),
],
width=(0.75, "Mpc"),
)
slc.show()
```
The emissivity and the luminosity fields take the values one would see in the frame of the source. However, if one wishes to make projections of the X-ray emission from a cosmologically distant object, the energy band will be redshifted. For this case, one can supply a `redshift` parameter and a `Cosmology` object (either from the dataset or one made on your own) to compute X-ray intensity fields along with the emissivity and luminosity fields.
This example shows how to do that, where we also use a spatially dependent metallicity field and the Cloudy tables instead of the APEC tables we used previously:
```
ds2 = yt.load("D9p_500/10MpcBox_HartGal_csf_a0.500.d", default_species_fields="ionized")
# In this case, use the redshift and cosmology from the dataset,
# but in theory you could put in something different
xray_fields2 = yt.add_xray_emissivity_field(
ds2,
0.5,
2.0,
redshift=ds2.current_redshift,
cosmology=ds2.cosmology,
metallicity=("gas", "metallicity"),
table_type="cloudy",
)
```
Now, one can see that two new fields have been added, corresponding to X-ray intensity / surface brightness when projected:
```
print(xray_fields2)
```
Note also that the energy range now corresponds to the *observer* frame, whereas in the source frame the energy range is between `emin*(1+redshift)` and `emax*(1+redshift)`. Let's zoom in on a galaxy and make a projection of the energy intensity field:
```
prj = yt.ProjectionPlot(
ds2, "x", ("gas", "xray_intensity_0.5_2.0_keV"), center="max", width=(40, "kpc")
)
prj.set_zlim("xray_intensity_0.5_2.0_keV", 1.0e-32, 5.0e-24)
prj.show()
```
> Warning: The X-ray fields depend on the number density of hydrogen atoms, given by the yt field
`H_nuclei_density`. In the case of the APEC model, this assumes that all of the hydrogen in your
dataset is ionized, whereas in the Cloudy model the ionization level is taken into account. If
this field is not defined (either in the dataset or by the user), it will be constructed using
abundance information from your dataset. Finally, if your dataset contains no abundance information,
a primordial hydrogen mass fraction (X = 0.76) will be assumed.
Finally, if you want to place the source at a local, non-cosmological distance, you can forego the `redshift` and `cosmology` arguments and supply a `dist` argument instead, which is either a `(value, unit)` tuple or a `YTQuantity`. Note that here the redshift is assumed to be zero.
```
xray_fields3 = yt.add_xray_emissivity_field(
ds2,
0.5,
2.0,
dist=(1.0, "Mpc"),
metallicity=("gas", "metallicity"),
table_type="cloudy",
)
prj = yt.ProjectionPlot(
ds2,
"x",
("gas", "xray_photon_intensity_0.5_2.0_keV"),
center="max",
width=(40, "kpc"),
)
prj.set_zlim("xray_photon_intensity_0.5_2.0_keV", 1.0e-24, 5.0e-16)
prj.show()
```
.. _cosmology-calculator:
Cosmology Calculator
====================
The cosmology calculator can be used to calculate cosmological distances and
times given a set of cosmological parameters. A cosmological dataset, ``ds``,
will automatically have a cosmology calculator configured with the correct
parameters associated with it as ``ds.cosmology``. A standalone
:class:`~yt.utilities.cosmology.Cosmology` calculator object can be created
in the following way:
.. code-block:: python
from yt.utilities.cosmology import Cosmology
co = Cosmology(
hubble_constant=0.7,
omega_matter=0.3,
omega_lambda=0.7,
omega_curvature=0.0,
omega_radiation=0.0,
)
Once created, various distance calculations as well as conversions between
redshift and time are available:
.. notebook-cell::
from yt.utilities.cosmology import Cosmology
co = Cosmology()
# Hubble distance (c / h)
print("hubble distance", co.hubble_distance())
# distance from z = 0 to 0.5
print("comoving radial distance", co.comoving_radial_distance(0, 0.5).in_units("Mpccm/h"))
# transverse distance
print("transverse distance", co.comoving_transverse_distance(0, 0.5).in_units("Mpccm/h"))
# comoving volume
print("comoving volume", co.comoving_volume(0, 0.5).in_units("Gpccm**3"))
# angular diameter distance
print("angular diameter distance", co.angular_diameter_distance(0, 0.5).in_units("Mpc/h"))
# angular scale
print("angular scale", co.angular_scale(0, 0.5).in_units("Mpc/degree"))
# luminosity distance
print("luminosity distance", co.luminosity_distance(0, 0.5).in_units("Mpc/h"))
# time between two redshifts
print("lookback time", co.lookback_time(0, 0.5).in_units("Gyr"))
# critical density
print("critical density", co.critical_density(0))
# Hubble parameter at a given redshift
print("hubble parameter", co.hubble_parameter(0).in_units("km/s/Mpc"))
# convert time after Big Bang to redshift
my_t = co.quan(8, "Gyr")
print("z from t", co.z_from_t(my_t))
# convert redshift to time after Big Bang
print("t from z", co.t_from_z(0.5).in_units("Gyr"))
.. warning::
Cosmological distance calculations return values that are either
in the comoving or proper frame, depending on the specific quantity. For
simplicity, the proper and comoving frames are set equal to each other
within the cosmology calculator. This means that for some distance value,
``x``, ``x.to("Mpc")`` and ``x.to("Mpccm")`` will be the same. The user should take
care to understand which reference frame is correct for the given calculation.
The helper functions, ``co.quan``
and ``co.arr`` exist to create unitful ``YTQuantities`` and ``YTArray`` with the
unit registry of the cosmology calculator. For more information on the usage
and meaning of each calculation, consult the reference documentation at
:ref:`cosmology-calculator-ref`.
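For example, a small sketch of using these helpers with the calculator
created above:

.. code-block:: python

    my_t = co.quan(13.8, "Gyr")
    my_d = co.arr([100.0, 500.0], "Mpccm/h")
    print(co.z_from_t(my_t))
    print(my_d.to("Mpc"))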
.. _xray_data_README:
Auxiliary Data Files for use with yt's Photon Simulator
=======================================================
Included in the `xray_data <https://yt-project.org/data/xray_data.tar.gz>`_ package are a number of files that you may find
useful when working with yt's X-ray `photon_simulator
<photon_simulator.html>`_ analysis module. They have been tested to give spectral fitting results
consistent with input parameters.
Spectral Model Tables
---------------------
* tbabs_table.h5:
Tabulated values of the galactic absorption cross-section in HDF5
format, generated from the routines at http://pulsar.sternwarte.uni-erlangen.de/wilms/research/tbabs/
ARFs and RMFs
-------------
We have tested the following ARFs and RMFs with the photon
simulator. These can be used to generate a very simplified
representation of an X-ray observation, using a uniform, on-axis
response. For more accurate models of X-ray observations we suggest
using MARX or SIMX (detailed below_).
* Chandra: chandra_ACIS-S3_onaxis_arf.fits, chandra_ACIS-S3_onaxis_rmf.fits
Generated from the CIAO tools, on-axis on the ACIS-S3 chip.
* XMM-Newton: pn-med.arf, pn-med.rmf
EPIC pn CCDs (medium filter), taken from SIMX
* Astro-H: sxt-s_100208_ts02um_intall.arf, ah_sxs_7ev_basefilt_20090216.rmf
SXT-S+SXS responses taken from http://astro-h.isas.jaxa.jp/researchers/sim/response.html
* NuSTAR: nustarA.arf, nustarA.rmf
Averaged responses for NuSTAR telescope A generated by Dan Wik (NASA/GSFC)
.. _below:
Other Useful Things Not Included Here
-------------------------------------
* AtomDB: http://www.atomdb.org
FITS table data for emission lines and continuum emission. Must have
it installed to use the TableApecModel spectral model.
* PyXspec: https://heasarc.gsfc.nasa.gov/xanadu/xspec/python/html/
Python interface to the XSPEC spectral-fitting program. Two of the
spectral models for the photon simulator use it.
* MARX: https://space.mit.edu/ASC/MARX/
Detailed ray-trace simulations of Chandra.
* SIMX: http://hea-www.harvard.edu/simx/
Simulates a photon-counting detector's response to an input source,
including simplified models of telescopes.
.. _aboutyt:
About yt
========
.. contents::
:depth: 1
:local:
:backlinks: none
What is yt?
-----------
yt is a toolkit for analyzing and visualizing quantitative data. Originally
written to analyze 3D grid-based astrophysical simulation data,
it has grown to handle any kind of data represented in a 2D or 3D volume.
yt is a Python-based open source project and is open for anyone to use or
contribute code. The entire source code and history is available to all
at https://github.com/yt-project/yt .
.. _who-is-yt:
Who is yt?
----------
As an open-source project, yt has a large number of user-developers.
In September of 2014, the yt developer community collectively decided to bestow
the title of *member* on individuals who had contributed in a significant way
to the project. For a list of those members and a description of their
contributions to the code, see
`our members website. <https://yt-project.org/members.html>`_
History of yt
-------------
yt was originally created to study datasets generated by cosmological
simulations of galaxy and star formation conducted by the simulation code Enzo.
After expanding to address data output by other simulation platforms, it further
broadened to include alternative, grid-free methods of simulation -- particularly
particles and unstructured meshes.
With the release of yt 4.0, we are proud that the community has continued to
expand, that yt continues to participate in the broader ecosystem, and that the
development process is continuing to improve in both inclusivity and openness.
For a more personal retrospective by the original author, Matthew Turk, you can
see this `blog post from
2017 <https://medium.com/@matthewturk/10-years-of-yt-c93b2f1cef8c>`_.
How do I contact yt?
--------------------
If you have any questions about the code, please contact the `yt users email
list <https://mail.python.org/archives/list/[email protected]/>`_. If
you're having other problems, please follow the steps in
:ref:`asking-for-help`, particularly including Slack and GitHub issues.
How do I cite yt?
-----------------
If you use yt in a publication, we'd very much appreciate a citation! You
should feel free to cite the `ApJS paper
<https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T>`_ with the following BibTeX
entry: ::
@ARTICLE{2011ApJS..192....9T,
author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and {Skory}, S. and
{Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
journal = {The Astrophysical Journal Supplement Series},
archivePrefix = "arXiv",
eprint = {1011.3514},
primaryClass = "astro-ph.IM",
keywords = {cosmology: theory, methods: data analysis, methods: numerical },
year = 2011,
month = jan,
volume = 192,
eid = {9},
pages = {9},
doi = {10.1088/0067-0049/192/1/9},
adsurl = {https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
While this paper is somewhat out of date -- and certainly does not include the
appropriate list of authors -- we are preparing a new method paper as well as
preparing a new strategy for ensuring equal credit distribution for
contributors. Some of this work can be found at the `yt-4.0-paper
<https://github.com/yt-project/yt-4.0-paper/>`_ repository.
# Loading Spherical Data
With version 3.0, yt gained the ability to load data from non-Cartesian systems. This support is still being extended, but here is an example of how to load spherical data from a regularly-spaced grid. For irregularly spaced grids, a similar setup can be used, but the `load_hexahedral_mesh` method will have to be used instead.
Note that in yt, "spherical" means that it is ordered $r$, $\theta$, $\phi$, where $\theta$ is the colatitude measured from the zenith (running from $0$ to $\pi$) and $\phi$ is the azimuthal angle (running from $0$ to $2\pi$).
We first start out by loading yt.
```
import numpy as np
import yt
```
Now, we create a few derived fields. The first three are just straight translations of the Cartesian coordinates, so that we can see where we are located in the data, and understand what we're seeing. The final one is just a fun field that is some combination of the three coordinates, and will vary in all dimensions.
```
@yt.derived_field(name="sphx", units="cm", take_log=False, sampling_type="cell")
def sphx(field, data):
return np.cos(data["phi"]) * np.sin(data["theta"]) * data["r"]
@yt.derived_field(name="sphy", units="cm", take_log=False, sampling_type="cell")
def sphy(field, data):
return np.sin(data["phi"]) * np.sin(data["theta"]) * data["r"]
@yt.derived_field(name="sphz", units="cm", take_log=False, sampling_type="cell")
def sphz(field, data):
return np.cos(data["theta"]) * data["r"]
@yt.derived_field(name="funfield", units="cm", take_log=False, sampling_type="cell")
def funfield(field, data):
return (np.sin(data["phi"]) ** 2 + np.cos(data["theta"]) ** 2) * (
1.0 * data["r"].uq + data["r"]
)
```
## Loading Data
Now we can actually load our data. We use the `load_uniform_grid` function here. Normally, the first argument would be a dictionary of field data, where the keys were the field names and the values the field data arrays. Here, we're just going to look at derived fields, so we supply an empty one.
The next few arguments are the number of dimensions, the bounds, and we then specify the geometry as spherical.
```
ds = yt.load_uniform_grid(
{},
[128, 128, 128],
bbox=np.array([[0.0, 1.0], [0.0, np.pi], [0.0, 2 * np.pi]]),
geometry="spherical",
)
```
## Looking at Data
Now we can take slices. The first thing we will try is making a slice of data along the "phi" axis, here $\pi/2$, which will be along the y axis in the positive direction. We use the `.slice` attribute, which creates a slice, and then we convert this into a plot window. Note that here 2 is used to indicate the third axis (0-indexed) which for spherical data is $\phi$.
This is the manual way of creating a plot -- below, we'll use the standard, automatic ways. Note that the coordinates run from $-r$ to $r$ along the $z$ axis and from $0$ to $r$ along the $R$ axis. We use the capital $R$ to indicate that it's the $R$ along the $x-y$ plane.
```
s = ds.slice(2, np.pi / 2)
p = s.to_pw("funfield", origin="native")
p.set_zlim("all", 0.0, 4.0)
p.show()
```
We can also slice along $r$. For now, this creates a regular grid with *incorrect* units for phi and theta. We are currently exploring two other options -- a simple aitoff projection, and fixing it to use the correct units as-is.
```
s = yt.SlicePlot(ds, "r", "funfield")
s.set_zlim("all", 0.0, 4.0)
s.show()
```
We can also slice at constant $\theta$. But, this is a weird thing! We're slicing at a constant colatitude. What this means is that when thought of in a Cartesian domain, this slice is actually a cone. The axes have been labeled appropriately, to indicate that these are not exactly the $x$ and $y$ axes, but instead differ by a factor of $\sin(\theta)$.
```
s = yt.SlicePlot(ds, "theta", "funfield")
s.set_zlim("all", 0.0, 4.0)
s.show()
```
We've seen lots of the `funfield` plots, but we can also look at the Cartesian axes. This next plot plots the Cartesian $x$, $y$ and $z$ values on a $\theta$ slice. Because we're not supplying an argument to the `center` parameter, yt will place it at the center of the $\theta$ axis, which will be at $\pi/2$, where it will be aligned with the $x-y$ plane. The slight change in `sphz` results from the cells themselves migrating, and plotting the center of those cells.
```
s = yt.SlicePlot(ds, "theta", ["sphx", "sphy", "sphz"])
s.show()
```
We can do the same with the $\phi$ axis.
```
s = yt.SlicePlot(ds, "phi", ["sphx", "sphy", "sphz"])
s.show()
```
Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
yt to use its advanced visualization and analysis facilities. The only
requirement is that your data can be represented as three-dimensional NumPy arrays with a consistent grid structure. What follows are some common examples of loading in generic array data that you may find useful.
## Generic Unigrid Data
The simplest case is that of a single grid of data spanning the domain, with one or more fields. The data could be generated from a variety of sources; we'll just give three common examples:
### Data generated "on-the-fly"
The most common example is that of data that is generated in memory from the currently running script or notebook.
```
import numpy as np
import yt
```
In this example, we'll just create a 3-D array of random floating-point data using NumPy:
```
arr = np.random.random(size=(64, 64, 64))
```
To load this data into yt, we need to associate it with a field. The `data` dictionary consists of one or more fields, each consisting of a tuple of a NumPy array and a unit string. Then, we can call `load_uniform_grid`:
```
data = {"density": (arr, "g/cm**3")}
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox, nprocs=64)
```
`load_uniform_grid` takes the following arguments and optional keywords:
* `data` : This is a dict of numpy arrays, where the keys are the field names
* `domain_dimensions` : The domain dimensions of the unigrid
* `length_unit` : The unit that corresponds to `code_length`, can be a string, tuple, or floating-point number
* `bbox` : Size of computational domain in units of `code_length`
* `nprocs` : If greater than 1, will create this number of subarrays out of data
* `sim_time` : The simulation time in seconds
* `mass_unit` : The unit that corresponds to `code_mass`, can be a string, tuple, or floating-point number
* `time_unit` : The unit that corresponds to `code_time`, can be a string, tuple, or floating-point number
* `velocity_unit` : The unit that corresponds to `code_velocity`
* `magnetic_unit` : The unit that corresponds to `code_magnetic`, i.e. the internal units used to represent magnetic field strengths. NOTE: if you want magnetic field units to be in the SI unit system, you must specify it here, e.g. `magnetic_unit=(1.0, "T")`
* `periodicity` : A tuple of booleans that determines whether the data will be treated as periodic along each axis
* `geometry` : The geometry of the dataset, can be `cartesian`, `cylindrical`, `polar`, `spherical`, `geographic` or `spectral_cube`
* `default_species_fields` : if set to `ionized` or `neutral`, default species fields are accordingly created for H and He which also set mean molecular weight
* `axis_order` : The order of the axes in the data array, e.g. `("z", "y", "x")` with cartesian geometry
* `cell_widths` : If set, specify the cell widths along each dimension. Must be consistent with the `domain_dimensions` argument
* `parameters` : A dictionary of dataset parameters, useful for storing dataset metadata
* `dataset_name` : The name of the dataset. Stream datasets will use this value in place of a filename (in image prefixing, etc.)
This example creates a yt-native dataset `ds` that will treat your array as a
density field in cubic domain of 3 Mpc edge size and simultaneously divide the
domain into `nprocs` = 64 chunks, so that you can take advantage
of the underlying parallelism.
The optional unit keyword arguments allow for the default units of the dataset to be set. They can be:
* A string, e.g. `length_unit="Mpc"`
* A tuple, e.g. `mass_unit=(1.0e14, "Msun")`
* A floating-point value, e.g. `time_unit=3.1557e13`
In the latter case, the unit is assumed to be cgs.
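As a small illustration (the floating-point value below is just the number of centimeters in one Mpc), these three calls would specify the same length unit:
```
ds1 = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc", bbox=bbox)
ds2 = yt.load_uniform_grid(data, arr.shape, length_unit=(1.0, "Mpc"), bbox=bbox)
ds3 = yt.load_uniform_grid(data, arr.shape, length_unit=3.0857e24, bbox=bbox)
```
The `ds` created above with `length_unit="Mpc"` is the one we continue to use below.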
The resulting `ds` functions exactly like a dataset like any other yt can handle--it can be sliced, and we can show the grid boundaries:
```
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.set_cmap(("gas", "density"), "Blues")
slc.annotate_grids(cmap=None)
slc.show()
```
Particle fields are detected as one-dimensional fields, and are added as one-dimensional arrays in
a similar manner to the three-dimensional grid fields:
```
posx_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posy_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posz_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
data = {
"density": (np.random.random(size=(64, 64, 64)), "Msun/kpc**3"),
"particle_position_x": (posx_arr, "code_length"),
"particle_position_y": (posy_arr, "code_length"),
"particle_position_z": (posz_arr, "code_length"),
}
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(
data,
data["density"][0].shape,
length_unit=(1.0, "Mpc"),
mass_unit=(1.0, "Msun"),
bbox=bbox,
nprocs=4,
)
```
In this example only the particle position fields have been assigned. If no particle arrays are supplied, then the number of particles is assumed to be zero. Take a slice, and overlay particle positions:
```
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.set_cmap(("gas", "density"), "Blues")
slc.annotate_particles(0.25, p_size=12.0, col="Red")
slc.show()
```
### HDF5 data
HDF5 is a convenient format to store data. If you have unigrid data stored in an HDF5 file, it is possible to load it into memory and then use `load_uniform_grid` to get it into yt:
```
from os.path import join
import h5py
from yt.config import ytcfg
data_dir = ytcfg.get("yt", "test_data_dir")
from yt.utilities.physical_ratios import cm_per_kpc
f = h5py.File(
join(data_dir, "UnigridData", "turb_vels.h5"), "r"
) # Read-only access to the file
```
The HDF5 file handle's keys correspond to the datasets stored in the file:
```
print(f.keys())
```
We need to add some unit information. It may be stored in the file somewhere, or we may know it from another source. In this case, the units are simply cgs:
```
units = [
"gauss",
"gauss",
"gauss",
"g/cm**3",
"erg/cm**3",
"K",
"cm/s",
"cm/s",
"cm/s",
"cm/s",
"cm/s",
"cm/s",
]
```
We can iterate over the items in the file handle and the units to get the data into a dictionary, which we will then load:
```
data = {k: (v[()], u) for (k, v), u in zip(f.items(), units)}
bbox = np.array([[-0.5, 0.5], [-0.5, 0.5], [-0.5, 0.5]])
ds = yt.load_uniform_grid(
data,
data["Density"][0].shape,
length_unit=250.0 * cm_per_kpc,
bbox=bbox,
nprocs=8,
periodicity=(False, False, False),
)
```
In this case, the data came from a simulation which was 250 kpc on a side. An example projection of two fields:
```
prj = yt.ProjectionPlot(
ds, "z", ["z-velocity", "Temperature", "Bx"], weight_field="Density"
)
prj.set_log("z-velocity", False)
prj.set_log("Bx", False)
prj.show()
```
### Volume Rendering Loaded Data
Volume rendering requires defining a `TransferFunction` to map data to color and opacity and a `camera` to create a viewport and render the image.
```
# Find the min and max of the field
mi, ma = ds.all_data().quantities.extrema("Temperature")
# Reduce the dynamic range
mi = mi.value + 1.5e7
ma = ma.value - 0.81e7
```
Define the properties and size of the `camera` viewport:
```
# Choose a vector representing the viewing direction.
L = [0.5, 0.5, 0.5]
# Define the center of the camera to be the domain center
c = ds.domain_center[0]
# Define the width of the image
W = 1.5 * ds.domain_width[0]
# Define the number of pixels to render
Npixels = 512
```
Create the scene and transfer function, add a `camera` to it, and render the image:
```
sc = yt.create_scene(ds, "Temperature")
dd = ds.all_data()
source = sc[0]
source.log_field = False
tf = yt.ColorTransferFunction((mi, ma), grey_opacity=False)
tf.map_to_colormap(mi, ma, scale=15.0, colormap="cmyt.algae")
source.set_transfer_function(tf)
sc.add_source(source)
cam = sc.add_camera()
cam.width = W
cam.center = c
cam.normal_vector = L
cam.north_vector = [0, 0, 1]
sc.show(sigma_clip=4)
```
### FITS image data
The FITS file format is a common astronomical format for 2-D images, but it can store three-dimensional data as well. The [AstroPy](https://www.astropy.org) project has modules for FITS reading and writing, which were incorporated from the [PyFITS](http://www.stsci.edu/institute/software_hardware/pyfits) library.
```
import astropy.io.fits as pyfits
# Or, just import pyfits if that's what you have installed
```
Using `pyfits` we can open a FITS file. If we call `info()` on the file handle, we can figure out some information about the file's contents. The file in this example has a primary HDU (header-data-unit) with no data, and three HDUs with 3-D data. In this case, the data consists of three velocity fields:
```
f = pyfits.open(join(data_dir, "UnigridData", "velocity_field_20.fits"))
f.info()
```
We can put it into a dictionary in the same way as before, but we slice the file handle `f` so that we don't use the `PrimaryHDU`. `hdu.name` is the field name and `hdu.data` is the actual data. Each of these velocity fields is in km/s. We can check that we got the correct fields.
```
data = {}
for hdu in f[1:]:  # skip the PrimaryHDU, which carries no data
name = hdu.name.lower()
data[name] = (hdu.data, "km/s")
print(data.keys())
```
The velocity field names in this case are slightly different than the standard yt field names for velocity fields, so we will reassign the field names:
```
data["velocity_x"] = data.pop("x-velocity")
data["velocity_y"] = data.pop("y-velocity")
data["velocity_z"] = data.pop("z-velocity")
```
Now we load the data into yt. Let's assume that the box is 1 Mpc on a side. Since these are velocity fields, we can overlay velocity vectors on slices, just as if we had loaded in data from a supported code.
```
ds = yt.load_uniform_grid(data, data["velocity_x"][0].shape, length_unit=(1.0, "Mpc"))
slc = yt.SlicePlot(
ds, "x", [("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z")]
)
for ax in "xyz":
slc.set_log(("gas", f"velocity_{ax}"), False)
slc.annotate_velocity()
slc.show()
```
## Generic AMR Data
In a similar fashion to unigrid data, data gridded into rectangular patches at varying levels of resolution may also be loaded into yt. In this case, a list of grid dictionaries should be provided, with the requisite information about each grid's properties. This example sets up two grids: a top-level grid (`level == 0`) covering the entire domain and a subgrid at `level == 1`.
```
grid_data = [
{
"left_edge": [0.0, 0.0, 0.0],
"right_edge": [1.0, 1.0, 1.0],
"level": 0,
"dimensions": [32, 32, 32],
},
{
"left_edge": [0.25, 0.25, 0.25],
"right_edge": [0.75, 0.75, 0.75],
"level": 1,
"dimensions": [32, 32, 32],
},
]
```
We'll just fill each grid with random density data, scaled by the grid refinement level.
```
for g in grid_data:
g["density"] = (np.random.random(g["dimensions"]) * 2 ** g["level"], "g/cm**3")
```
Particle fields are supported by adding 1-dimensional arrays to each `grid`. If a grid has no particles, the particle fields still have to be defined since they are defined elsewhere; set them to empty NumPy arrays:
```
grid_data[0]["particle_position_x"] = (
np.array([]),
"code_length",
) # No particles, so set empty arrays
grid_data[0]["particle_position_y"] = (np.array([]), "code_length")
grid_data[0]["particle_position_z"] = (np.array([]), "code_length")
grid_data[1]["particle_position_x"] = (
np.random.uniform(low=0.25, high=0.75, size=1000),
"code_length",
)
grid_data[1]["particle_position_y"] = (
np.random.uniform(low=0.25, high=0.75, size=1000),
"code_length",
)
grid_data[1]["particle_position_z"] = (
np.random.uniform(low=0.25, high=0.75, size=1000),
"code_length",
)
```
Then, call `load_amr_grids`:
```
ds = yt.load_amr_grids(grid_data, [32, 32, 32])
```
`load_amr_grids` also takes the same keywords `bbox` and `sim_time` as `load_uniform_grid`. We could have also specified the length, time, velocity, and mass units in the same manner as before. Let's take a slice:
```
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.annotate_particles(0.25, p_size=15.0, col="Pink")
slc.show()
```
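As noted above, `load_amr_grids` accepts the same `bbox` and `sim_time` keywords as `load_uniform_grid`, along with the unit keywords. A call providing them might look like this sketch (the bounding box and unit values are arbitrary, chosen only for illustration):
```
ds = yt.load_amr_grids(
    grid_data,
    [32, 32, 32],
    bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
    sim_time=0.0,
    length_unit=(1.0, "Mpc"),
    mass_unit=(1.0, "Msun"),
)
```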
## Multiple Particle Types
For both uniform grid data and AMR data, one can specify particle fields with multiple types if the particle field names are given as field tuples instead of strings (the default particle type is `"io"`):
```
posxr_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posyr_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
poszr_arr = np.random.uniform(low=-1.5, high=1.5, size=10000)
posxb_arr = np.random.uniform(low=-1.5, high=1.5, size=20000)
posyb_arr = np.random.uniform(low=-1.5, high=1.5, size=20000)
poszb_arr = np.random.uniform(low=-1.5, high=1.5, size=20000)
data = {
("gas", "density"): (np.random.random(size=(64, 64, 64)), "Msun/kpc**3"),
("red", "particle_position_x"): (posxr_arr, "code_length"),
("red", "particle_position_y"): (posyr_arr, "code_length"),
("red", "particle_position_z"): (poszr_arr, "code_length"),
("blue", "particle_position_x"): (posxb_arr, "code_length"),
("blue", "particle_position_y"): (posyb_arr, "code_length"),
("blue", "particle_position_z"): (poszb_arr, "code_length"),
}
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(
data,
data["gas", "density"][0].shape,
length_unit=(1.0, "Mpc"),
mass_unit=(1.0, "Msun"),
bbox=bbox,
nprocs=4,
)
```
We can now see we have multiple particle types:
```
dd = ds.all_data()
print(ds.particle_types)
print(dd["red", "particle_position_x"].size)
print(dd["blue", "particle_position_x"].size)
print(dd["all", "particle_position_x"].size)
```
## Caveats for Loading Generic Array Data
* Particles may be difficult to integrate.
* Data must already reside in memory before loading it in to yt, whether it is generated at runtime or loaded from disk.
* Some functions may behave oddly, and parallelism will be disappointing or non-existent in most cases.
* No consistency checks are performed on the hierarchy.
* Consistency between particle positions and grids is not checked; `load_amr_grids` assumes that particle positions associated with one grid are not bounded within another grid at a higher level, so this must be ensured by the user prior to loading the grid data.
.. _examining-data:
Loading and Examining Data
==========================
Nominally, one should just be able to run ``yt.load()`` on a dataset and start
computing; however, there may be additional notes associated with different
data formats as described below. Furthermore, we provide methods for loading
data from unsupported data formats in :ref:`loading-numpy-array`,
:ref:`generic-particle-data`, and :ref:`loading-spherical-data`. Lastly, if
you want to examine the raw data for your particular dataset, visit
:ref:`low-level-data-inspection`.
.. toctree::
:maxdepth: 2
loading_data
generic_array_data
generic_particle_data
loading_via_functions
spherical_data
low_level_inspection
.. _low-level-data-inspection:
Low-Level Data Inspection: Accessing Raw Data
=============================================
yt can not only provide high-level access to data, such as through slices,
projections, object queries and the like, but it can also provide low-level
access to the raw data.
.. note:: This section is tuned for patch- or block-based simulations. Future
versions of yt will enable more direct access to particle and oct
based simulations. For now, these are represented as patches, with
the attendant properties.
For a more basic introduction, see :ref:`quickstart` and more specifically
:ref:`data_inspection`.
.. _examining-grid-hierarchies:
Examining Grid Hierarchies
--------------------------
yt organizes grids in a hierarchical fashion; a coarser grid that contains (or
overlaps with) a finer grid is referred to as its parent. yt organizes these
only a single level of refinement at a time. To access grids, use the ``grids``
attribute on a :class:`~yt.geometry.grid_geometry_handler.GridIndex` object. (For
fast operations, a number of additional arrays prefixed with ``grid`` are also
available, such as ``grid_left_edges`` and so on.) Each element of this array is
an instance of :class:`~yt.data_objects.grid_patch.AMRGridPatch`, which can be
queried for either data or index information.
The :class:`~yt.data_objects.grid_patch.AMRGridPatch` object itself provides
the following attributes:
* ``Children``: a list of grids contained within this one, of one higher level
of refinement
* ``Parent``: a single object or a list of objects this grid is contained
within, one level of refinement coarser
* ``child_mask``: a mask of 0's and 1's, representing where no finer data is
available in refined grids (1) or where this grid is covered by finer regions
(0). Note that to get back the final data contained within a grid, one can
multiply a field by this attribute (see the short example after this list).
* ``child_indices``: a mask of booleans, where False indicates no finer data
is available. This is essentially the inverse of ``child_mask``.
* ``child_index_mask``: a mask of indices into the ``ds.index.grids`` array of the
child grids.
* ``LeftEdge``: the left edge, in native code coordinates, of this grid
* ``RightEdge``: the right edge, in native code coordinates, of this grid
* ``dds``: the width of a cell in this grid
* ``id``: the id (not necessarily the index) of this grid. Defined such that
subtracting the property ``_id_offset`` gives the index into ``ds.index.grids``.
* ``NumberOfParticles``: the number of particles in this grid
* ``OverlappingSiblings``: a list of sibling grids that this grid overlaps
with. Likely only defined for Octree-based codes.
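As a brief illustration of the ``child_mask`` attribute, the following sketch
(assuming ``ds`` is an already-loaded patch-based dataset) zeroes out the cells
of a grid that are covered by finer grids, leaving only the data this grid
contributes at its own level:
.. code-block:: python
g = ds.index.grids[1043]
rho_final = g["gas", "density"] * g.child_mask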
In addition, the method
:meth:`~yt.data_objects.grid_patch.AMRGridPatch.get_global_startindex` can be
used to get the integer coordinates of the upper left edge. These integer
coordinates are defined with respect to the current level; this means that they
are the offset of the left edge, with respect to the left edge of the domain,
divided by the local ``dds``.
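For example, the following sketch (again assuming an already-loaded ``ds``)
shows how the integer coordinates relate to the left edge and cell width:
.. code-block:: python
g = ds.index.grids[1043]
start_index = g.get_global_startindex()
# On the grid's own level, this should match
# (LeftEdge - domain_left_edge) / dds, cast to integers.
print(start_index)
print(((g.LeftEdge - ds.domain_left_edge) / g.dds).astype("int64"))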
To traverse a series of grids, this type of construction can be used:
.. code-block:: python
g = ds.index.grids[1043]
g2 = g.Children[1].Children[0]
print(g2.LeftEdge)
.. _examining-grid-data:
Examining Grid Data
-------------------
Once you have identified a grid you wish to inspect, there are two ways to
examine data. You can either ask the grid to read the data and pass it to you
as normal, or you can manually intercept the data from the IO handler and
examine it before it has been unit converted. This allows for much more raw
data inspection.
To access data that has been read in the typical fashion and unit-converted as
normal, you can access the grid as you would a normal object:
.. code-block:: python
g = ds.index.grids[1043]
print(g["gas", "density"])
print(g["gas", "density"].min())
To access the raw data (as found in the file), use
.. code-block:: python
g = ds.index.grids[1043]
rho = g["gas", "density"].in_base("code")
.. _finding-data-at-fixed-points:
Finding Data at Fixed Points
----------------------------
One of the most common questions asked of data is, what is the value *at this
specific point*. While there are several ways to find out the answer to this
question, a few helper routines are provided as well. To identify the
finest-resolution (i.e., most canonical) data at a given point, use
the point data object::
from yt.units import kpc
point_obj = ds.point([30, 75, 80]*kpc)
density_at_point = point_obj['gas', 'density']
The point data object works just like any other yt data object. It is special
because it is the only zero-dimensional data object: it will only return data at
the exact point specified when creating the point data object. For more
information about yt data objects, see :ref:`Data-objects`.
If you need to find field values at many points, the
:meth:`~yt.data_objects.static_output.Dataset.find_field_values_at_points`
function may be more efficient. This function returns a nested list of field
values at multiple points in the simulation volume. For example, if one wanted
to find the value of a mesh field at the location of the particles in a
simulation, one could do::
ad = ds.all_data()
ppos = ad["all", "particle_position"]
ppos_den_vel = ds.find_field_values_at_points(
[("gas", "density"), ("gas", "velocity_x")],
ppos
)
In this example, ``ppos_den_vel`` will be a list of arrays. The first array will
contain the density values at the particle positions, the second will contain
the x velocity values at the particle positions.
.. _examining-grid-data-in-a-fixed-resolution-array:
Examining Grid Data in a Fixed Resolution Array
-----------------------------------------------
If you have a dataset, either AMR or single resolution, and you want to just
stick it into a fixed resolution numpy array for later examination, then you
want to use a :ref:`Covering Grid <available-objects>`. You must specify the
maximum level at which to sample the data, a left edge of the data where you
will start, and the resolution at which you want to sample.
For example, let's use the :ref:`sample dataset <getting-sample-data>`
``Enzo_64``. This dataset is at a resolution of 64^3 with 5 levels of AMR,
so if we want a 64^3 array covering the entire volume and sampling just the
lowest level data, we run:
.. code-block:: python
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
all_data_level_0 = ds.covering_grid(level=0, left_edge=[0, 0.0, 0.0], dims=[64, 64, 64])
Note that we can also get the same result and rely on the dataset to know
its own underlying dimensions:
.. code-block:: python
all_data_level_0 = ds.covering_grid(
level=0, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions
)
We can now access our underlying data at the lowest level by specifying what
:ref:`field <field-list>` we want to examine:
.. code-block:: python
print(all_data_level_0["gas", "density"].shape)
# (64, 64, 64)
print(all_data_level_0["gas", "density"])
# array([[[ 1.92588925e-31, 1.74647692e-31, 2.54787518e-31, ...,
print(all_data_level_0["gas", "temperature"].shape)
# (64, 64, 64)
If you create a covering grid that spans two child grids of a single parent
grid, it will fill those zones covered by a zone of a child grid with the
data from that child grid. Where it is covered only by the parent grid, the
cells from the parent grid will be duplicated (appropriately) to fill the
covering grid.
Let's say we now want to look at that entire data volume and sample it at
a higher resolution (i.e. level 2). As stated above, we'll be oversampling
under-refined regions, but that's OK. We must also increase the resolution
of our output array by a factor of 2^2 in each direction to hold this new
larger dataset:
.. code-block:: python
all_data_level_2 = ds.covering_grid(
level=2, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions * 2**2
)
And let's see what the density is at the central location:
.. code-block:: python
print(all_data_level_2["gas", "density"].shape)
# (256, 256, 256)
print(all_data_level_2["gas", "density"][128, 128, 128])
# 1.7747457571203124e-31
There are two different types of covering grids: unsmoothed and smoothed.
Smoothed grids will be filled through a cascading interpolation process;
they will be filled at level 0, interpolated to level 1, filled at level 1,
interpolated to level 2, filled at level 2, etc. This will help to reduce
edge effects. Unsmoothed covering grids will not be interpolated, but rather
values will be duplicated multiple times.
To sample our dataset from above with a smoothed covering grid in order
to reduce edge effects, it is a nearly identical process:
.. code-block:: python
all_data_level_2_s = ds.smoothed_covering_grid(
2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2
)
print(all_data_level_2_s["gas", "density"].shape)
# (256, 256, 256)
print(all_data_level_2_s["gas", "density"][128, 128, 128])
# 1.763744852165591e-31
Covering grids can also accept a ``data_source`` argument, in which case only
the cells of the covering grid that are contained by the ``data_source`` will be
filled. This can be useful to create regularized arrays of more complex
geometries. For example, if we provide a sphere, we see that the covering grid
shape is the same, but fewer cells contain data:
.. code-block:: python
sp = ds.sphere(ds.domain_center, (0.25, "code_length"))
cg_sp = ds.covering_grid(
level=0, left_edge=[0, 0.0, 0.0], dims=ds.domain_dimensions, data_source=sp
)
print(cg_sp[("gas", "density")].shape)
# (64, 64, 64)
print(cg_sp[("gas", "density")].size)
# 262144
print(cg_sp[("gas", "density")][cg_sp[("gas", "density")] != 0].size)
# 17256
The ``data_source`` can be any :ref:`3D Data Container <region-reference>`. Also
note that the ``data_source`` argument is only available for the ``covering_grid``
at present (not the ``smoothed_covering_grid``).
.. _examining-image-data-in-a-fixed-resolution-array:
Examining Image Data in a Fixed Resolution Array
------------------------------------------------
In the same way that one can sample a multi-resolution 3D dataset by placing
it into a fixed resolution 3D array as a
:ref:`Covering Grid <examining-grid-data-in-a-fixed-resolution-array>`, one can
also access the raw image data that is returned from various yt functions
directly as a fixed resolution array. This provides a means for bypassing the
yt method for generating plots, and allows the user the freedom to use
whatever interface they wish for displaying and saving their image data.
You can use the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`
to accomplish this as described in :ref:`fixed-resolution-buffers`.
High-level Information about Particles
--------------------------------------
There are a number of high-level helpers attached to ``Dataset`` objects to find
out information about the particles in an output file. First, one can check if
there are any particles in a dataset at all by examining
``ds.particles_exist``. This will be ``True`` for datasets the include particles
and ``False`` otherwise.
One can also see which particle types are available in a dataset. Particle types
that are available in the dataset's on-disk output are known as "raw" particle
types, and they will appear in ``ds.particle_types_raw``. Particle types that
are dynamically defined via a particle filter of a particle union will also
appear in the ``ds.particle_types`` list. If the simulation only has one
particle type on-disk, its name will by ``'io'``. If there is more than one
particle type, the names of the particle types will be inferred from the output
file. For example, Gadget HDF5 files have particle type names like ``PartType0``
and ``PartType1``, while Enzo data, which usually only has one particle type,
will only have a particle named ``io``.
Finally, one can see the number of each particle type by inspecting
``ds.particle_type_counts``. This will be a dictionary mapping the names of
particle types in ``ds.particle_types_raw`` to the number of each particle type
in a simulation output.
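For example, using the ``IsolatedGalaxy`` :ref:`sample dataset <getting-sample-data>`
(any particle-bearing dataset will do), these attributes can be inspected directly:
.. code-block:: python
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
print(ds.particles_exist)  # True if the output contains particles
print(ds.particle_types_raw)  # on-disk ("raw") particle types, e.g. ('io',)
print(ds.particle_types)  # raw types plus any filters or unions
print(ds.particle_type_counts)  # mapping of raw particle type to count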
.. _loading-data:
Loading Data
============
This section contains information on how to load data into yt, as well as
some important caveats about different data formats.
.. _loading-sample-data:
Sample Data
-----------
The yt community has provided a large number of sample datasets, which are
accessible from https://yt-project.org/data/ . yt also provides a helper
function, ``yt.load_sample``, that can load from a set of sample datasets. The
quickstart notebooks in this documentation utilize this.
The files are, in general, named identically to their listings on the data
catalog page. For instance, you can load ``IsolatedGalaxy`` by executing:
.. code-block:: python
import yt
ds = yt.load_sample("IsolatedGalaxy")
To find a list of all available datasets, you can call ``load_sample`` without
any arguments, and it will return a list of the names that can be supplied:
.. code-block:: python
import yt
yt.load_sample()
This will return a list of possible filenames; more information can be accessed on the data catalog.
.. _loading-archived-data:
Archived Data
-------------
If your data is stored as a (compressed) tar file, you can access the contained
dataset directly without extracting the tar file.
This can be achieved using the ``load_archive`` function:
.. code-block:: python
import yt
ds = yt.load_archive("IsolatedGalaxy.tar.gz", "IsolatedGalaxy/galaxy0030/galaxy0030")
The first argument is the path to the archive file, the second one is the path to the file to load
in the archive. Subsequent arguments are passed to ``yt.load``.
The functionality requires the package `ratarmount <https://github.com/mxmlnkn/ratarmount/>`_ to be installed.
Under the hood, yt will mount the archive as a (read-only) filesystem. Note that this requires the
entire archive to be read once to compute the location of each file in the archive; subsequent accesses
will be much faster.
All archive formats supported by `ratarmount <https://github.com/mxmlnkn/ratarmount>`__ should be loadable, provided
the dependencies are installed; this includes the ``tar``, ``tar.gz``, and ``tar.bz2`` formats.
.. _loading-hdf5-data:
Simple HDF5 Data
----------------
.. note::
This wrapper takes advantage of the functionality described in
:ref:`loading-via-functions` but the basics of setting up function handlers,
guessing fields, etc, are handled by yt.
Using the function :func:`yt.loaders.load_hdf5_file`, you can load a generic
set of fields from an HDF5 file and have a fully-operational yt dataset. For
instance, in the yt sample data repository, we have the `UniGrid
Data <https://yt-project.org/data/UnigridData.tar.gz>`_ dataset (~1.6GB). This dataset includes the file ``turb_vels.h5`` with this structure:
.. code-block:: bash
$ h5ls -r ./UnigridData/turb_vels.h5
/ Group
/Bx Dataset {256, 256, 256}
/By Dataset {256, 256, 256}
/Bz Dataset {256, 256, 256}
/Density Dataset {256, 256, 256}
/MagneticEnergy Dataset {256, 256, 256}
/Temperature Dataset {256, 256, 256}
/turb_x-velocity Dataset {256, 256, 256}
/turb_y-velocity Dataset {256, 256, 256}
/turb_z-velocity Dataset {256, 256, 256}
/x-velocity Dataset {256, 256, 256}
/y-velocity Dataset {256, 256, 256}
/z-velocity Dataset {256, 256, 256}
In versions of yt prior to 4.1, these could be loaded into memory individually
and then accessed *en masse* by the :func:`yt.loaders.load_uniform_grid`
function. Introduced in version 4.1, however, was the ability to provide the
filename and then allow yt to identify the available fields and even subset
them into chunks to preserve memory. Only those requested fields will be
loaded at the time of the request, and they will be subset into chunks to avoid
over-allocating for reduction operations.
To use the auto-loader, call :func:`~yt.loaders.load_hdf5_file` with the name
of the file. Optionally, you can specify the root node of the file to probe
for fields -- for instance, if all of the fields are stored under ``/grid`` (as
they are in output from the ytdata frontend). You can also provide the
expected bounding box, which will otherwise default to 0..1 in all dimensions,
the names of fields to make available (by default yt will probe for them) and
the number of chunks to subdivide the file into. If the number of chunks is
not specified, it defaults to trying to keep the size of each individual chunk
to no more than :math:`64^3` zones.
To load the above file, we would use the function as follows:
.. code-block:: python
import yt
ds = yt.load_hdf5_file("UnigridData/turb_vels.h5")
At this point, we now have a dataset that we can do all of our normal
operations on, and all of the known yt derived fields will be available.
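If you need to set these options explicitly, a call might look like the sketch
below. The keyword names used here (``root_node``, ``fields``, ``bbox``,
``nchunks``) correspond to the options described above, but you should confirm
them against the :func:`~yt.loaders.load_hdf5_file` signature in your installed
version of yt:
.. code-block:: python
import numpy as np
import yt
# Sketch only: the field names come from the h5ls listing above, and the
# bounding box and chunk count are arbitrary illustrative values.
bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
ds = yt.load_hdf5_file(
    "UnigridData/turb_vels.h5",
    root_node="/",
    fields=["Density", "Temperature"],
    bbox=bbox,
    nchunks=8,
)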
.. _loading-amrvac-data:
AMRVAC Data
-----------
To load AMRVAC data into yt, simply use
.. code-block:: python
import yt
ds = yt.load("output0010.dat")
.. rubric:: Dataset geometry & periodicity
Starting from AMRVAC 2.2, and datfile format 5, a geometry flag
(e.g. "Cartesian_2.5D", "Polar_2D", "Cylindrical_1.5D"...) was added
to the datfile header. yt will fall back to a cartesian mesh if the
geometry flag is not found. For older datfiles however it is possible
to provide it externally with the ``geometry_override`` parameter.
.. code-block:: python
# examples
ds = yt.load("output0010.dat", geometry_override="polar")
ds = yt.load("output0010.dat", geometry_override="cartesian")
Note that ``geometry_override`` has priority over any ``geometry`` flag
present in recent datfiles, which means it can be used to force ``r``
VS ``theta`` 2D plots in polar geometries (for example), but this may
produce unpredictable behaviour and comes with no guarantee.
A ``ndim``-long ``periodic`` boolean array was also added to improve
compatibility with yt. See http://amrvac.org/md_doc_fileformat.html
for details.
.. rubric:: Auto-setup for derived fields
yt will attempt to mimic the way AMRVAC internally defines kinetic energy,
pressure, and sound speed. To see a complete list of fields that are defined after
loading, one can simply type
.. code-block:: python
print(ds.derived_field_list)
Note that for adiabatic (magneto-)hydrodynamics, i.e. ``(m)hd_energy = False`` in
AMRVAC, additional input data is required in order to setup some of these fields.
This is done by passing the corresponding parfile(s) at load time
.. code-block:: python
# example using a single parfile
ds = yt.load("output0010.dat", parfiles="amrvac.par")
# ... or using multiple parfiles
ds = yt.load("output0010.dat", parfiles=["amrvac.par", "modifier.par"])
In case more than one parfile is passed, yt will create a single namelist by
replicating AMRVAC's rules (see "Using multiple par files"
http://amrvac.org/md_doc_commandline.html).
.. rubric:: Unit System
AMRVAC only supports dimensionless fields and as such, no unit system
is ever attached to any given dataset. yt however defines physical
quantities and give them units. As is customary in yt, the default
unit system is ``cgs``, e.g. lengths are read as "cm" unless specified
otherwise.
The user has two ways to control displayed units, through
``unit_system`` (``"cgs"``, ``"mks"`` or ``"code"``) and
``units_override``. Example:
.. code-block:: python
units_override = dict(length_unit=(100.0, "au"), mass_unit=yt.units.mass_sun)
ds = yt.load("output0010.dat", units_override=units_override, unit_system="mks")
To ensure consistency with normalisations as used in AMRVAC we only allow
overriding a maximum of three units. Allowed unit combinations at the moment are
.. code-block:: none
{numberdensity_unit, temperature_unit, length_unit}
{mass_unit, temperature_unit, length_unit}
{mass_unit, time_unit, length_unit}
{numberdensity_unit, velocity_unit, length_unit}
{mass_unit, velocity_unit, length_unit}
Appropriate errors are thrown for other combinations.
.. rubric:: Partially supported and unsupported features
* a maximum of 100 dust species can be read by yt at the moment.
If your application needs this limit increased, please report an issue
https://github.com/yt-project/yt/issues
* particle data: currently not supported (but might come later)
* staggered grids (AMRVAC 2.2 and later): yt logs a warning if you load
staggered datasets, but the flag is currently ignored.
* "stretched grids" are being implemented in yt, but are not yet
fully-supported. (Previous versions of this file suggested they would
"never" be supported, which we hope to prove incorrect once we finish
implementing stretched grids in AMR. At present, stretched grids are
only supported on a single level of refinement.)
.. note::
Ghost cells exist in .dat files but are never read by yt.
.. _loading-art-data:
ART Data
--------
ART data has been supported in the past by Christopher Moody and is currently
cared for by Kenza Arraki. Please contact the ``yt-dev`` mailing list if you
are interested in using yt for ART data, or if you are interested in assisting
with development of yt to work with ART data.
To load an ART dataset you can use the ``yt.load`` command and provide it the
gas mesh file. It will then attempt to find the complementary dark matter and
stellar particle header and data files; note, however, that your simulations
may not follow the same naming convention.
.. code-block:: python
import yt
ds = yt.load("D9p_500/10MpcBox_HartGal_csf_a0.500.d")
For example, the single snapshot given in the sample data has a series of files
that look like this:
.. code-block:: none
10MpcBox_HartGal_csf_a0.500.d #Gas mesh
PMcrda0.500.DAT #Particle header
PMcrs0a0.500.DAT #Particle data (positions,velocities)
stars_a0.500.dat #Stellar data (metallicities, ages, etc.)
The ART frontend tries to find the associated files matching the
above, but if that fails you can specify ``file_particle_header``,
``file_particle_data``, and ``file_particle_stars``, in addition to
specifying the gas mesh. Note that the ``pta0.500.dat`` or ``pt.dat``
file containing particle time steps is not loaded by yt.
You also have the option of gridding particles and assigning them onto the
meshes. This process is in beta, and for the time being, it's probably best to
leave ``do_grid_particles=False`` as the default.
To speed up the loading of an ART file, you have a few options. You can turn
off the particles entirely by setting ``discover_particles=False``. You can
also only grid octs up to a certain level, ``limit_level=5``, which is useful
when debugging by artificially creating a 'smaller' dataset to work with.
Finally, when stellar ages are computed we 'spread' the ages evenly within a
smoothing window. By default this is turned on and set to 10Myr. To turn this
off you can set ``spread=False``, and you can tweak the age smoothing window
by specifying the window in seconds, ``spread=1.0e7*365*24*3600``.
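As a sketch, using the keyword names quoted above (these options are
frontend-specific, so double-check them against the ART frontend in your
installed version of yt; the values are illustrative only):
.. code-block:: python
import yt
ds = yt.load(
    "D9p_500/10MpcBox_HartGal_csf_a0.500.d",
    limit_level=5,
    spread=False,
)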
There is currently preliminary support for dark matter only ART data. To load a
dataset use the ``yt.load`` command and provide it the particle data file. It
will search for the complementary particle header file.
.. code-block:: python
import yt
ds = yt.load("PMcrs0a0.500.DAT")
Important: This should not be used for loading just the dark matter
data for a 'regular' hydrodynamical data set as the units and IO are
different!
.. _loading-artio-data:
ARTIO Data
----------
ARTIO data has a well-specified internal parameter system and has few free
parameters. However, for optimization purposes, the parameter that provides
the most guidance to yt as to how to manage ARTIO data is ``max_range``. This
governs the maximum number of space-filling curve cells that will be used in a
single "chunk" of data read from disk. For small datasets, setting this number
very large will enable more data to be loaded into memory at any given time;
for very large datasets, this parameter can be left alone safely. By default
it is set to 1024; it can in principle be set as high as the total number of
SFC cells.
To load ARTIO data, you can specify a command such as this:
.. code-block:: python
import yt
ds = yt.load("./A11QR1/s11Qzm1h2_a1.0000.art")
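To adjust the chunking behavior described above, ``max_range`` can be passed
at load time; this is a sketch, and the value shown is arbitrary:
.. code-block:: python
import yt
# Allow far more space-filling-curve cells per chunk than the default of 1024,
# which can be helpful for small datasets.
ds = yt.load("./A11QR1/s11Qzm1h2_a1.0000.art", max_range=1024 * 1024)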
.. _loading-athena-data:
Athena Data
-----------
Athena 4.x VTK data is supported and cared for by John ZuHone. Both uniform grid
and SMR datasets are supported.
.. note::
yt also recognizes Fargo3D data written to VTK files as
Athena data, but support for Fargo3D data is preliminary.
Loading Athena datasets is slightly different depending on whether
your dataset came from a serial or a parallel run. If the data came
from a serial run or you have joined the VTK files together using the
Athena tool ``join_vtk``, you can load the data like this:
.. code-block:: python
import yt
ds = yt.load("kh.0010.vtk")
The filename corresponds to the file on SMR level 0, whereas if there
are multiple levels the corresponding files will be picked up
automatically, assuming they are laid out in ``lev*`` subdirectories
under the directory where the base file is located.
For parallel datasets, yt assumes that they are laid out in
directories named ``id*``, one for each processor number, each with
``lev*`` subdirectories for additional refinement levels. To load this
data, call ``load`` with the base file in the ``id0`` directory:
.. code-block:: python
import yt
ds = yt.load("id0/kh.0010.vtk")
which will pick up all of the files in the different ``id*`` directories for
the entire dataset.
The default unit system in yt is cgs ("Gaussian") units, but Athena data is not
normally stored in these units, so the code unit system is the default unit
system for Athena data. This means that answers to field queries from data
objects and plots of data will be expressed in code units. Note that the default
conversions from these units will still be in terms of cgs units, e.g. 1
``code_length`` equals 1 cm, and so on. If you would like to provide different
conversions, you may supply conversions for length, time, and mass to ``load``
using the ``units_override`` functionality:
.. code-block:: python
import yt
units_override = {
"length_unit": (1.0, "Mpc"),
"time_unit": (1.0, "Myr"),
"mass_unit": (1.0e14, "Msun"),
}
ds = yt.load("id0/cluster_merger.0250.vtk", units_override=units_override)
This means that the yt fields, e.g. ``("gas","density")``,
``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units
(or whatever unit system was specified), but the Athena fields, e.g.,
``("athena","density")``, ``("athena","velocity_x")``,
``("athena","cell_centered_B_x")``, will be in code units.
The default normalization for various magnetic-related quantities such as
magnetic pressure, Alfven speed, etc., as well as the conversion between
magnetic code units and other units, is Gaussian/CGS, meaning that factors
of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g.
:math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead,
in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2`, for
example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to
``yt.load``:
.. code-block:: python
ds = yt.load(
"id0/cluster_merger.0250.vtk",
units_override=units_override,
magnetic_normalization="lorentz_heaviside",
)
Some 3D Athena outputs may have large grids (especially parallel datasets
subsequently joined with the ``join_vtk`` script), and may benefit from being
subdivided into "virtual grids". For this purpose, one can pass in the
``nprocs`` parameter:
.. code-block:: python
import yt
ds = yt.load("sloshing.0000.vtk", nprocs=8)
which will subdivide each original grid into ``nprocs`` grids. Note that this
parameter is independent of the number of MPI tasks assigned to analyze the data
set in parallel (see :ref:`parallel-computation`), and ideally should be (much)
larger than this.
.. note::
Virtual grids are only supported (and really only necessary) for 3D data.
Alternative values for the following simulation parameters may be specified
using a ``parameters`` dict, accepting the following keys:
* ``gamma``: ratio of specific heats, Type: Float. If not specified,
:math:`\gamma = 5/3` is assumed.
* ``geometry``: Geometry type, currently accepts ``"cartesian"`` or
``"cylindrical"``. Default is ``"cartesian"``.
* ``periodicity``: Is the domain periodic? Type: Tuple of boolean values
corresponding to each dimension. Defaults to ``True`` in all directions.
* ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6`
(for a fully ionized primordial plasma) is assumed.
.. code-block:: python
import yt
parameters = {
"gamma": 4.0 / 3.0,
"geometry": "cylindrical",
"periodicity": (False, False, False),
}
ds = yt.load("relativistic_jet_0000.vtk", parameters=parameters)
.. rubric:: Caveats
* yt primarily works with primitive variables. If the Athena dataset contains
conservative variables, the yt primitive fields will be generated from the
conserved variables on disk.
* Special relativistic datasets may be loaded, but at this time not all of
their fields are fully supported. In particular, the relationships between
quantities such as pressure and thermal energy will be incorrect, as it is
currently assumed that their relationship is that of an ideal
:math:`\gamma`-law equation of state. This will be rectified in a future
release.
* Domains may be visualized assuming periodicity.
* Particle list data is currently unsupported.
.. _loading-athena-pp-data:
Athena++ Data
-------------
Athena++ HDF5 data is supported and cared for by John ZuHone. Uniform-grid, SMR,
and AMR datasets in cartesian coordinates are fully supported. Support for
curvilinear coordinates and logarithmic cell sizes exists, but is preliminary.
For the latter type of dataset, the data will be loaded in as a semi-structured
mesh dataset. See :ref:`loading-semi-structured-mesh-data` for more details on
how this works in yt.
The default unit system in yt is cgs ("Gaussian") units, but Athena++ data is
not normally stored in these units, so the code unit system is the default unit
system for Athena++ data. This means that answers to field queries from data
objects and plots of data will be expressed in code units. Note that the default
conversions from these units will still be in terms of cgs units, e.g. 1
``code_length`` equals 1 cm, and so on. If you would like to provide different
conversions, you may supply conversions for length, time, and mass to ``load``
using the ``units_override`` functionality:
.. code-block:: python
import yt
units_override = {
"length_unit": (1.0, "Mpc"),
"time_unit": (1.0, "Myr"),
"mass_unit": (1.0e14, "Msun"),
}
ds = yt.load("AM06/AM06.out1.00400.athdf", units_override=units_override)
This means that the yt fields, e.g. ``("gas","density")``,
``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units
(or whatever unit system was specified), but the Athena fields, e.g.,
``("athena_pp","density")``, ``("athena_pp","vel1")``, ``("athena_pp","Bcc1")``,
will be in code units.
The default normalization for various magnetic-related quantities such as
magnetic pressure, Alfven speed, etc., as well as the conversion between
magnetic code units and other units, is Gaussian/CGS, meaning that factors
of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g.
:math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead,
in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2`, for
example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to
``yt.load``:
.. code-block:: python
ds = yt.load(
"AM06/AM06.out1.00400.athdf",
units_override=units_override,
magnetic_normalization="lorentz_heaviside",
)
Alternative values for the following simulation parameters may be specified
using a ``parameters`` dict, accepting the following keys (see the example
after this list):
* ``gamma``: ratio of specific heats, Type: Float. If not specified,
:math:`\gamma = 5/3` is assumed.
* ``geometry``: Geometry type, currently accepts ``"cartesian"`` or
``"cylindrical"``. Default is ``"cartesian"``.
* ``periodicity``: Is the domain periodic? Type: Tuple of boolean values
corresponding to each dimension. Defaults to ``True`` in all directions.
* ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6`
(for a fully ionized primordial plasma) is assumed.
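For instance, mirroring the Athena example above and using the sample dataset
from this section (the parameter values shown are illustrative only):
.. code-block:: python
import yt
parameters = {
    "gamma": 5.0 / 3.0,
    "mu": 0.6,
}
ds = yt.load("AM06/AM06.out1.00400.athdf", parameters=parameters)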
.. rubric:: Caveats
* yt primarily works with primitive variables. If the Athena++ dataset contains
conservative variables, the yt primitive fields will be generated from the
conserved variables on disk.
* Special relativistic datasets may be loaded, but at this time not all of their
fields are fully supported. In particular, the relationships between
quantities such as pressure and thermal energy will be incorrect, as it is
currently assumed that their relationship is that of an ideal
:math:`\gamma`-law equation of state. This will be rectified in a future
release.
* Domains may be visualized assuming periodicity.
.. _loading-orion-data:
AMReX / BoxLib Data
-------------------
AMReX and BoxLib share a frontend (currently named ``boxlib``), since
the file formats are nearly identical. yt has been tested with AMReX/BoxLib
data generated by Orion, Nyx, Maestro, Castro, IAMR, and
WarpX. Currently it is cared for by a combination of Andrew Myers,
Matthew Turk, and Mike Zingale.
To load an AMReX/BoxLib dataset, you can use the ``yt.load`` command on
the plotfile directory name. In general, you must also have the
``inputs`` file in the base directory, but Maestro, Castro, Nyx, and WarpX will get
all the necessary parameter information from the ``job_info`` file in
the plotfile directory. For instance, if you were in a
directory with the following files:
.. code-block:: none
inputs
pltgmlcs5600/
pltgmlcs5600/Header
pltgmlcs5600/Level_0
pltgmlcs5600/Level_0/Cell_H
pltgmlcs5600/Level_1
pltgmlcs5600/Level_1/Cell_H
pltgmlcs5600/Level_2
pltgmlcs5600/Level_2/Cell_H
pltgmlcs5600/Level_3
pltgmlcs5600/Level_3/Cell_H
pltgmlcs5600/Level_4
pltgmlcs5600/Level_4/Cell_H
You would feed it the filename ``pltgmlcs5600``:
.. code-block:: python
import yt
ds = yt.load("pltgmlcs5600")
For Maestro, Castro, Nyx, and WarpX, you would not need the ``inputs`` file, and you
would have a ``job_info`` file in the plotfile directory.
.. rubric:: Caveats
* yt does not read the Maestro base state (although you can have Maestro
map it to a full Cartesian state variable before writing the plotfile
to get around this). E-mail the dev list if you need this support.
* yt supports AMReX/BoxLib particle data stored in the standard format used
by Nyx and WarpX, and optionally Castro. It currently does not support the ASCII particle
data used by Maestro and Castro.
* For Maestro, yt aliases either ``"tfromp"`` or ``"tfromh"`` to ``temperature``
depending on the value of the ``use_tfromp`` runtime parameter.
* For Maestro, some velocity fields like ``velocity_magnitude`` or
``mach_number`` will always use the on-disk value, and not have yt
derive it, due to the complex interplay of the base state velocity.
Viewing raw fields in WarpX
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Most AMReX/BoxLib codes output cell-centered data. If the underlying discretization
is not cell-centered, then fields are typically averaged to cell centers before
they are written to plot files for visualization. WarpX, however, has the option
to output the raw (i.e., not averaged to cell centers) data as well. If you
run your WarpX simulation with ``warpx.plot_raw_fields = 1`` in your inputs
file, then you should get an additional ``raw_fields`` subdirectory inside your
plot file. When you load this dataset, yt will have additional on-disk fields
defined, with the "raw" field type:
.. code-block:: python
import yt
ds = yt.load("Laser/plt00015/")
print(ds.field_list)
The raw fields in WarpX are nodal in at least one direction. We define a field
to be "nodal" in a given direction if the field data is defined at the "low"
and "high" sides of the cell in that direction, rather than at the cell center.
Instead of returning one field value per cell selected, nodal fields return a
number of values, depending on their centering. This centering is marked by
a ``nodal_flag`` that describes whether the field is nodal in each dimension.
``nodal_flag = [0, 0, 0]`` means that the field is cell-centered, while
``nodal_flag = [0, 0, 1]`` means that the field is nodal in the z direction
and cell centered in the others, i.e. it is defined on the z faces of each cell.
``nodal_flag = [1, 1, 0]`` would mean that the field is centered in the z direction,
but nodal in the other two, i.e. it lives on the four cell edges that are normal
to the z direction.
.. code-block:: python
ds.index
ad = ds.all_data()
print(ds.field_info[("raw", "Ex")].nodal_flag)
print(ad["raw", "Ex"].shape)
print(ds.field_info[("raw", "Bx")].nodal_flag)
print(ad["raw", "Bx"].shape)
print(ds.field_info["raw", "Bx"].nodal_flag)
print(ad["raw", "Bx"].shape)
Here, the field ``('raw', 'Ex')`` is nodal in two directions, so four values per cell
are returned, corresponding to the four edges in each cell on which the variable
is defined. ``('raw', 'Bx')`` is nodal in one direction, so two values are returned
per cell. The standard, averaged-to-cell-centers fields are still available.
Currently, slices and data selection are implemented for nodal fields. Projections,
volume rendering, and many of the analysis modules will not work.
.. _loading-pluto-data:
Pluto Data
----------
Support for Pluto AMR data is provided through the Chombo frontend, which
is currently maintained by Andrew Myers. Pluto output files that don't use
the Chombo HDF5 format are currently not supported. To load a Pluto dataset,
you can use the ``yt.load`` command on the ``*.hdf5`` files. For example, the
KelvinHelmholtz sample dataset is a directory that contains the following
files:
.. code-block:: none
data.0004.hdf5
pluto.ini
To load it, you can navigate into that directory and do:
.. code-block:: python
import yt
ds = yt.load("data.0004.hdf5")
The ``pluto.ini`` file must also be present alongside the HDF5 file.
By default, all of the Pluto fields will be in code units.
.. _loading-enzo-data:
Enzo Data
---------
Enzo data is fully supported and cared for by Matthew Turk. To load an Enzo
dataset, you can use the ``yt.load`` command and provide it the dataset name.
This would be the name of the output file, and it
contains no extension. For instance, if you have the following files:
.. code-block:: none
DD0010/
DD0010/data0010
DD0010/data0010.index
DD0010/data0010.cpu0000
DD0010/data0010.cpu0001
DD0010/data0010.cpu0002
DD0010/data0010.cpu0003
You would feed the ``load`` command the filename ``DD0010/data0010`` as
mentioned.
.. code-block:: python
import yt
ds = yt.load("DD0010/data0010")
.. rubric:: Caveats
* There are no major caveats for Enzo usage.
* Units should be correct, if you utilize standard unit-setting routines. yt
will notify you if it cannot determine the units, although this
notification will be passive.
* 2D and 1D data are supported, but the extraneous dimensions are set to be
of length 1.0 in "code length" which may produce strange results for volume
quantities.
Enzo MHDCT data
^^^^^^^^^^^^^^^
The electric and magnetic fields for Enzo MHDCT simulations are defined on cell
faces, unlike other Enzo fields which are defined at cell centers. In yt, we
call face-centered fields like this "nodal". We define a field to be nodal in
a given direction if the field data is defined at the "low" and "high" sides of
the cell in that direction, rather than at the cell center. Instead of
returning one field value per cell selected, nodal fields return a number of
values, depending on their centering. This centering is marked by a ``nodal_flag``
that describes whether the fields is nodal in each dimension. ``nodal_flag =
[0, 0, 0]`` means that the field is cell-centered, while ``nodal_flag = [0, 0,
1]`` means that the field is nodal in the z direction and cell centered in the
others, i.e. it is defined on the z faces of each cell. ``nodal_flag = [1, 1,
0]`` would mean that the field is centered in the z direction, but nodal in the
other two, i.e. it lives on the four cell edges that are normal to the z
direction.
.. code-block:: python
ds.index
ad = ds.all_data()
print(ds.field_info[("enzo", "Ex")].nodal_flag)
print(ad["enzo", "Ex"].shape)
print(ds.field_info[("enzo", "BxF")].nodal_flag)
print(ad["enzo", "Bx"].shape)
print(ds.field_info[("enzo", "Bx")].nodal_flag)
print(ad["enzo", "Bx"].shape)
Here, the field ``('enzo', 'Ex')`` is nodal in two directions, so four values
per cell are returned, corresponding to the four edges in each cell on which the
variable is defined. ``('enzo', 'BxF')`` is nodal in one direction, so two
values are returned per cell. The standard, non-nodal field ``('enzo', 'Bx')``
is also available.
Currently, slices and data selection are implemented for nodal
fields. Projections, volume rendering, and many of the analysis modules will not
work.
.. _loading-enzoe-data:
Enzo-E Data
-----------
Enzo-E outputs have three types of files.
.. code-block:: none
hello-0200/
hello-0200/hello-0200.block_list
hello-0200/hello-0200.file_list
hello-0200/hello-0200.hello-c0020-p0000.h5
To load Enzo-E data into yt, provide the block list file:
.. code-block:: python
import yt
ds = yt.load("hello-0200/hello-0200.block_list")
Mesh and particle fields are fully supported for 1, 2, and 3D datasets. Enzo-E
supports arbitrary particle types defined by the user. The available particle
types will be known as soon as the dataset index is created.
.. code-block:: python
ds = yt.load("ENZOP_DD0140/ENZOP_DD0140.block_list")
ds.index
print(ds.particle_types)
print(ds.particle_type_counts)
print(ds.r["dark", "particle_position"])
.. _loading-exodusii-data:
Exodus II Data
--------------
.. note::
To load Exodus II data, you need to have the
`netcdf4 <http://unidata.github.io/netcdf4-python/>`_ python interface installed.
Exodus II is a file format for Finite Element datasets that is used by the MOOSE
framework for file IO. Support for this format (and for unstructured mesh data in
general) is a new feature as of yt 3.3, so while we aim to fully support it, we
also expect there to be some buggy features at present. Currently, yt can visualize
quads, hexes, triangles, and tetrahedral element types at first order. Additionally,
there is experimental support for the high-order visualization of 20-node hex elements.
Development of more high-order visualization capability is a work in progress.
To load an Exodus II dataset, you can use the ``yt.load`` command on the Exodus II
file:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010", step=0)
Because Exodus II datasets can have multiple steps (which can correspond to time steps,
Picard iterations, non-linear solve iterations, etc...), you can also specify a step
argument when you load an Exodus II data that defines the index at which to look when
you read data from the file. Omitting this argument is the same as passing in 0, and
setting ``step=-1`` selects the last time output in the file.
You can access the connectivity information directly by doing:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010", step=-1)
print(ds.index.meshes[0].connectivity_coords)
print(ds.index.meshes[0].connectivity_indices)
print(ds.index.meshes[1].connectivity_coords)
print(ds.index.meshes[1].connectivity_indices)
This particular dataset has two meshes in it, both of which are made of 8-node hexes.
yt uses a field name convention to access these different meshes in plots and data
objects. To see all the fields found in a particular dataset, you can do:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
print(ds.field_list)
This will give you a list of field names like ``('connect1', 'diffused')`` and
``('connect2', 'convected')``. Here, fields labelled with ``'connect1'`` correspond to the
first mesh, and those with ``'connect2'`` to the second, and so on. To grab the value
of the ``'convected'`` variable at all the nodes in the first mesh, for example, you
would do:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
ad = ds.all_data() # geometric selection, this just grabs everything
print(ad["connect1", "convected"])
In this dataset, ``('connect1', 'convected')`` is a nodal field, meaning that the field values
are defined at the vertices of the elements. If we examine the shape of the returned array:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
ad = ds.all_data()
print(ad["connect1", "convected"].shape)
we see that this mesh has 12480 8-node hexahedral elements, and that we get 8 field values
for each element. To get the vertex positions at which these field values are defined, we
can do, for instance:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
ad = ds.all_data()
print(ad["connect1", "vertex_x"])
If we instead look at an element-centered field, like ``('connect1', 'conv_indicator')``,
we get:
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
ad = ds.all_data()
print(ad["connect1", "conv_indicator"].shape)
we instead get only one field value per element.
For information about visualizing unstructured mesh data, including Exodus II datasets,
please see :ref:`unstructured-mesh-slices` and :ref:`unstructured_mesh_rendering`.
Displacement Fields
^^^^^^^^^^^^^^^^^^^
Finite element codes often solve for the displacement of each vertex from its
original position as a node variable, rather than updating the actual vertex
positions with time. For analysis and visualization, it is often useful to turn
these displacements on or off, and to be able to scale them arbitrarily to
emphasize certain features of the solution. To allow this, if ``yt`` detects
displacement fields in an Exodus II dataset (using the convention that they will
be named ``disp_x``, ``disp_y``, etc...), it will optionally add these to
the mesh vertex positions for the purposes of visualization. Displacement fields
can be controlled when a dataset is loaded by passing in an optional dictionary
to the ``yt.load`` command. This feature is turned off by default, meaning that
a dataset loaded as
.. code-block:: python
import yt
ds = yt.load("MOOSE_sample_data/mps_out.e")
will not include the displacements in the vertex positions. The displacements can
be turned on separately for each mesh in the file by passing in a tuple of
(scale, offset) pairs for the meshes you want to enable displacements for.
For example, the following code snippet turns displacements on for the second
mesh, but not the first:
.. code-block:: python
import yt
ds = yt.load(
"MOOSE_sample_data/mps_out.e",
step=10,
displacements={"connect2": (1.0, [0.0, 0.0, 0.0])},
)
The displacements can also be scaled by an arbitrary factor before they are
added in to the vertex positions. The following code turns on displacements
for both ``connect1`` and ``connect2``, scaling the former by a factor of 5.0
and the latter by a factor of 10.0:
.. code-block:: python
import yt
ds = yt.load(
"MOOSE_sample_data/mps_out.e",
step=10,
displacements={
"connect1": (5.0, [0.0, 0.0, 0.0]),
"connect2": (10.0, [0.0, 0.0, 0.0]),
},
)
Finally, we can also apply an arbitrary offset to the mesh vertices after
the scale factor is applied. For example, the following code scales all
displacements in the second mesh by a factor of 5.0, and then shifts
each vertex in the mesh by 1.0 unit in the z-direction:
.. code-block:: python
import yt
ds = yt.load(
"MOOSE_sample_data/mps_out.e",
step=10,
displacements={"connect2": (5.0, [0.0, 0.0, 1.0])},
)
.. _loading-fits-data:
FITS Data
---------
FITS data is *mostly* supported and cared for by John ZuHone. In order to
read FITS data, `AstroPy <https://www.astropy.org>`_ must be installed. FITS
data cubes can be loaded in the same way by yt as other datasets. yt
can read FITS image files that have the following (case-insensitive) suffixes:
* fits
* fts
* fits.gz
* fts.gz
yt can currently read two kinds of FITS files: FITS image files and FITS
binary table files containing positions, times, and energies of X-ray
events. These are described in more detail below.
Types of FITS Datasets Supported by yt
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt FITS Data Standard
"""""""""""""""""""""
yt has facilities for creating 2 and 3-dimensional FITS images from derived,
fixed-resolution data products from other datasets. These include images
produced from slices, projections, and 3D covering grids. The resulting
FITS images are self-describing in that unit, parameter, and coordinate
information is passed from the original dataset. These can be created via the
:class:`~yt.visualization.fits_image.FITSImageData` class and its subclasses.
For information about how to use these special classes, see
:ref:`writing_fits_images`.
Once you have produced a FITS file in this fashion, you can load it using
yt and it will be detected as a ``YTFITSDataset`` object, and it can be analyzed
in the same way as any other dataset in yt.
Astronomical Image Data
"""""""""""""""""""""""
These files are one of three types:
* Generic two-dimensional FITS images in sky coordinates
* Three or four-dimensional "spectral cubes"
* *Chandra* event files
These FITS images typically are in celestial or galactic coordinates, and
for 3D spectral cubes the third axis is typically in velocity, wavelength,
or frequency units. For these datasets, since yt does not yet recognize
non-spatial axes, the coordinates are in units of the image pixels. The
coordinates of these pixels in the WCS coordinate systems will be available
in separate fields.
Often, the aspect ratio of 3D spectral cubes can be far from unity. Because yt
sets the pixel scale as the ``code_length``, certain visualizations (such as
volume renderings) may look extended or distended in ways that are
undesirable. To adjust the width in ``code_length`` of the spectral axis, set
``spectral_factor`` equal to a constant which gives the desired scaling, or set
it to ``"auto"`` to make the width the same as the largest axis in the sky
plane:
.. code-block:: python
ds = yt.load("m33_hi.fits.gz", spectral_factor=0.1)
For 4D spectral cubes, the fourth axis is assumed to be composed of different
fields altogether (e.g., Stokes parameters for radio data).
*Chandra* X-ray event data, which is in tabular form, will be loaded as
particle fields in yt, but a grid will be constructed from the WCS
information in the FITS header. There is a helper function,
``setup_counts_fields``, which may be used to make deposited image fields
from the event data for different energy bands (for an example see
:ref:`xray_fits`).
Generic FITS Images
"""""""""""""""""""
If the FITS file contains images but does not have adequate header information
to fall into one of the above categories, yt will still load the data, but
the resulting field and/or coordinate information will necessarily be
incomplete. Field names may not be descriptive, and units may be incorrect. To
get the full use out of yt for FITS files, make sure that the file is sufficiently
self-describing to fall into one of the above categories.
Making the Most of yt for FITS Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt will load data without WCS information and/or some missing header keywords,
but the resulting field and/or coordinate information will necessarily be
incomplete. For example, field names may not be descriptive, and units will not
be correct. To get the full use out of yt for FITS files, make sure that for
each image HDU the following standard header keywords have sensible values:
* ``CDELTx``: The pixel width along axis ``x``
* ``CRVALx``: The coordinate value at the reference position along axis ``x``
* ``CRPIXx``: The reference pixel along axis ``x``
* ``CTYPEx``: The projection type of axis ``x``
* ``CUNITx``: The units of the coordinate along axis ``x``
* ``BTYPE``: The type of the image; this will be used as the field name
* ``BUNIT``: The units of the image
FITS header keywords can easily be updated using AstroPy. For example,
to set the ``BTYPE`` and ``BUNIT`` keywords:
.. code-block:: python
from astropy.io import fits
f = fits.open("xray_flux_image.fits", mode="update")
f[0].header["BUNIT"] = "cts/s/pixel"
f[0].header["BTYPE"] = "flux"
f.flush()
f.close()
FITS Data Decomposition
^^^^^^^^^^^^^^^^^^^^^^^
Though a FITS image is composed of a single array in the FITS file,
upon being loaded into yt it is automatically decomposed into grids:
.. code-block:: python
import yt
ds = yt.load("m33_hi.fits")
ds.print_stats()
.. parsed-literal::
level # grids # cells # cells^3
----------------------------------------------
0 512 981940800 994
----------------------------------------------
512 981940800
For 3D spectral-cube data, the decomposition into grids will be done along the
spectral axis since this will speed up many common operations for this
particular type of dataset.
yt will generate its own domain decomposition, but the number of grids can be
set manually by passing the ``nprocs`` parameter to the ``load`` call:
.. code-block:: python
ds = yt.load("m33_hi.fits", nprocs=64)
Fields in FITS Datasets
^^^^^^^^^^^^^^^^^^^^^^^
Multiple fields can be included in a FITS dataset in several different ways.
The first way, and the simplest, is if more than one image HDU is
contained within the same file. The field names will be determined by the
value of ``BTYPE`` in the header, and the field units will be determined by
the value of ``BUNIT``. The second way is if a dataset has a fourth axis,
with each slice along this axis corresponding to a different field. In this
case, the field names will be determined by the value of the ``CTYPE4`` keyword
and the index of the slice. So, for example, if ``BTYPE`` = ``"intensity"`` and
``CTYPE4`` = ``"stokes"``, then the fields will be named
``"intensity_stokes_1"``, ``"intensity_stokes_2"``, and so on.
The third way is if auxiliary files are included along with the main file, like so:
.. code-block:: python
ds = yt.load("flux.fits", auxiliary_files=["temp.fits", "metal.fits"])
The image blocks in each of these files will be loaded as a separate field,
provided they have the same dimensions as the image blocks in the main file.
Additionally, fields corresponding to the WCS coordinates will be generated
based on the corresponding ``CTYPEx`` keywords. When queried, these fields
will be generated from the pixel coordinates in the file using the WCS
transformations provided by AstroPy.
.. note::
Each FITS image from a single dataset, whether from one file or from one of
multiple files, must have the same dimensions and WCS information as the
first image in the primary file. If this is not the case,
yt will raise a warning and will not load this field.
.. _additional_fits_options:
Additional Options
^^^^^^^^^^^^^^^^^^
The following are additional options that may be passed to the ``load`` command
when analyzing FITS data:
``nan_mask``
""""""""""""
FITS image data may include ``NaNs``. If you wish to mask this data out,
you may supply a ``nan_mask`` parameter, which may either be a
single floating-point number (applies to all fields) or a Python dictionary
containing different mask values for different fields:
.. code-block:: python
# passing a single float for all images
ds = yt.load("m33_hi.fits", nan_mask=0.0)
# passing a dict
ds = yt.load("m33_hi.fits", nan_mask={"intensity": -1.0, "temperature": 0.0})
``suppress_astropy_warnings``
"""""""""""""""""""""""""""""
Generally, AstroPy may generate a lot of warnings about individual FITS
files, many of which you may want to ignore. If you want to see these
warnings, set ``suppress_astropy_warnings = False``.
Miscellaneous Tools for Use with FITS Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A number of tools have been prepared for use with FITS data that enhance yt's
visualization and analysis capabilities for this particular type of data. These
are included in the ``yt.frontends.fits.misc`` module, and can be imported like
so:
.. code-block:: python
from yt.frontends.fits.misc import PlotWindowWCS, ds9_region, setup_counts_fields
``setup_counts_fields``
"""""""""""""""""""""""
This function can be used to create image fields from X-ray counts data in
different energy bands:
.. code-block:: python
ebounds = [(0.1, 2.0), (2.0, 5.0)] # Energies are in keV
setup_counts_fields(ds, ebounds)
which would make two fields, ``"counts_0.1-2.0"`` and ``"counts_2.0-5.0"``,
and add them to the field registry for the dataset ``ds``.
``ds9_region``
""""""""""""""
This function takes a `ds9 <http://ds9.si.edu/site/Home.html>`_ region and
creates a "cut region" data container from it, that can be used to select
the cells in the FITS dataset that fall within the region. To use this
functionality, the `regions <https://github.com/astropy/regions/>`_
package must be installed.
.. code-block:: python
ds = yt.load("m33_hi.fits")
circle_region = ds9_region(ds, "circle.reg")
print(circle_region.quantities.extrema("flux"))
``PlotWindowWCS``
"""""""""""""""""
This class takes an on-axis ``SlicePlot`` or ``ProjectionPlot`` of FITS
data and adds celestial coordinates to the plot axes. To use it, a
version of AstroPy >= 1.3 must be installed.
.. code-block:: python
wcs_slc = PlotWindowWCS(slc)
wcs_slc.show() # for Jupyter notebooks
wcs_slc.save()
``WCSAxes`` is still in an experimental state, but as its functionality
improves it will be utilized more here.
``create_spectral_slabs``
"""""""""""""""""""""""""
.. note::
The following functionality requires the
`spectral-cube <https://spectral-cube.readthedocs.io/en/latest/>`_ library to be
installed.
If you have a spectral intensity dataset of some sort, and would like to
extract emission in particular slabs along the spectral axis of a certain
width, ``create_spectral_slabs`` can be used to generate a dataset with
these slabs as different fields. In this example, we use it to extract
individual lines from an intensity cube:
.. code-block:: python
slab_centers = {
"13CN": (218.03117, "GHz"),
"CH3CH2CHO": (218.284256, "GHz"),
"CH3NH2": (218.40956, "GHz"),
}
slab_width = (0.05, "GHz")
ds = create_spectral_slabs(
"intensity_cube.fits", slab_centers, slab_width, nan_mask=0.0
)
All keyword arguments to ``create_spectral_slabs`` are passed on to ``load`` when
creating the dataset (see :ref:`additional_fits_options` above). In the
returned dataset, the different slabs will be different fields, with the field
names taken from the keys in ``slab_centers``. The WCS coordinates on the
spectral axis are reset so that the center of the domain along this axis is
zero, and the left and right edges of the domain along this axis are
:math:`\pm` ``0.5*slab_width``.
Examples of Using FITS Data
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following Jupyter notebooks show examples of working with FITS data in yt,
which we recommend you look at in the following order:
* :ref:`radio_cubes`
* :ref:`xray_fits`
* :ref:`writing_fits_images`
.. _loading-flash-data:
FLASH Data
----------
FLASH HDF5 data is *mostly* supported and cared for by John ZuHone. To load a
FLASH dataset, you can use the ``yt.load`` command and provide it the file name of
a plot file, checkpoint file, or particle file. Particle files require special handling
depending on the situation, the main issue being that they typically lack grid information.
The first case is when you have a plotfile and a particle file that you would like to
load together. In the simplest case, this occurs automatically. For instance, if you
were in a directory with the following files:
.. code-block:: none
radio_halo_1kpc_hdf5_plt_cnt_0100 # plotfile
radio_halo_1kpc_hdf5_part_0100 # particle file
where the plotfile and the particle file were created at the same time (and therefore
the particle data is consistent with the grid structure of the plotfile). Notice also that the
prefix ``"radio_halo_1kpc_"`` and the file number ``100`` are the same. In this special case,
the particle file will be loaded automatically when ``yt.load`` is called on the plotfile.
This also works when loading a number of files in a time series.
If the two files do not have the same prefix and number, but they nevertheless have the same
grid structure and are at the same simulation time, the particle data may be loaded with the
``particle_filename`` optional argument to ``yt.load``:
.. code-block:: python
import yt
ds = yt.load(
"radio_halo_1kpc_hdf5_plt_cnt_0100",
particle_filename="radio_halo_1kpc_hdf5_part_0100",
)
However, if you don't have a corresponding plotfile for a particle file, but would still
like to load the particle data, you can still call ``yt.load`` on the file. However, the
grid information will not be available, and the particle data will be loaded in a fashion
similar to other particle-based datasets in yt.
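For example (a minimal sketch using the particle file from above), you can check which particle fields were detected:
.. code-block:: python
import yt
ds = yt.load("radio_halo_1kpc_hdf5_part_0100")
# No grid information is available, so only particle fields are listed
print(ds.field_list)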
Mean Molecular Weight and Number Density Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The way the mean molecular weight and number density fields are defined depends on
what type of simulation you are running. If you are running a simulation without
species and a :math:`\gamma`-law equation of state, then the mean molecular weight
is defined using the ``eos_singleSpeciesA`` parameter in the FLASH dataset. If you
have multiple species and your dataset contains the FLASH field ``"abar"``, then
this is used as the mean molecular weight. In either case, the number density field
is calculated using this weight.
If you are running a FLASH simulation where the fields ``"sumy"`` and ``"ye"`` are
present, then the mean molecular weight is the inverse of ``"sumy"``, and the fields
``"El_number_density"``, ``"ion_number_density"``, and ``"number_density"`` are
defined using the following mathematical definitions:
* ``"El_number_density"`` :math:`n_e = N_AY_e\rho`
* ``"ion_number_density"`` :math:`n_i = N_A\rho/\bar{A}`
* ``"number_density"`` :math:`n = n_e + n_i`
where :math:`n_e` and :math:`n_i` are the electron and ion number densities,
:math:`\rho` is the mass density, :math:`Y_e` is the electron number per baryon,
:math:`\bar{A}` is the mean molecular weight, and :math:`N_A` is Avogadro's number.
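As a short sketch of querying these derived fields (assuming a dataset, such as the plotfile above, that carries the ``"ye"`` and ``"sumy"`` fields):
.. code-block:: python
import yt
ds = yt.load("radio_halo_1kpc_hdf5_plt_cnt_0100")
ad = ds.all_data()
# Electron, ion, and total number densities as defined above
print(ad["gas", "El_number_density"])
print(ad["gas", "ion_number_density"])
print(ad["gas", "number_density"])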
.. rubric:: Caveats
* Please be careful that the units are correctly utilized; yt assumes cgs by default, but conversion to
other unit systems is also possible.
.. _loading-gadget-data:
Gadget Data
-----------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
yt has support for reading Gadget data in both raw binary and HDF5 formats. It
is able to access the particles as it would any other particle dataset, and it
can apply smoothing kernels to the data to produce both quantitative analysis
and visualization. See :ref:`loading-sph-data` for more details and
:ref:`gadget-notebook` for a detailed example of loading, analyzing, and
visualizing a Gadget dataset. An example which makes use of a Gadget snapshot
from the OWLS project can be found at :ref:`owls-notebook`.
.. note::
If you are loading a multi-file dataset with Gadget, you can either supply the *zeroth*
file to the ``load`` command or the directory containing all of the files.
For instance, to load the *zeroth* file: ``yt.load("snapshot_061.0.hdf5")`` . To
give just the directory, if you have all of your ``snapshot_000.*`` files in a directory
called ``snapshot_000``, do: ``yt.load("/path/to/snapshot_000")``.
Gadget data in HDF5 format can be loaded with the ``load`` command:
.. code-block:: python
import yt
ds = yt.load("snapshot_061.hdf5")
Gadget data in raw binary format can also be loaded with the ``load`` command.
This is supported for snapshots created with the ``SnapFormat`` parameter
set to 1 or 2.
.. code-block:: python
import yt
ds = yt.load("snapshot_061")
.. _particle-bbox:
Units and Bounding Boxes
^^^^^^^^^^^^^^^^^^^^^^^^
There are two additional pieces of information that may be needed. If your
simulation is cosmological, yt can often guess the bounding box and the units of
the simulation. However, for isolated simulations and for cosmological
simulations with non-standard units, these must be supplied by the user. For
example, if a length unit of 1.0 corresponds to a kiloparsec, you can supply
this in the constructor. yt can accept units such as ``Mpc``, ``kpc``, ``cm``,
``Mpccm/h`` and so on. In particular, note that ``Mpc/h`` and ``Mpccm/h``
(``cm`` for comoving here) are usable unit definitions.
yt will attempt to use units for ``mass``, ``length``, ``time``, and
``magnetic`` as supplied in the argument ``unit_base``. The ``bounding_box``
argument is a list of two-item tuples or lists that describe the left and right
extents of the particles. In this example we load a dataset with a custom bounding
box and units.
.. code-block:: python
bbox = [[-600.0, 600.0], [-600.0, 600.0], [-600.0, 600.0]]
unit_base = {
"length": (1.0, "kpc"),
"velocity": (1.0, "km/s"),
"mass": (1.0, "Msun"),
}
ds = yt.load("snap_004", unit_base=unit_base, bounding_box=bbox)
.. warning::
If a ``bounding_box`` argument is supplied and the original dataset
has periodic boundaries, it will no longer have periodic boundaries
after the bounding box is applied.
In addition, you can use ``UnitLength_in_cm``, ``UnitVelocity_in_cm_per_s``,
``UnitMass_in_g``, and ``UnitMagneticField_in_gauss`` as keys for the
``unit_base`` dictionary. These names come from the names used in the Gadget
runtime parameter file. This example will initialize a dataset with the same
units as the example above:
.. code-block:: python
unit_base = {
"UnitLength_in_cm": 3.09e21,
"UnitVelocity_in_cm_per_s": 1e5,
"UnitMass_in_g": 1.989e33,
}
ds = yt.load("snap_004", unit_base=unit_base, bounding_box=bbox)
.. _gadget-field-spec:
Field Specifications
^^^^^^^^^^^^^^^^^^^^
Binary Gadget outputs often have additional fields or particle types that are
non-standard from the default Gadget distribution format. These can be
specified in the call to ``GadgetDataset`` by either supplying one of the
sets of field specifications as a string or by supplying a field specification
itself. As an example, yt has built-in definitions for ``default`` (the
default), ``agora_unlv``, ``group0000``, and ``magneticum_box2_hr``. They can
be used like this:
.. code-block:: python
ds = yt.load("snap_100", field_spec="group0000")
Field specifications must be tuples, and must be of this format:
.. code-block:: python
default = (
"Coordinates",
"Velocities",
"ParticleIDs",
"Mass",
("InternalEnergy", "Gas"),
("Density", "Gas"),
("SmoothingLength", "Gas"),
)
This is the default specification used by the Gadget frontend. It means that
the fields are, in order, Coordinates, Velocities, ParticleIDs, Mass, and the
fields InternalEnergy, Density and SmoothingLength *only* for Gas particles.
So for example, if you have defined a Metallicity field for the particle type
Halo, which comes right after ParticleIDs in the file, you could define it like
this:
.. code-block:: python
import yt
my_field_def = (
"Coordinates",
"Velocities",
"ParticleIDs",
("Metallicity", "Halo"),
"Mass",
("InternalEnergy", "Gas"),
("Density", "Gas"),
("SmoothingLength", "Gas"),
)
ds = yt.load("snap_100", field_spec=my_field_def)
To save time, you can utilize the plugins file for yt and use it to add items
to the dictionary where these definitions are stored. You could do this like
so:
.. code-block:: python
import yt
from yt.frontends.gadget.definitions import gadget_field_specs
gadget_field_specs["my_field_def"] = my_field_def
ds = yt.load("snap_100", field_spec="my_field_def")
Please also feel free to issue a pull request with any new field
specifications, as we're happy to include them in the main distribution!
Magneticum halos downloaded using the SIMCUT method from the
`Cosmological Web Portal <https://c2papcosmosim.uc.lrz.de/>`_ can be loaded
using the ``"magneticum_box2_hr"`` value for the ``field_spec`` argument.
However, this is strictly only true for halos downloaded after May 14, 2021,
since before then the halos had the following signature (with the ``"StellarAge"``
field for the ``"Bndry"`` particles missing):
.. code-block:: python
magneticum_box2_hr = (
"Coordinates",
"Velocities",
"ParticleIDs",
"Mass",
("InternalEnergy", "Gas"),
("Density", "Gas"),
("SmoothingLength", "Gas"),
("ColdFraction", "Gas"),
("Temperature", "Gas"),
("StellarAge", "Stars"),
"Potential",
("InitialMass", "Stars"),
("ElevenMetalMasses", ("Gas", "Stars")),
("StarFormationRate", "Gas"),
("TrueMass", "Bndry"),
("AccretionRate", "Bndry"),
)
and before November 20, 2020, the field specification had the ``"ParticleIDs"`` and ``"Mass"``
fields swapped:
.. code-block:: python
magneticum_box2_hr = (
"Coordinates",
"Velocities",
"Mass",
"ParticleIDs",
("InternalEnergy", "Gas"),
("Density", "Gas"),
("SmoothingLength", "Gas"),
("ColdFraction", "Gas"),
("Temperature", "Gas"),
("StellarAge", "Stars"),
"Potential",
("InitialMass", "Stars"),
("ElevenMetalMasses", ("Gas", "Stars")),
("StarFormationRate", "Gas"),
("TrueMass", "Bndry"),
("AccretionRate", "Bndry"),
)
In general, to determine what fields are in your Gadget binary file, it may
be useful to inspect them with the `g3read <https://github.com/aragagnin/g3read>`_
code first.
.. _gadget-long-ids:
Long Particle IDs
^^^^^^^^^^^^^^^^^
Some Gadget binary files use 64-bit integers for particle IDs. To use these,
simply set ``long_ids=True`` when loading the dataset:
.. code-block:: python
import yt
ds = yt.load("snap_100", long_ids=True)
This is needed, for example, for Magneticum halos downloaded using the SIMCUT
method from the `Cosmological Web Portal <https://c2papcosmosim.uc.lrz.de/>`_
.. _gadget-ptype-spec:
Particle Type Definitions
^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases, research groups add new particle types or re-order them. You
can supply alternate particle types by using the keyword ``ptype_spec`` to the
``GadgetDataset`` call. The default for Gadget binary data is:
.. code-block:: python
("Gas", "Halo", "Disk", "Bulge", "Stars", "Bndry")
You can specify alternate names, but note that this may cause problems with the
field specification if none of the names match old names.
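For example, a minimal sketch in which the second particle type has been renamed (the names are illustrative and must match your simulation's actual ordering):
.. code-block:: python
import yt
my_ptypes = ("Gas", "DarkMatter", "Disk", "Bulge", "Stars", "Bndry")
ds = yt.load("snap_100", ptype_spec=my_ptypes)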
.. _gadget-header-spec:
Header Specification
^^^^^^^^^^^^^^^^^^^^
If you have modified the header in your Gadget binary file, you can specify an
alternate header specification with the keyword ``header_spec``. This can
either be a list of strings corresponding to individual header types known to
yt, or it can be a combination of strings and header specifications. The
default header specification (found in ``yt/frontends/gadget/definitions.py``) is:
.. code-block:: python
default = (
("Npart", 6, "i"),
("Massarr", 6, "d"),
("Time", 1, "d"),
("Redshift", 1, "d"),
("FlagSfr", 1, "i"),
("FlagFeedback", 1, "i"),
("Nall", 6, "i"),
("FlagCooling", 1, "i"),
("NumFiles", 1, "i"),
("BoxSize", 1, "d"),
("Omega0", 1, "d"),
("OmegaLambda", 1, "d"),
("HubbleParam", 1, "d"),
("FlagAge", 1, "i"),
("FlagMEtals", 1, "i"),
("NallHW", 6, "i"),
("unused", 16, "i"),
)
These items will all be accessible inside the object ``ds.parameters``, which
is a dictionary. You can add combinations of new items, specified in the same
way, or alternately other types of headers. The other string keys defined are
``pad32``, ``pad64``, ``pad128``, and ``pad256`` each of which corresponds to
an empty padding in bytes. For example, if you have an additional 256 bytes of
padding at the end, you can specify this with:
.. code-block:: python
header_spec = "default+pad256"
Note that a single string like this means a single header block. To specify
multiple header blocks, use a list of strings instead:
.. code-block:: python
header_spec = ["default", "pad256"]
This can then be supplied to the constructor. Note that you can also define
header items manually, for instance with:
.. code-block:: python
from yt.frontends.gadget.definitions import gadget_header_specs
gadget_header_specs["custom"] = (("some_value", 8, "d"), ("another_value", 1, "i"))
header_spec = "default+custom"
The letters correspond to data types from the Python struct module. Please
feel free to submit alternate header types to the main yt repository.
.. _specifying-gadget-units:
Specifying Units
^^^^^^^^^^^^^^^^
If you are running a cosmology simulation, yt will be able to guess the units
with some reliability. However, if you are not, and you do not specify a
``unit_base``, yt will not be able to guess them and will fall back on the defaults of
a length unit of 1.0 Mpc/h (comoving), a velocity unit of cm/s, and a mass unit of
10^10 Msun/h. You can specify alternate units by supplying the ``unit_base`` keyword
argument of this form:
.. code-block:: python
unit_base = {"length": (1.0, "cm"), "mass": (1.0, "g"), "time": (1.0, "s")}
yt will utilize length, mass and time to set up all other units.
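The dictionary is then handed to ``load``; a brief sketch (the snapshot name is illustrative):
.. code-block:: python
import yt
unit_base = {"length": (1.0, "cm"), "mass": (1.0, "g"), "time": (1.0, "s")}
ds = yt.load("snapshot_061.hdf5", unit_base=unit_base)
print(ds.length_unit, ds.mass_unit, ds.time_unit)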
.. _loading-swift-data:
SWIFT Data
----------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
yt has support for reading in SWIFT data from the HDF5 file format. It is able
to access all particles and fields which are stored on-disk and it is also able
to generate derived fields, e.g., linear momentum from on-disk fields.
It is also possible to smooth the data onto a grid or an octree. This
interpolation can be done using an SPH kernel using either the scatter or gather
approach. The SWIFT frontend is supported and cared for by Ashley Kelly.
SWIFT data in HDF5 format can be loaded with the ``load`` command:
.. code-block:: python
import yt
ds = yt.load("EAGLE_6/eagle_0005.hdf5")
.. _arepo-data:
Arepo Data
----------
.. note::
For more information about how yt indexes and reads discrete data, see the
section :ref:`demeshening`.
Arepo data is currently treated as SPH data. The gas cells have smoothing lengths
assigned using the following prescription for a given gas cell :math:`i`:
.. math::
h_{\rm sml} = \alpha\left(\frac{3}{4\pi}\frac{m_i}{\rho_i}\right)^{1/3}
where :math:`\alpha` is a constant factor. By default, :math:`\alpha = 2`. In
practice, smoothing lengths are only used for creating slices and projections,
and this value of :math:`\alpha` works well for this purpose. However, this
value can be changed when loading an Arepo dataset by setting the
``smoothing_factor`` parameter:
.. code-block:: python
import yt
ds = yt.load("snapshot_100.hdf5", smoothing_factor=1.5)
Currently, only Arepo HDF5 snapshots are supported.
If the "GFM" metal fields are present in your dataset, they will be loaded in
and aliased to the appropriate species fields in the ``"GFM_Metals"`` field
on-disk. For more information, see the
`Illustris TNG documentation <http://www.tng-project.org/data/docs/specifications/#sec1b>`_.
If passive scalar fields are present in your dataset, they will be loaded in
and aliased to fields with the naming convention ``"PassiveScalars_XX"`` where
``XX`` is the number of the passive scalar array, e.g. ``"00"``, ``"01"``, etc.
HDF5 snapshots will be detected as Arepo data if they have the ``"GFM_Metals"``
field present, or if they have a ``"Config"`` group in the header. If neither of
these are the case, and your snapshot *is* Arepo data, you can fix this with the
following:
.. code-block:: python
import h5py
with h5py.File(saved_filename, "r+") as f:
f.create_group("Config")
f["/Config"].attrs["VORONOI"] = 1
.. _loading-gamer-data:
GAMER Data
----------
GAMER HDF5 data is supported and cared for by Hsi-Yu Schive and John ZuHone.
Datasets using hydrodynamics, particles, magnetohydrodynamics, wave dark matter,
and special relativistic hydrodynamics are supported. You can load the data like
this:
.. code-block:: python
import yt
ds = yt.load("InteractingJets/jet_000002")
For simulations without units (i.e., ``OPT__UNIT = 0``), you can supply conversions
for length, time, and mass to ``load`` using the ``units_override``
functionality:
.. code-block:: python
import yt
code_units = {
"length_unit": (1.0, "kpc"),
"time_unit": (3.08567758096e13, "s"),
"mass_unit": (1.4690033e36, "g"),
}
ds = yt.load("InteractingJets/jet_000002", units_override=code_units)
Particle data are supported and are always stored in the same file as the grid
data.
For special relativistic simulations, both the gamma-law and Taub-Mathews EOSes
are supported, and the following fields are defined:
* ``("gas", "density")``: Comoving rest-mass density :math:`\rho`
* ``("gas", "frame_density")``: Coordinate-frame density :math:`D = \gamma\rho`
* ``("gas", "gamma")``: Ratio of specific heats :math:`\Gamma`
* ``("gas", "four_velocity_[txyz]")``: Four-velocity fields :math:`U_t, U_x, U_y, U_z`
* ``("gas", "lorentz_factor")``: Lorentz factor :math:`\gamma = \sqrt{1+U_iU^i/c^2}`
(where :math:`i` runs over the spatial indices)
These, and other fields following them (3-velocity, energy densities, etc.) are
computed in the same manner as in the
`GAMER-SR paper <https://ui.adsabs.harvard.edu/abs/2021MNRAS.504.3298T/abstract>`_
to avoid catastrophic cancellations.
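As a hedged sketch of inspecting these fields (the dataset name is purely illustrative; substitute one of your own special relativistic GAMER outputs):
.. code-block:: python
import yt
ds = yt.load("SRJet/Data_000010")  # hypothetical SR GAMER output
ad = ds.all_data()
print(ad["gas", "lorentz_factor"].max())
slc = yt.SlicePlot(ds, "z", ("gas", "frame_density"))
slc.save()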
.. rubric:: Caveats
* GAMER data in raw binary format (i.e., ``OPT__OUTPUT_TOTAL = "C-binary"``) is not
supported.
.. _loading-amr-data:
Generic AMR Data
----------------
See :ref:`loading-numpy-array` and
:func:`~yt.frontends.stream.data_structures.load_amr_grids` for more detail.
.. note::
It is now possible to load data using *only functions*, rather than using the
fully-in-memory method presented here. For more information and examples,
see :ref:`loading-via-functions`.
It is possible to create a native yt dataset from a list of Python dictionaries,
each describing a rectangular patch of data, possibly at varying
resolution.
.. code-block:: python
import numpy as np
import yt
grid_data = [
dict(
left_edge=[0.0, 0.0, 0.0],
right_edge=[1.0, 1.0, 1.0],
level=0,
dimensions=[32, 32, 32],
),
dict(
left_edge=[0.25, 0.25, 0.25],
right_edge=[0.75, 0.75, 0.75],
level=1,
dimensions=[32, 32, 32],
),
]
for g in grid_data:
g["density"] = np.random.random(g["dimensions"]) * 2 ** g["level"]
ds = yt.load_amr_grids(grid_data, [32, 32, 32], 1.0)
.. note::
yt only supports a block structure where the grid edges on the ``n``-th
refinement level are aligned with the cell edges on the ``n-1``-th level.
Particle fields are supported by adding 1-dimensional arrays to each
``grid``'s dict:
.. code-block:: python
for g in grid_data:
g["particle_position_x"] = np.random.random(size=100000)
.. rubric:: Caveats
* Some functions may behave oddly, and parallelism will be disappointing or
non-existent in most cases.
* No consistency checks are performed on the index
* Data must already reside in memory.
* Consistency between particle positions and grids is not checked;
``load_amr_grids`` assumes that particle positions associated with one grid are
not bounded within another grid at a higher level, so this must be
ensured by the user prior to loading the grid data.
Generic Array Data
------------------
See :ref:`loading-numpy-array` and
:func:`~yt.frontends.stream.data_structures.load_uniform_grid` for more detail.
Even if your data is not strictly related to fields commonly used in
astrophysical codes or your code is not supported yet, you can still feed it to
yt to use its advanced visualization and analysis facilities. The only
requirement is that your data can be represented as one or more uniform, three
dimensional numpy arrays. Assuming that you have your data in ``arr``,
the following code:
.. code-block:: python
import numpy as np
import yt
data = dict(Density=arr)
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, arr.shape, 3.08e24, bbox=bbox, nprocs=12)
will create a yt-native dataset ``ds`` that treats your array as a
density field in a cubic domain with a 3 Mpc edge length (3 * 3.08e24 cm) and
simultaneously divides the domain into 12 chunks, so that you can take advantage
of the underlying parallelism.
Particle fields are added as one-dimensional arrays in a similar manner as the
three-dimensional grid fields:
.. code-block:: python
import numpy as np
import yt
data = dict(
Density=dens,
particle_position_x=posx_arr,
particle_position_y=posy_arr,
particle_position_z=posz_arr,
)
bbox = np.array([[-1.5, 1.5], [-1.5, 1.5], [-1.5, 1.5]])
ds = yt.load_uniform_grid(data, dens.shape, 3.08e24, bbox=bbox, nprocs=12)
where in this example the particle position fields have been assigned. If no
particle fields are supplied, then the number of particles is assumed to be
zero.
.. rubric:: Caveats
* Particles may be difficult to integrate.
* Data must already reside in memory.
.. _loading-semi-structured-mesh-data:
Semi-Structured Grid Data
-------------------------
.. note::
With the release of yt-4.1, functionality has been added to allow loading
"stretched" grids that are operated on in a more efficient way. This is done
via the :func:`~yt.frontends.stream.data_structures.load_uniform_grid`
operation, supplying the ``cell_widths`` argument. Using the hexahedral mesh
is no longer suggested for situations where the mesh can be adequately
described with three arrays of cell widths.
See :ref:`loading-stretched-grids` for more information.
See :ref:`loading-numpy-array`,
:func:`~yt.frontends.stream.data_structures.hexahedral_connectivity`,
:func:`~yt.frontends.stream.data_structures.load_hexahedral_mesh` for
more detail.
In addition to uniform grids as described above, you can load in data
with non-uniform spacing between datapoints. To load this type of
data, you must first specify a hexahedral mesh, a mesh of six-sided
cells, on which it will live. You define this by specifying the x,y,
and z locations of the corners of the hexahedral cells. The following
code:
.. code-block:: python
import numpy
import yt
xgrid = numpy.array([-1, -0.65, 0, 0.65, 1])
ygrid = numpy.array([-1, 0, 1])
zgrid = numpy.array([-1, -0.447, 0.447, 1])
coordinates, connectivity = yt.hexahedral_connectivity(xgrid, ygrid, zgrid)
will define the (x,y,z) coordinates of the hexahedral cells and
information about that cell's neighbors such that the cell corners
will be a grid of points constructed as the Cartesian product of
xgrid, ygrid, and zgrid.
Then, to load your data, which should be defined on the interiors of
the hexahedral cells, and thus should have the shape,
``(len(xgrid)-1, len(ygrid)-1, len(zgrid)-1)``, you can use the following code:
.. code-block:: python
bbox = numpy.array(
[
[numpy.min(xgrid), numpy.max(xgrid)],
[numpy.min(ygrid), numpy.max(ygrid)],
[numpy.min(zgrid), numpy.max(zgrid)],
]
)
data = {"density": arr}
ds = yt.load_hexahedral_mesh(data, connectivity, coordinates, 1.0, bbox=bbox)
to load your data into the dataset ``ds`` as described above, where we
have assumed your data is stored in the three-dimensional array
``arr``.
.. rubric:: Caveats
* Integration is not implemented.
* Some functions may behave oddly or not work at all.
* Data must already reside in memory.
.. _loading-stretched-grids:
Stretched Grid Data
-------------------
.. warning::
API consistency for loading stretched grids is not guaranteed until at least
yt 4.2! There may be changes in between then and now, as this is a
preliminary feature.
With version 4.1, yt has the ability to specify cell widths for grids. This
allows situations where a grid has a functional form for cell widths, or where
widths are provided in advance.
.. note::
At present, support is available for a single grid with varying cell-widths,
loaded through the stream handler. Future versions of yt will have more
complete and flexible support!
To load a stretched grid, you use the standard (and now rather-poorly named)
``load_uniform_grid`` function, but supplying a ``cell_widths`` argument. This
argument should be a list of three arrays, corresponding to the first, second
and third index-direction cell widths. (For instance, in a "standard"
cartesian dataset, this would be x, y, z.)
The following script demonstrates loading a simple "random" dataset with a
random set of cell widths.
.. code-block:: python
import yt
import numpy as np
N = 8
data = {"density": np.random.random((N, N, N))}
cell_widths = []
for i in range(3):
widths = np.random.random(N)
widths /= widths.sum() # Normalize to span 0 .. 1.
cell_widths.append(widths)
ds = yt.load_uniform_grid(
data,
[N, N, N],
bbox=np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
cell_widths=cell_widths,
)
This can be modified to load data from a file, as well as to use more (or
fewer) cells.
Unstructured Grid Data
----------------------
See :ref:`loading-numpy-array`,
:func:`~yt.frontends.stream.data_structures.load_unstructured_mesh` for
more detail.
In addition to the above grid types, you can also load data stored on
unstructured meshes. This type of mesh is used, for example, in many
finite element calculations. Currently, hexahedral and tetrahedral
mesh elements are supported.
To load an unstructured mesh, you need to specify the following. First,
you need to have a coordinates array, which should be an (L, 3) array
that stores the (x, y, z) positions of all of the vertices in the mesh.
Second, you need to specify a connectivity array, which describes how
those vertices are connected into mesh elements. The connectivity array
should be (N, M), where N is the number of elements and M is the
connectivity length, i.e. the number of vertices per element. Finally,
you must also specify a data dictionary, where the keys should be
the names of the fields and the values should be numpy arrays that
contain the field data. These arrays can either supply the cell-averaged
data for each element, in which case they would be (N, 1), or they
can have node-centered data, in which case they would also be (N, M).
Here is an example of how to load an in-memory, unstructured mesh dataset:
.. code-block:: python
import numpy as np
import yt
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64)
connect = np.array([[0, 1, 3], [1, 2, 3]], dtype=np.int64)
data = {}
data["connect1", "test"] = np.array(
[[0.0, 1.0, 3.0], [1.0, 2.0, 3.0]], dtype=np.float64
)
Here, we have made up a simple, 2D unstructured mesh dataset consisting of two
triangles and one node-centered data field. This data can be loaded as an in-memory
dataset as follows:
.. code-block:: python
ds = yt.load_unstructured_mesh(connect, coords, data)
The in-memory dataset can then be visualized as usual, e.g.:
.. code-block:: python
sl = yt.SlicePlot(ds, "z", ("connect1", "test"))
sl.annotate_mesh_lines()
Note that ``load_unstructured_mesh`` can take either a single mesh or a list of meshes.
To load multiple meshes, you can do:
.. code-block:: python
import numpy as np
import yt
coordsMulti = np.array(
[[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float64
)
connect1 = np.array(
[
[0, 1, 3],
],
dtype=np.int64,
)
connect2 = np.array(
[
[1, 2, 3],
],
dtype=np.int64,
)
data1 = {}
data2 = {}
data1["connect1", "test"] = np.array(
[
[0.0, 1.0, 3.0],
],
dtype=np.float64,
)
data2["connect2", "test"] = np.array(
[
[1.0, 2.0, 3.0],
],
dtype=np.float64,
)
connectList = [connect1, connect2]
dataList = [data1, data2]
ds = yt.load_unstructured_mesh(connectList, coordsMulti, dataList)
# only plot the first mesh
sl = yt.SlicePlot(ds, "z", ("connect1", "test"))
# only plot the second
sl = yt.SlicePlot(ds, "z", ("connect2", "test"))
# plot both
sl = yt.SlicePlot(ds, "z", ("all", "test"))
Note that you must respect the field naming convention that fields on the first
mesh will have the type ``connect1``, fields on the second will have ``connect2``, etc...
.. rubric:: Caveats
* Integration is not implemented.
* Some functions may behave oddly or not work at all.
* Data must already reside in memory.
Generic Particle Data
---------------------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
See :ref:`generic-particle-data` and
:func:`~yt.frontends.stream.data_structures.load_particles` for more detail.
You can also load generic particle data using the same ``stream`` functionality
discussed above to load in-memory grid data. For example, if your particle
positions and masses are stored in ``positions`` and ``masses``, a
vertically-stacked array of particle x,y, and z positions, and a 1D array of
particle masses respectively, you would load them like this:
.. code-block:: python
import yt
data = dict(particle_position=positions, particle_mass=masses)
ds = yt.load_particles(data)
You can also load data using 1D x, y, and z position arrays:
.. code-block:: python
import yt
data = dict(
particle_position_x=posx,
particle_position_y=posy,
particle_position_z=posz,
particle_mass=masses,
)
ds = yt.load_particles(data)
The ``load_particles`` function also accepts the following keyword parameters:
``length_unit``
The units used for particle positions.
``mass_unit``
The units of the particle masses.
``time_unit``
The units used to represent times. This is optional and is only used if
your data contains a ``creation_time`` field or a ``particle_velocity`` field.
``velocity_unit``
The units used to represent velocities. This is optional and is only used
if you supply a velocity field. If this is not supplied, it is inferred from
the length and time units.
``bbox``
The bounding box for the particle positions.
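A minimal sketch combining these keywords (the array contents and unit choices are illustrative):
.. code-block:: python
import numpy as np
import yt
n_particles = 10000
data = dict(
    particle_position_x=np.random.random(n_particles),
    particle_position_y=np.random.random(n_particles),
    particle_position_z=np.random.random(n_particles),
    particle_mass=np.ones(n_particles),
)
bbox = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
ds = yt.load_particles(data, length_unit="Mpc", mass_unit=(1.0e10, "Msun"), bbox=bbox)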
.. _smooth-non-sph:
Adding Smoothing Lengths for Non-SPH Particles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A novel use of the ``load_particles`` function is to facilitate SPH
visualization of non-SPH particles. See the example below:
.. code-block:: python
import yt
# Load dataset and center on the dense region
ds = yt.load("FIRE_M12i_ref11/snapshot_600.hdf5")
_, center = ds.find_max(("PartType0", "density"))
# Reload DM particles into a stream dataset
ad = ds.all_data()
pt = "PartType1"
fields = ["particle_mass"] + [f"particle_position_{ax}" for ax in "xyz"]
data = {field: ad[pt, field] for field in fields}
ds_dm = yt.load_particles(data, data_source=ad)
# Generate the missing SPH fields
ds_dm.add_sph_fields()
# Make the SPH projection plot
p = yt.ProjectionPlot(ds_dm, "z", ("io", "density"), center=center, width=(1, "Mpc"))
p.set_unit(("io", "density"), "Msun/kpc**2")
p.show()
Here we see two new things. First, ``load_particles`` accepts a ``data_source``
argument to infer parameters like code units, which could be tedious to provide
otherwise. Second, the returned
:class:`~yt.frontends.stream.data_structures.StreamParticleDataset` has an
:meth:`~yt.frontends.stream.data_structures.StreamParticleDataset.add_sph_fields`
method, to create the ``smoothing_length`` and ``density`` fields required for
SPH visualization to work.
.. _loading-gizmo-data:
Gizmo Data
----------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
Gizmo datasets, including FIRE outputs, can be loaded into yt in the usual
manner. Like other SPH data formats, yt loads Gizmo data as particle fields
and then uses smoothing kernels to deposit those fields to an underlying
grid structure as spatial fields as described in :ref:`loading-gadget-data`.
To load Gizmo datasets using the standard HDF5 output format::
import yt
ds = yt.load("snapshot_600.hdf5")
Because the Gizmo output format is similar to the Gadget format, yt
may load Gizmo datasets as Gadget depending on the circumstances, but this
should not pose a problem in most situations. FIRE outputs will be loaded
accordingly due to the number of metallicity fields found (11 or 17).
If ``("PartType0", "MagneticField")`` is present in the output, it would be
loaded and aliased to ``("PartType0", "particle_magnetic_field")``. The
corresponding component field like ``("PartType0", "particle_magnetic_field_x")``
would be added automatically.
Note that the ``("PartType4", "StellarFormationTime")`` field has a different
meaning depending on whether the simulation is cosmological. For cosmological
runs this is the scale factor at the redshift when the star particle formed.
For non-cosmological runs it is the time when the star particle formed. (See the
`GIZMO User Guide <http://www.tapir.caltech.edu/~phopkins/Site/GIZMO_files/gizmo_documentation.html>`_)
For this reason, ``("PartType4", "StellarFormationTime")`` is loaded as a
dimensionless field. We define two related fields,
``("PartType4", "creation_time")`` and ``("PartType4", "age")``, with physical
units for your convenience.
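For example (a sketch assuming the snapshot contains star particles):
.. code-block:: python
import yt
ds = yt.load("snapshot_600.hdf5")
ad = ds.all_data()
print(ad["PartType4", "creation_time"].to("Gyr"))
print(ad["PartType4", "age"].to("Gyr"))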
For Gizmo outputs written as raw binary outputs, you may have to specify
a bounding box, field specification, and units as are done for standard
Gadget outputs. See :ref:`loading-gadget-data` for more information.
.. _halo-catalog-data:
Halo Catalog Data
-----------------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
yt has support for reading halo catalogs produced by the AdaptaHOP, Amiga Halo
Finder (AHF), Rockstar and the inline FOF/SUBFIND halo finders of Gadget and
OWLS. The halo catalogs are treated as particle datasets where each particle
represents a single halo. For example, this means that the ``"particle_mass"``
field refers to the mass of the halos. For Gadget FOF/SUBFIND catalogs, the
member particles for a given halo can be accessed by creating ``halo`` data
containers. See :ref:`halo_containers` for more information.
If you have access to both the halo catalog and the simulation snapshot from
the same redshift, additional analysis can be performed for each halo using
:ref:`halo-analysis`. The resulting product can be reloaded in a similar manner
to the other halo catalogs shown here.
AdaptaHOP
^^^^^^^^^
`AdaptaHOP <https://ascl.net/1305.004>`_ halo catalogs are loaded by providing
the path to the ``tree_bricksXXX`` file. As the halo catalog does not contain
all the information about the simulation (for example the cosmological
parameters), you also need to pass the parent dataset for it to load correctly.
Some fields of note available from AdaptaHOP are:
+---------------------+---------------------------+
| AdaptaHOP field | yt field name |
+=====================+===========================+
| halo id | particle_identifier |
+---------------------+---------------------------+
| halo mass | particle_mass |
+---------------------+---------------------------+
| virial mass | virial_mass |
+---------------------+---------------------------+
| virial radius | virial_radius |
+---------------------+---------------------------+
| virial temperature | virial_temperature |
+---------------------+---------------------------+
| halo position | particle_position_(x,y,z) |
+---------------------+---------------------------+
| halo velocity | particle_velocity_(x,y,z) |
+---------------------+---------------------------+
Numerous other AdaptaHOP fields exist. To see them, check the field list by
typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets,
fields must be accessed through :ref:`Data-objects`.
.. code-block:: python
import yt
parent_ds = yt.load("output_00080/info_00080.txt")
ds = yt.load("output_00080_halos/tree_bricks080", parent_ds=parent_ds)
ad = ds.all_data()
# halo masses
print(ad["halos", "particle_mass"])
# halo radii
print(ad["halos", "virial_radius"])
Halo Data Containers
""""""""""""""""""""
Halo member particles are accessed by creating halo data containers with
the halo id and the type of the particles. Scalar values for halos
can be accessed in the same way. Halos also have mass, position, velocity, and
member ids attributes.
.. code-block:: python
halo = ds.halo(1, ptype="io")
# member particles for this halo
print(halo.member_ids)
# masses of the halo particles
print(halo["io", "particle_mass"])
# halo mass
print(halo.mass)
In addition, the halo container contains a sphere container. This is the smallest
sphere that contains all of the halo's particles.
.. code-block:: python
halo = ds.halo(1, ptype="io")
sp = halo.sphere
# Density in halo
sp["gas", "density"]
# Entropy in halo
sp["gas", "entropy"]
.. _ahf:
Amiga Halo Finder
^^^^^^^^^^^^^^^^^
Amiga Halo Finder (AHF) halo catalogs are loaded by providing the path to the
.parameter files. The corresponding .log and .AHF_halos files must exist for
data loading to succeed. The field type for all fields is "halos". Some fields
of note available from AHF are:
+----------------+---------------------------+
| AHF field | yt field name |
+================+===========================+
| ID | particle_identifier |
+----------------+---------------------------+
| Mvir | particle_mass |
+----------------+---------------------------+
| Rvir | virial_radius |
+----------------+---------------------------+
| (X,Y,Z)c | particle_position_(x,y,z) |
+----------------+---------------------------+
| V(X,Y,Z)c | particle_velocity_(x,y,z) |
+----------------+---------------------------+
Numerous other AHF fields exist. To see them, check the field list by typing
``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets, fields
must be accessed through :ref:`Data-objects`.
.. code-block:: python
import yt
ds = yt.load("ahf_halos/snap_N64L16_135.parameter", hubble_constant=0.7)
ad = ds.all_data()
# halo masses
print(ad["halos", "particle_mass"])
# halo radii
print(ad["halos", "virial_radius"])
.. note::
Currently the dimensionless Hubble parameter that yt needs is not provided in
AHF outputs. So users need to provide the ``hubble_constant`` (default to 1.0)
while loading datasets, as shown above.
.. _rockstar:
Rockstar
^^^^^^^^
Rockstar halo catalogs are loaded by providing the path to one of the .bin files.
In the case where multiple files were produced, one need only provide the path
to a single one of them. The field type for all fields is "halos". Some fields
of note available from Rockstar are:
+----------------+---------------------------+
| Rockstar field | yt field name |
+================+===========================+
| halo id | particle_identifier |
+----------------+---------------------------+
| virial mass | particle_mass |
+----------------+---------------------------+
| virial radius | virial_radius |
+----------------+---------------------------+
| halo position | particle_position_(x,y,z) |
+----------------+---------------------------+
| halo velocity | particle_velocity_(x,y,z) |
+----------------+---------------------------+
Numerous other Rockstar fields exist. To see them, check the field list by
typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all other datasets,
fields must be accessed through :ref:`Data-objects`.
.. code-block:: python
import yt
ds = yt.load("rockstar_halos/halos_0.0.bin")
ad = ds.all_data()
# halo masses
print(ad["halos", "particle_mass"])
# halo radii
print(ad["halos", "virial_radius"])
.. _gadget_fof:
Gadget FOF/SUBFIND
^^^^^^^^^^^^^^^^^^
Gadget FOF/SUBFIND halo catalogs work in the same way as those created by
:ref:`rockstar`, except there are two field types: ``FOF`` for friend-of-friends
groups and ``Subhalo`` for halos found with the SUBFIND substructure finder.
Also like Rockstar, there are a number of fields specific to these halo
catalogs.
+-------------------+---------------------------+
| FOF/SUBFIND field | yt field name |
+===================+===========================+
| halo id | particle_identifier |
+-------------------+---------------------------+
| halo mass | particle_mass |
+-------------------+---------------------------+
| halo position | particle_position_(x,y,z) |
+-------------------+---------------------------+
| halo velocity | particle_velocity_(x,y,z) |
+-------------------+---------------------------+
| num. of particles | particle_number |
+-------------------+---------------------------+
| num. of subhalos | subhalo_number (FOF only) |
+-------------------+---------------------------+
Many other fields exist, especially for SUBFIND subhalos. Check the field
list by typing ``ds.field_list`` for a dataset loaded as ``ds``. Like all
other datasets, fields must be accessed through :ref:`Data-objects`.
.. code-block:: python
import yt
ds = yt.load("gadget_fof_halos/groups_042/fof_subhalo_tab_042.0.hdf5")
ad = ds.all_data()
# The halo mass
print(ad["Group", "particle_mass"])
print(ad["Subhalo", "particle_mass"])
# Halo ID
print(ad["Group", "particle_identifier"])
print(ad["Subhalo", "particle_identifier"])
# positions
print(ad["Group", "particle_position_x"])
# velocities
print(ad["Group", "particle_velocity_x"])
Multidimensional fields can be accessed through the field name followed by an
underscore and the index.
.. code-block:: python
# x component of the spin
print(ad["Subhalo", "SubhaloSpin_0"])
.. _halo_containers:
Halo Data Containers
""""""""""""""""""""
Halo member particles are accessed by creating halo data containers with the
type of halo ("Group" or "Subhalo") and the halo id. Scalar values for halos
can be accessed in the same way. Halos also have mass, position, and velocity
attributes.
.. code-block:: python
halo = ds.halo("Group", 0)
# member particles for this halo
print(halo["member_ids"])
# halo virial radius
print(halo["Group_R_Crit200"])
# halo mass
print(halo.mass)
Subhalos containers can be created using either their absolute ids or their
subhalo ids.
.. code-block:: python
# first subhalo of the first halo
subhalo = ds.halo("Subhalo", (0, 0))
# this subhalo's absolute id
print(subhalo.group_identifier)
# member particles
print(subhalo["member_ids"])
OWLS FOF/SUBFIND
^^^^^^^^^^^^^^^^
OWLS halo catalogs have a very similar structure to regular Gadget halo catalogs.
The two field types are ``FOF`` and ``SUBFIND``. See :ref:`gadget_fof` for more
information. At this time, halo member particles cannot be loaded.
.. code-block:: python
import yt
ds = yt.load("owls_fof_halos/groups_008/group_008.0.hdf5")
ad = ds.all_data()
# The halo mass
print(ad["FOF", "particle_mass"])
.. _halocatalog:
YTHaloCatalog
^^^^^^^^^^^^^
These are catalogs produced by the analysis discussed in :ref:`halo-analysis`.
In the case where multiple files were produced, one need only provide the path
to a single one of them. The field type for all fields is "halos". The fields
available here are similar to other catalogs. Any additional
:ref:`halo_catalog_quantities` will also be accessible as fields.
+-------------------+---------------------------+
| HaloCatalog field | yt field name |
+===================+===========================+
| halo id | particle_identifier |
+-------------------+---------------------------+
| virial mass | particle_mass |
+-------------------+---------------------------+
| virial radius | virial_radius |
+-------------------+---------------------------+
| halo position | particle_position_(x,y,z) |
+-------------------+---------------------------+
| halo velocity | particle_velocity_(x,y,z) |
+-------------------+---------------------------+
.. code-block:: python
import yt
ds = yt.load("tiny_fof_halos/DD0046/DD0046.0.h5")
ad = ds.all_data()
# The halo mass
print(ad["halos", "particle_mass"])
Halo Data Containers
""""""""""""""""""""
Halo particles can be accessed by creating halo data containers with the
type of halo ("halos") and the halo id and then querying the "member_ids"
field. Halo containers have mass, radius, position, and velocity
attributes. Additional fields for which there will be one value per halo
can be accessed in the same manner as conventional data containers.
.. code-block:: python
halo = ds.halo("halos", 0)
# particles for this halo
print(halo["member_ids"])
# halo properties
print(halo.mass, halo.radius, halo.position, halo.velocity)
.. _loading-openpmd-data:
openPMD Data
------------
`openPMD <https://www.openpmd.org>`_ is an open source meta-standard and naming
scheme for mesh based data and particle data. It does not actually define a file
format.
HDF5-containers respecting the minimal set of meta information from
versions 1.0.0 and 1.0.1 of the standard are compatible.
Support for the ED-PIC extension is not available. Mesh data in cartesian coordinates
and particle data can be read by this frontend.
To load the first in-file iteration of an openPMD dataset using the standard HDF5
output format:
.. code-block:: python
import yt
ds = yt.load("example-3d/hdf5/data00000100.h5")
If you operate on large files, you may want to modify the virtual chunking behaviour through
``open_pmd_virtual_gridsize``. The supplied value is an estimate of the size of a single read request
for each particle attribute/mesh (in bytes).
.. code-block:: python
import yt
ds = yt.load("example-3d/hdf5/data00000100.h5", open_pmd_virtual_gridsize=10e4)
sp = yt.SlicePlot(ds, "x", ("openPMD", "rho"))
sp.show()
Particle data is fully supported:
.. code-block:: python
import yt
ds = yt.load("example-3d/hdf5/data00000100.h5")
ad = ds.all_data()
ppp = yt.ParticlePhasePlot(
ad,
("all", "particle_position_y"),
("all", "particle_momentum_y"),
("all", "particle_weighting"),
)
ppp.show()
.. rubric:: Caveats
* 1D, 2D and 3D data is compatible, but lower dimensional data might yield
strange results since it gets padded and treated as 3D. Extraneous dimensions are
set to be of length 1.0m and have a width of one cell.
* The frontend has hardcoded logic for renaming the openPMD ``position``
of particles to ``positionCoarse``.
.. _loading-pyne-data:
PyNE Data
---------
`PyNE <http://pyne.io/>`_ is an open source nuclear engineering toolkit
maintained by the PyNE development team ([email protected]).
PyNE meshes utilize the Mesh-Oriented datABase
`(MOAB) <https://press3.mcs.anl.gov/sigma/moab-library/>`_ and can be
Cartesian or tetrahedral. In addition to field data, pyne meshes store pyne
Material objects which provide a rich set of capabilities for nuclear
engineering tasks. PyNE Cartesian (Hex8) meshes are supported by yt.
To create a pyne mesh:
.. code-block:: python
from numpy import linspace
from pyne.mesh import Mesh
num_divisions = 50
coords = linspace(-1, 1, num_divisions)
m = Mesh(structured=True, structured_coords=[coords, coords, coords])
Field data can then be added:
.. code-block:: python
from pyne.mesh import IMeshTag
m.neutron_flux = IMeshTag()
# neutron_flux_data is a list or numpy array of size num_divisions^3
m.neutron_flux[:] = neutron_flux_data
Any field data or material data on the mesh can then be viewed just like any other yt dataset!
.. code-block:: python
import yt
pf = yt.frontends.moab.data_structures.PyneMoabHex8Dataset(m)
s = yt.SlicePlot(pf, "z", "neutron_flux")
s.display()
.. _loading-ramses-data:
RAMSES Data
-----------
In yt-4.x, RAMSES data is fully supported. If you are interested in taking a
development or stewardship role, please contact the yt-dev mailing list. To
load a RAMSES dataset, you can use the ``yt.load`` command and provide it
the ``info*.txt`` filename. For instance, if you were in a
directory with the following files:
.. code-block:: none
output_00007
output_00007/amr_00007.out00001
output_00007/grav_00007.out00001
output_00007/hydro_00007.out00001
output_00007/info_00007.txt
output_00007/part_00007.out00001
You would feed it the filename ``output_00007/info_00007.txt``:
.. code-block:: python
import yt
ds = yt.load("output_00007/info_00007.txt")
yt will attempt to guess the fields in the file. For more control over the hydro fields or the particle fields, see :ref:`loading-ramses-data-args`.
yt also supports the new way particles are handled, introduced after
version ``stable_17_09`` (the version introduced after the 2017 Ramses
User Meeting). In this case, the file ``part_file_descriptor.txt``
containing the different fields in the particle files will be read. If
you use a custom version of RAMSES, make sure this file is up-to-date
and reflects the true layout of the particles.
yt supports outputs made by the mainline ``RAMSES`` code as well as the
``RAMSES-RT`` fork. Files produced by ``RAMSES-RT`` are recognized as such
based on the presence of an ``info_rt_*.txt`` file in the output directory.
.. note::
For backward compatibility, particles from the
``part_XXXXX.outYYYYY`` files have the particle type ``io`` by
default (including dark matter, stars, tracer particles, ...). Sink
particles have the particle type ``sink``.
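As a quick illustration (a minimal sketch; the output path is illustrative and
the detected types depend on the dataset), you can inspect which particle types
yt found:

.. code-block:: python

import yt

ds = yt.load("output_00007/info_00007.txt")
# Particle types found on disk; outputs without family metadata expose "io"
print(ds.particle_types_raw)
ad = ds.all_data()
# Positions of the legacy "io" particles (dark matter, stars, tracers, ...)
print(ad["io", "particle_position"])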
.. _loading-ramses-data-args:
Arguments passed to the load function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to provide extra arguments to the load function when loading RAMSES datasets. Here is a list of the ones specific to RAMSES:
``fields``
A list of fields to read from the hydro files. For example, in a pure
hydro simulation with an extra custom field named ``my-awesome-field``, one
would specify the fields argument following this example:
.. code-block:: python
import yt
fields = [
"Density",
"x-velocity",
"y-velocity",
"z-velocity",
"Pressure",
"my-awesome-field",
]
ds = yt.load("output_00123/info_00123.txt", fields=fields)
"my-awesome-field" in ds.field_list # is True
``extra_particle_fields``
A list of tuples describing extra particle fields to read in. By
default, yt will try to detect as many fields as possible,
assuming the extra ones to be double precision floats. This
argument is useful if you have extra fields besides the particle mass,
position, and velocity fields that yt cannot detect automatically. For
example, for a dataset containing two extra particle integer fields named
``family`` and ``info``, one would do:
.. code-block:: python
import yt
extra_fields = [("family", "I"), ("info", "I")]
ds = yt.load("output_00001/info_00001.txt", extra_particle_fields=extra_fields)
# ('all', 'family') and ('all', 'info') now in ds.field_list
The format of the ``extra_particle_fields`` argument is as follows:
``[('field_name_1', 'type_1'), ..., ('field_name_n', 'type_n')]`` where
the second element of the tuple follows the `python struct format
convention
<https://docs.python.org/3.5/library/struct.html#format-characters>`_.
Note that if ``extra_particle_fields`` is defined, yt will not assume
that the ``particle_birth_time`` and ``particle_metallicity`` fields
are present in the dataset. If these fields are present, they must be
explicitly enumerated in the ``extra_particle_fields`` argument.
``cosmological``
Force yt to consider a simulation to be cosmological or
not. This may be useful for some specific simulations, e.g. ones that
run down to negative redshifts.
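For example (a minimal sketch; the output path is illustrative):

.. code-block:: python

import yt

# Force yt to treat this output as non-cosmological
ds = yt.load("output_00001/info_00001.txt", cosmological=False)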
``bbox``
The subbox to load. yt will only read CPUs intersecting with the
subbox. This is especially useful for large simulations or
zoom-in simulations, where you don't want to have access to data
outside of a small region of interest. This argument will prevent
yt from loading AMR files outside the subbox and will hence
spare memory and time.
For example, one could use
.. code-block:: python
import yt
# Only load a small cube of size (0.1)**3
bbox = [[0.0, 0.0, 0.0], [0.1, 0.1, 0.1]]
ds = yt.load("output_00001/info_00001.txt", bbox=bbox)
# See the note below for the following examples
ds.right_edge == [1, 1, 1] # is True
ad = ds.all_data()
ad["all", "particle_position_x"].max() > 0.1 # _may_ be True
bb = ds.box(left_edge=bbox[0], right_edge=bbox[1])
bb["all", "particle_position_x"].max() < 0.1 # is True
.. note::
When using the bbox argument, yt will read all the CPUs
intersecting with the subbox. However it may also read some
data *outside* the selected region. This is due to the fact
that domains have a complicated shape when using Hilbert
ordering. Internally, yt will hence assume the loaded dataset
covers the entire simulation. If you only want the data from
the selected region, you may want to use ``ds.box(...)``.
.. note::
The ``bbox`` feature is only available for datasets using
Hilbert ordering.
``max_level, max_level_convention``
This will set the deepest level to be read from file. Both arguments
have to be set, where the convention can be either "ramses" or "yt".
In the "ramses" convention, levels go from 1 (the root grid)
to levelmax, such that the finest cells have a size of ``boxsize/2**levelmax``.
In the "yt" convention, levels are numbered from 0 (the coarsest
uniform grid at RAMSES' ``levelmin``) to ``max_level``, such that
the finest cells are ``2**max_level`` smaller than the coarsest.
.. code-block:: python
import yt
# Assuming RAMSES' levelmin=6, i.e. the structure is full
# down to levelmin=6
ds_all = yt.load("output_00080/info_00080.txt")
ds_yt = yt.load("output_00080/info_00080.txt", max_level=2, max_level_convention="yt")
ds_ramses = yt.load(
"output_00080/info_00080.txt",
max_level=8,
max_level_convention="ramses",
)
any(ds_all.r["index", "grid_level"] > 2) # True
all(ds_yt.r["index", "grid_level"] <= 2) # True
all(ds_ramses.r["index", "grid_level"] <= 2) # True
Adding custom particle fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are three ways to make yt detect all the particle fields. For example, if you wish to make yt detect the birth time and metallicity of your particles, use one of these methods:
1. ``yt.load`` method. Whenever loading a dataset, add the extra particle fields as a keyword argument to the ``yt.load`` call.
.. code-block:: python
import yt
epf = [("particle_birth_time", "d"), ("particle_metallicity", "d")]
ds = yt.load("dataset", extra_particle_fields=epf)
("io", "particle_birth_time") in ds.derived_field_list # is True
("io", "particle_metallicity") in ds.derived_field_list # is True
2. yt config method. If you don't want to pass the arguments for each call of ``yt.load``, you can add the following to your configuration file:
.. code-block:: none
[ramses-particles]
fields = """
particle_position_x, d
particle_position_y, d
particle_position_z, d
particle_velocity_x, d
particle_velocity_y, d
particle_velocity_z, d
particle_mass, d
particle_identifier, i
particle_refinement_level, I
particle_birth_time, d
particle_metallicity, d
"""
Each line should contain the name of the field and its data type (``d`` for double precision, ``f`` for single precision, ``i`` for integer and ``l`` for long integer). You can also configure the auto-detected fields for fluid types by adding a section ``ramses-hydro``, ``ramses-grav`` or ``ramses-rt`` in the config file. For example, if you customized your gravity files so that they contain the potential, the potential at the previous timestep and the x, y and z accelerations, you can use:
.. code-block:: none
[ramses-grav]
fields = [ "Potential", "Potential-old", "x-acceleration", "y-acceleration", "z-acceleration" ]
3. New RAMSES way. Recent versions of RAMSES automatically write in their output a ``hydro_file_descriptor.txt`` file that gives information about which field is where. If you wish, you can simply create such a file in the folder containing the ``info_xxxxx.txt`` file:
.. code-block:: none
# version: 1
# ivar, variable_name, variable_type
1, position_x, d
2, position_y, d
3, position_z, d
4, velocity_x, d
5, velocity_y, d
6, velocity_z, d
7, mass, d
8, identity, i
9, levelp, i
10, birth_time, d
11, metallicity, d
Note that this file should not end with an empty line, but rather with the last field entry (here, ``11, metallicity, d``).
.. note::
The kind (``i``, ``d``, ``I``, ...) of each field follows the `Python struct format convention <https://docs.python.org/3.5/library/struct.html#format-characters>`_.
Customizing the particle type association
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In versions of RAMSES more recent than December 2017, particles carry
along a ``family`` array. The value of this array gives the kind of
the particle, e.g. 1 for dark matter. It is possible to customize the
association between particle type and family by customizing the yt
config (see :ref:`configuration-file`), adding
.. code-block:: none
[ramses-families]
gas_tracer = 100
star_tracer = 101
dm = 0
star = 1
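With such a mapping in place, the family-based particle types can be queried
directly. A minimal sketch, assuming the ``star`` and ``dm`` types defined
above and an illustrative output path:

.. code-block:: python

import yt

ds = yt.load("output_00080/info_00080.txt")
ad = ds.all_data()
# The particle types follow the family mapping from the configuration above
print(ad["star", "particle_mass"].in_units("Msun"))
print(ad["dm", "particle_mass"].in_units("Msun"))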
Particle ages and formation times
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For non-cosmological simulations, particle ages are stored in physical units on
disk. To access the birth time for the particles, use the
``particle_birth_time`` field. The time recorded in this field is relative to
the beginning of the simulation. Particles that were present in the initial
conditions will have negative values for ``particle_birth_time``.
For cosmological simulations that include star particles, RAMSES stores particle
formation times as conformal times. To access the formation time field data in
conformal units use the ``conformal_birth_time`` field. This will return the
formation times of particles in the simulation in conformal units as a
dimensionless array. To access the formation time in physical units, use the
``particle_birth_time`` field. Finally, to access the ages of star particles in
your simulation, use the ``star_age`` field. Note that this field is defined for
all particle types but will only make sense for star particles.
For simulations conducted in Newtonian coordinates, with no cosmology or
comoving expansion, the time is equal to zero at the beginning of the
simulation. That means that particles present in the initial conditions may have
negative birth times. This can happen, for example, in idealized isolated galaxy
simulations, where star particles are included in the initial conditions. For
simulations conducted in cosmological comoving units, the time is equal to zero
at the big bang, and all particles should have positive values for the
``particle_birth_time`` field.
To help clarify the above discussion, the following table describes the meaning
of the various particle formation time and age fields:
+------------------+--------------------------+--------------------------------+
| Simulation type | Field name | Description |
+==================+==========================+================================+
| cosmological | ``conformal_birth_time`` | Formation time in conformal |
| | | units (dimensionless) |
+------------------+--------------------------+--------------------------------+
| any | ``particle_birth_time`` | The time relative to the |
| | | beginning of the simulation |
| | | when the particle was formed. |
| | | For non-cosmological |
| | | simulations, this field will |
| | | have positive values for |
| | | particles formed during the |
| | | simulation and negative for |
| | | particles of finite age in the |
| | | initial conditions. For |
| | | cosmological simulations this |
| | | is the time the particle |
| | | formed relative to the big |
| | | bang, therefore the value of |
| | | this field should be between |
| | | 0 and 13.7 Gyr. |
+------------------+--------------------------+--------------------------------+
| any | ``star_age`` | Age of the particle. |
| | | Only physically meaningful for |
| | | stars and particles that |
| | | formed dynamically during the |
| | | simulation. |
+------------------+--------------------------+--------------------------------+
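As a brief, hedged sketch of querying these fields (the output path is
illustrative; on older datasets without star-particle metadata you may need the
``io`` type or a particle filter instead):

.. code-block:: python

import yt

ds = yt.load("output_00080/info_00080.txt")
ad = ds.all_data()
# Formation time relative to the simulation start (or the big bang, for cosmology)
print(ad["star", "particle_birth_time"].in_units("Myr"))
# Age of each star particle at the current snapshot time
print(ad["star", "star_age"].in_units("Myr"))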
RAMSES datasets produced by a version of the code newer than November 2017
contain the metadata necessary for yt to automatically distinguish between star
particles and other particle types. If you are working with a dataset produced
by a version of RAMSES older than November 2017, yt will only automatically
recognize a single particle type, ``io``. It may be convenient to define a particle
filter in your scripts to distinguish between particles present in the initial
conditions and particles that formed dynamically during the simulation by
filtering particles with ``"conformal_birth_time"`` values equal to zero and not
equal to zero. An example particle filter definition for dynamically formed
stars might look like this:
.. code-block:: python
import yt


@yt.particle_filter(requires=["conformal_birth_time"], filtered_type="io")
def stars(pfilter, data):
filter = data[pfilter.filtered_type, "conformal_birth_time"] != 0
return filter
For a cosmological simulation, this filter will distinguish between stars and
dark matter particles.
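Once defined, the filter must still be registered with a dataset before the new
``stars`` particle type can be queried. A minimal sketch (the output path is
illustrative):

.. code-block:: python

import yt

ds = yt.load("output_00080/info_00080.txt")
ds.add_particle_filter("stars")
ad = ds.all_data()
print(ad["stars", "particle_mass"].in_units("Msun").sum())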
.. _loading-sph-data:
SPH Particle Data
-----------------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
For all of the SPH frontends, yt uses cython-based SPH smoothing onto an
in-memory octree to create deposited mesh fields from individual SPH particle
fields.
This uses a standard M4 smoothing kernel and the ``smoothing_length``
field to calculate SPH sums, filling in the mesh fields. This gives you the
ability to both track individual particles (useful for tasks like following
contiguous clouds of gas that would require a clump finder in grid data) as
well as doing standard grid-based analysis (i.e. slices, projections, and profiles).
The ``smoothing_length`` variable is also useful for determining which particles
can interact with each other, since particles more distant than twice the
smoothing length do not typically see each other in SPH simulations. By
changing the value of the ``smoothing_length`` and then re-depositing particles
onto the grid, you can also effectively mimic what your data would look like at
lower resolution.
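As a short, hedged sketch of working with smoothed fields from an SPH snapshot
(the filename is illustrative; any supported SPH dataset will do):

.. code-block:: python

import yt

ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")
ad = ds.all_data()
# Per-particle smoothing lengths used in the SPH deposition
print(ad["gas", "smoothing_length"].in_units("kpc"))
# Smoothed fields can be visualized like any mesh-based dataset
yt.SlicePlot(ds, "z", ("gas", "density")).save()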
.. _loading-tipsy-data:
Tipsy Data
----------
.. note::
For more information about how yt indexes and reads particle data, see the
section :ref:`demeshening`.
See :ref:`tipsy-notebook` and :ref:`loading-sph-data` for more details.
yt also supports loading Tipsy data. Many of its characteristics are similar
to how Gadget data is loaded.
.. code-block:: python
import yt

ds = yt.load("./halo1e11_run1.00400")
.. _specifying-cosmology-tipsy:
Specifying Tipsy Cosmological Parameters and Setting Default Units
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Cosmological parameters can be specified to Tipsy to enable computation of
default units. For example, to load a Tipsy dataset whose path is stored in
the variable ``my_filename`` with specified cosmological parameters, do the
following:
.. code-block:: python
import yt

cosmology_parameters = {
"current_redshift": 0.0,
"omega_lambda": 0.728,
"omega_matter": 0.272,
"hubble_constant": 0.702,
}
ds = yt.load(my_filename, cosmology_parameters=cosmology_parameters)
If you wish to set the unit system directly, you can do so by using the
``unit_base`` keyword in the load statement.
.. code-block:: python
import yt
ds = yt.load(filename, unit_base={"length": (1.0, "Mpc")})
See the documentation for the
:class:`~yt.frontends.tipsy.data_structures.TipsyDataset` class for more
information.
Loading Cosmological Simulations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are not using a parameter file (i.e. non-Gasoline users), then you must
use the keyword ``cosmology_parameters`` when loading your dataset to indicate to
yt that it is a cosmological dataset. If you do not wish to set any
non-default cosmological parameters, you may pass an empty dictionary.
.. code-block:: python
import yt
ds = yt.load(filename, cosmology_parameters={})
.. _loading-cfradial-data:
CfRadial Data
-------------
Cf/Radial is a CF compliant netCDF convention for radial data from radar and
lidar platforms that supports both airborne and ground-based sensors. Because
of its CF-compliance, CfRadial will allow researchers familiar with CF to read
the data into a wide variety of analysis tools, models, etc. For more details, see the
`CfRadial documentation <https://github.com/NCAR/CfRadial/blob/d4562a995d0589cea41f4f6a4165728077c9fc9b/docs/CfRadialDoc.v1.4.20160801.pdf>`_.
yt provides support for loading cartesian-gridded CfRadial netcdf-4 files as
well as polar coordinate Cfradial netcdf-4 files. When loading a standard
CfRadial dataset in polar coordinates, yt will first build a sample on a
cartesian grid (see :ref:`cfradial_gridding`). To load a CfRadial data file:
.. code-block:: python
import yt
ds = yt.load("CfRadialGrid/grid1.nc")
.. _cfradial_gridding:
Gridding Behavior
^^^^^^^^^^^^^^^^^
When you load a CfRadial dataset in polar coordinates (elevation, azimuth and
range), yt will first build a sample by mapping the data onto a cartesian grid
using the Python-ARM Radar Toolkit (`pyart <https://github.com/ARM-DOE/pyart>`_).
Grid points are found by interpolation of all data points within a specified radius of influence.
This data, now in the x, y, z coordinate domain, is then saved as a new dataset, and subsequent
loads of the original native CfRadial dataset will use the gridded file.
Mapping the data from spherical to Cartesian coordinates is useful for 3D volume
rendering the data using yt.
See the documentation for the
:class:`~yt.frontends.cf_radial.data_structures.CFRadialDataset` class for a
description of how to adjust the gridding parameters and storage of the gridded
file.
This example creates a fake in-memory particle dataset and then loads it as a yt dataset using the `load_particles` function.
Our "fake" dataset will be numpy arrays filled with normally distributed randoml particle positions and uniform particle masses. Since real data is often scaled, I arbitrarily multiply by 1e6 to show how to deal with scaled data.
```
import numpy as np
n_particles = 5000000
ppx, ppy, ppz = 1e6 * np.random.normal(size=[3, n_particles])
ppm = np.ones(n_particles)
```
The `load_particles` function accepts a dictionary populated with particle data fields loaded in memory as numpy arrays or python lists:
```
data = {
"particle_position_x": ppx,
"particle_position_y": ppy,
"particle_position_z": ppz,
"particle_mass": ppm,
}
```
To hook up with yt's internal field system, the dictionary keys must be 'particle_position_x', 'particle_position_y', 'particle_position_z', and 'particle_mass'; any other particle field provided by one of the particle frontends may also be included.
The `load_particles` function transforms the `data` dictionary into an in-memory yt `Dataset` object, providing an interface for further analysis with yt. The example below illustrates how to load the data dictionary we created above.
```
import yt
from yt.units import Msun, parsec
bbox = 1.1 * np.array(
[[min(ppx), max(ppx)], [min(ppy), max(ppy)], [min(ppz), max(ppz)]]
)
ds = yt.load_particles(data, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox)
```
The `length_unit` and `mass_unit` are the conversion from the units used in the `data` dictionary to CGS. I've arbitrarily chosen one parsec and 10^8 Msun for this example.
The `n_ref` parameter controls how many particles it takes to accumulate in an oct-tree cell to trigger refinement. A larger `n_ref` will decrease Poisson noise at the cost of resolution in the octree.
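For instance, a minimal sketch of passing `n_ref` explicitly, reusing the arrays defined above (the value 64 and the name `ds_coarse` are arbitrary):
```
ds_coarse = yt.load_particles(
    data, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, bbox=bbox, n_ref=64
)
```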
Finally, the `bbox` parameter is a bounding box in the units of the dataset that contains all of the particles. This is used to set the size of the base octree block.
This new dataset acts like any other yt `Dataset` object, and can be used to create data objects and query for yt fields. This example shows how to access "deposit" fields:
```
ad = ds.all_data()
# This is generated with "cloud-in-cell" interpolation.
cic_density = ad["deposit", "all_cic"]
# These three are based on nearest-neighbor cell deposition
nn_density = ad["deposit", "all_density"]
nn_deposited_mass = ad["deposit", "all_mass"]
particle_count_per_cell = ad["deposit", "all_count"]
ds.field_list
ds.derived_field_list
slc = yt.SlicePlot(ds, 2, ("deposit", "all_cic"))
slc.set_width((8, "Mpc"))
```
Finally, one can specify multiple particle types in the `data` dictionary by setting the field names to be field tuples; if no particle type is specified, the default is `"io"`:
```
n_star_particles = 1000000
n_dm_particles = 2000000
ppxd, ppyd, ppzd = 1e6 * np.random.normal(size=[3, n_dm_particles])
ppmd = np.ones(n_dm_particles)
ppxs, ppys, ppzs = 5e5 * np.random.normal(size=[3, n_star_particles])
ppms = 0.1 * np.ones(n_star_particles)
data2 = {
("dm", "particle_position_x"): ppxd,
("dm", "particle_position_y"): ppyd,
("dm", "particle_position_z"): ppzd,
("dm", "particle_mass"): ppmd,
("star", "particle_position_x"): ppxs,
("star", "particle_position_y"): ppys,
("star", "particle_position_z"): ppzs,
("star", "particle_mass"): ppms,
}
ds2 = yt.load_particles(
data2, length_unit=1.0 * parsec, mass_unit=1e8 * Msun, n_ref=256, bbox=bbox
)
```
We now have separate `"dm"` and `"star"` particles, as well as their deposited fields:
```
slc = yt.SlicePlot(ds2, 2, [("deposit", "dm_cic"), ("deposit", "star_cic")])
slc.set_width((8, "Mpc"))
```
import matplotlib.pyplot as plt
import numpy as np
import yt
"""
Make a turbulent KE power spectrum. Since we are stratified, we use
a rho**(1/3) scaling to the velocity to get something that would
look Kolmogorov (if the turbulence were fully developed).
Ultimately, we aim to compute:

    E(k) = integral (1/2) * Vhat(k) . Vhat*(k) dS

where V = rho**n * U is the density-weighted velocity field (here n = 1/3)
and Vhat is the FFT of V.
(Note: sometimes we normalize by 1/volume to get a spectral
energy density spectrum).
"""
def doit(ds):
# a FFT operates on uniformly gridded data. We'll use the yt
# covering grid for this.
max_level = ds.index.max_level
ref = int(np.prod(ds.ref_factors[0:max_level]))
low = ds.domain_left_edge
dims = ds.domain_dimensions * ref
nx, ny, nz = dims
nindex_rho = 1.0 / 3.0
Kk = np.zeros((nx // 2 + 1, ny // 2 + 1, nz // 2 + 1))
for vel in [("gas", "velocity_x"), ("gas", "velocity_y"), ("gas", "velocity_z")]:
Kk += 0.5 * fft_comp(
ds, ("gas", "density"), vel, nindex_rho, max_level, low, dims
)
# wavenumbers
L = (ds.domain_right_edge - ds.domain_left_edge).d
kx = np.fft.rfftfreq(nx) * nx / L[0]
ky = np.fft.rfftfreq(ny) * ny / L[1]
kz = np.fft.rfftfreq(nz) * nz / L[2]
# physical limits to the wavenumbers
kmin = np.min(1.0 / L)
kmax = np.min(0.5 * dims / L)
kbins = np.arange(kmin, kmax, kmin)
N = len(kbins)
# bin the Fourier KE into radial kbins
kx3d, ky3d, kz3d = np.meshgrid(kx, ky, kz, indexing="ij")
k = np.sqrt(kx3d**2 + ky3d**2 + kz3d**2)
whichbin = np.digitize(k.flat, kbins)
ncount = np.bincount(whichbin)
E_spectrum = np.zeros(len(ncount) - 1)
for n in range(1, len(ncount)):
E_spectrum[n - 1] = np.sum(Kk.flat[whichbin == n])
k = 0.5 * (kbins[0 : N - 1] + kbins[1:N])
E_spectrum = E_spectrum[1:N]
index = np.argmax(E_spectrum)
kmax = k[index]
Emax = E_spectrum[index]
plt.loglog(k, E_spectrum)
plt.loglog(k, Emax * (k / kmax) ** (-5.0 / 3.0), ls=":", color="0.5")
plt.xlabel(r"$k$")
plt.ylabel(r"$E(k)dk$")
plt.savefig("spectrum.png")
def fft_comp(ds, irho, iu, nindex_rho, level, low, delta):
cube = ds.covering_grid(level, left_edge=low, dims=delta, fields=[irho, iu])
rho = cube[irho].d
u = cube[iu].d
nx, ny, nz = rho.shape
# do the FFTs -- note that since our data is real, there will be
# too much information here. fftn puts the positive freq terms in
# the first half of the axes -- that's what we keep. Our
# normalization has an '8' to account for this clipping to one
# octant.
ru = np.fft.fftn(rho**nindex_rho * u)[
0 : nx // 2 + 1, 0 : ny // 2 + 1, 0 : nz // 2 + 1
]
ru = 8.0 * ru / (nx * ny * nz)
return np.abs(ru) ** 2
ds = yt.load("maestro_xrb_lores_23437")
doit(ds)
import numpy as np
import yt
from yt.data_objects.particle_filters import add_particle_filter
# Define filter functions for our particle filters based on stellar age.
# In this dataset particles in the initial conditions are given creation
# times arbitrarily far into the future, so stars with negative ages belong
# in the old stars filter.
def stars_10Myr(pfilter, data):
age = data.ds.current_time - data["Stars", "creation_time"]
filter = np.logical_and(age >= 0, age.in_units("Myr") < 10)
return filter
def stars_100Myr(pfilter, data):
age = (data.ds.current_time - data["Stars", "creation_time"]).in_units("Myr")
filter = np.logical_and(age >= 10, age < 100)
return filter
def stars_old(pfilter, data):
age = data.ds.current_time - data["Stars", "creation_time"]
filter = np.logical_or(age < 0, age.in_units("Myr") >= 100)
return filter
# Create the particle filters
add_particle_filter(
"stars_young",
function=stars_10Myr,
filtered_type="Stars",
requires=["creation_time"],
)
add_particle_filter(
"stars_medium",
function=stars_100Myr,
filtered_type="Stars",
requires=["creation_time"],
)
add_particle_filter(
"stars_old", function=stars_old, filtered_type="Stars", requires=["creation_time"]
)
# Load a dataset and apply the particle filters
filename = "TipsyGalaxy/galaxy.00300"
ds = yt.load(filename)
ds.add_particle_filter("stars_young")
ds.add_particle_filter("stars_medium")
ds.add_particle_filter("stars_old")
# What are the total masses of different ages of star in the whole simulation
# volume?
ad = ds.all_data()
mass_young = ad["stars_young", "particle_mass"].in_units("Msun").sum()
mass_medium = ad["stars_medium", "particle_mass"].in_units("Msun").sum()
mass_old = ad["stars_old", "particle_mass"].in_units("Msun").sum()
print(f"Mass of young stars = {mass_young:g}")
print(f"Mass of medium stars = {mass_medium:g}")
print(f"Mass of old stars = {mass_old:g}")
# Generate 4 projections: gas density, young stars, medium stars, old stars
fields = [
("stars_young", "particle_mass"),
("stars_medium", "particle_mass"),
("stars_old", "particle_mass"),
]
prj1 = yt.ProjectionPlot(ds, "z", ("gas", "density"), center="max", width=(100, "kpc"))
prj1.save()
prj2 = yt.ParticleProjectionPlot(ds, "z", fields, center="max", width=(100, "kpc"))
prj2.save()
# In this example we will show how to use the AMRKDTree to take a simulation
# with 8 levels of refinement and only use levels 0-3 to render the dataset.
# Currently this cookbook is flawed in that the data that is covered by the
# higher resolution data gets masked during the rendering. This should be
# fixed by changing either the data source or the code in
# yt/utilities/amr_kdtree.py where data is being masked for the partitioned
# grid. Right now the quick fix is to create a data_collection, but this
# will only work for patch based simulations that have ds.index.grids.
# We begin by loading up yt, and importing the AMRKDTree
import numpy as np
import yt
from yt.utilities.amr_kdtree.api import AMRKDTree
# Load up a dataset and define the kdtree
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
im, sc = yt.volume_render(ds, ("gas", "density"), fname="v0.png")
sc.camera.set_width(ds.arr(100, "kpc"))
render_source = sc.get_source()
kd = render_source.volume
# Print out specifics of KD Tree
print("Total volume of all bricks = %i" % kd.count_volume())
print("Total number of cells = %i" % kd.count_cells())
new_source = ds.all_data()
new_source.max_level = 3
kd_low_res = AMRKDTree(ds, data_source=new_source)
print(kd_low_res.count_volume())
print(kd_low_res.count_cells())
# Now we pass this in as the volume to our camera, and render the snapshot
# again.
render_source.set_volume(kd_low_res)
render_source.set_field(("gas", "density"))
sc.save("v1.png", sigma_clip=6.0)
# This operation was substantially faster. Now lets modify the low resolution
# rendering until we find something we like.
tf = render_source.transfer_function
tf.clear()
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
sc.save("v2.png", sigma_clip=6.0)
# This looks better. Now let's try turning on opacity.
tf.grey_opacity = True
sc.save("v3.png", sigma_clip=6.0)
#
## That seemed to pick out some interesting structures. Now let's bump up the
## opacity.
#
tf.clear()
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=10.0 * np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=10.0 * np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
sc.save("v4.png", sigma_clip=6.0)
#
## This looks pretty good, now lets go back to the full resolution AMRKDTree
#
render_source.set_volume(kd)
sc.save("v5.png", sigma_clip=6.0)
# This looks great!
.. _notebook-tutorial:
Notebook Tutorial
-----------------
The IPython notebook is a powerful system for literate coding - a style of
writing code that embeds input, output, and explanatory text into one document.
yt has deep integration with the IPython notebook, explained in-depth in the
other example notebooks and the rest of the yt documentation. This page is here
to give a brief introduction to the notebook itself.
To start the notebook, enter the following command at the bash command line:
.. code-block:: bash
$ ipython notebook
Depending on your default web browser and system setup this will open a web
browser and direct you to the notebook dashboard. If it does not, you might
need to connect to the notebook manually. See the `IPython documentation
<http://ipython.org/ipython-doc/stable/notebook/notebook.html#starting-the-notebook-server>`_
for more details.
For the notebook tutorial, we rely on example notebooks that are part of the
IPython documentation. We link to static nbviewer versions of the 'evaluated'
versions of these example notebooks. If you would like to run them locally on
your own computer, simply download the notebook by clicking the 'Download
Notebook' link in the top right corner of each page.
1. `IPython Notebook Tutorials <https://nbviewer.jupyter.org/github/ipython/ipython/blob/master/examples/Index.ipynb>`_
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import yt
from yt.visualization.api import Streamlines
# Load the dataset
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Define c: the center of the box, N: the number of streamlines,
# scale: the spatial scale of the streamlines relative to the boxsize,
# and then pos: the random positions of the streamlines.
c = ds.arr([0.5] * 3, "code_length")
N = 30
scale = ds.quan(15, "kpc").in_units("code_length") # 15 kpc in code units
pos_dx = np.random.random((N, 3)) * scale - scale / 2.0
pos = c + pos_dx
# Create the streamlines from these positions with the velocity fields as the
# fields to be traced
streamlines = Streamlines(
ds,
pos,
("gas", "velocity_x"),
("gas", "velocity_y"),
("gas", "velocity_z"),
length=1.0,
)
streamlines.integrate_through_volume()
# Create a 3D matplotlib figure for visualizing the streamlines
fig = plt.figure()
ax = Axes3D(fig, auto_add_to_figure=False)
fig.add_axes(ax)
# Trace the streamlines through the volume of the 3D figure
for stream in streamlines.streamlines:
stream = stream[np.all(stream != 0.0, axis=1)]
# Make the colors of each stream vary continuously from blue to red
# from low-x to high-x of the stream start position (each color is R, G, B)
# can omit and just set streamline colors to a fixed color
x_start_pos = ds.arr(stream[0, 0], "code_length")
x_start_pos -= ds.arr(0.5, "code_length")
x_start_pos /= scale
x_start_pos += 0.5
color = np.array([x_start_pos, 0, 1 - x_start_pos])
# Plot the stream in 3D
ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.3, color=color)
# Create a sphere object centered on the highest density point in the simulation
# with radius = 1 Mpc
sphere = ds.sphere("max", (1.0, "Mpc"))
# Identify the isodensity surface in this sphere with density = 1e-24 g/cm^3
surface = ds.surface(sphere, ("gas", "density"), 1e-24)
# Color this isodensity surface according to the log of the temperature field
colors = yt.apply_colormap(np.log10(surface[("gas", "temperature")]), cmap_name="hot")
# Render this surface
p3dc = Poly3DCollection(surface.triangles, linewidth=0.0)
colors = colors[0, :, :] / 255.0 # scale to [0,1]
colors[:, 3] = 0.3 # alpha = 0.3
p3dc.set_facecolors(colors)
ax.add_collection(p3dc)
# Save the figure
plt.savefig("streamlines_isocontour.png")
## Loading the data
First we set up our imports:
```
import numpy as np
import yt
```
Next, we load the dataset, specifying the unit length/mass/velocity as well as the size of the bounding box (which should encapsulate all the particles in the dataset).
At the end, we flatten the data into "ad" in case we want access to the raw simulation data
>This dataset is available for download at https://yt-project.org/data/GadgetDiskGalaxy.tar.gz (430 MB).
```
fname = "GadgetDiskGalaxy/snapshot_200.hdf5"
unit_base = {
"UnitLength_in_cm": 3.08568e21,
"UnitMass_in_g": 1.989e43,
"UnitVelocity_in_cm_per_s": 100000,
}
bbox_lim = 1e5 # kpc
bbox = [[-bbox_lim, bbox_lim], [-bbox_lim, bbox_lim], [-bbox_lim, bbox_lim]]
ds = yt.load(fname, unit_base=unit_base, bounding_box=bbox)
ds.index
ad = ds.all_data()
```
Let's make a projection plot to look at the entire volume
```
px = yt.ProjectionPlot(ds, "x", ("gas", "density"))
px.show()
```
Let's print some quantities about the domain, as well as the physical properties of the simulation
```
print("left edge: ", ds.domain_left_edge)
print("right edge: ", ds.domain_right_edge)
print("center: ", ds.domain_center)
```
We can also see the fields that are available to query in the dataset
```
sorted(ds.field_list)
```
Let's create a data object that represents the full simulation domain, and find the total mass in gas and dark matter particles contained in it:
```
ad = ds.all_data()
# total_mass returns a list, representing the total gas and dark matter + stellar mass, respectively
print([tm.in_units("Msun") for tm in ad.quantities.total_mass()])
```
Now let's say we want to zoom in on the box (since clearly the bounding box we chose initially is much larger than the volume containing the gas particles!), and center on wherever the highest gas density peak is. First, let's find this peak:
```
density = ad["PartType0", "density"]
wdens = np.where(density == np.max(density))
coordinates = ad["PartType0", "Coordinates"]
center = coordinates[wdens][0]
print("center = ", center)
```
Set up the box to zoom into
```
new_box_size = ds.quan(250, "code_length")
left_edge = center - new_box_size / 2
right_edge = center + new_box_size / 2
print(new_box_size.in_units("Mpc"))
print(left_edge.in_units("Mpc"))
print(right_edge.in_units("Mpc"))
ad2 = ds.region(center=center, left_edge=left_edge, right_edge=right_edge)
```
Using this new data object, let's confirm that we're only looking at a subset of the domain by first calculating the total mass in gas and particles contained in the subvolume:
```
print([tm.in_units("Msun") for tm in ad2.quantities.total_mass()])
```
And then by visualizing what the new zoomed region looks like
```
px = yt.ProjectionPlot(ds, "x", ("gas", "density"), center=center, width=new_box_size)
px.show()
```
Cool - there's a disk galaxy there!
Calculating Dataset Information
-------------------------------
These recipes demonstrate methods of calculating quantities in a simulation,
either for later visualization or for understanding properties of fluids and
particles in the simulation.
Average Field Value
~~~~~~~~~~~~~~~~~~~
This recipe is a very simple method of calculating the global average of a
given field, as weighted by another field.
See :ref:`derived-quantities` for more information.
.. yt_cookbook:: average_value.py
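For reference, a condensed sketch of the same idea (the dataset name is
illustrative):

.. code-block:: python

import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
ad = ds.all_data()
# Density-weighted average of the gas temperature over the whole domain
print(ad.quantities.weighted_average_quantity(("gas", "temperature"), ("gas", "density")))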
Mass Enclosed in a Sphere
~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe constructs a sphere and then sums the total mass in particles and
fluids in the sphere.
See :ref:`available-objects` and :ref:`derived-quantities` for more information.
.. yt_cookbook:: sum_mass_in_sphere.py
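A condensed sketch of the same idea (the dataset name is illustrative):

.. code-block:: python

import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("max", (1.0, "Mpc"))
# Total gas mass and total particle mass inside the sphere
print(sp.quantities.total_quantity([("gas", "mass"), ("all", "particle_mass")]))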
Global Phase Plot
~~~~~~~~~~~~~~~~~
This is a simple recipe to show how to open a dataset and then plot a couple
global phase diagrams, save them, and quit.
See :ref:`how-to-make-2d-profiles` for more information.
.. yt_cookbook:: global_phase_plots.py
.. _cookbook-radial-velocity:
Radial Velocity Profile
~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to subtract off a bulk velocity on a sphere before
calculating the radial velocity within that sphere.
See :ref:`how-to-make-1d-profiles` for more information on creating profiles and
:ref:`field_parameters` for an explanation of how the bulk velocity is provided
to the radial velocity field function.
.. yt_cookbook:: rad_velocity.py
Simulation Analysis
~~~~~~~~~~~~~~~~~~~
This uses :class:`~yt.data_objects.time_series.DatasetSeries` to
calculate the extrema of a series of outputs, whose names it guesses in
advance. This will run in parallel and take advantage of multiple MPI tasks.
See :ref:`parallel-computation` and :ref:`time-series-analysis` for more
information.
.. yt_cookbook:: simulation_analysis.py
.. _cookbook-time-series-analysis:
Time Series Analysis
~~~~~~~~~~~~~~~~~~~~
This recipe shows how to calculate a number of quantities on a set of parameter
files. Note that it is parallel-aware, and that if you only wanted to run in
serial, the operation ``for pf in ts:`` would also have worked identically.
See :ref:`parallel-computation` and :ref:`time-series-analysis` for more
information.
.. yt_cookbook:: time_series.py
.. _cookbook-simple-derived-fields:
Simple Derived Fields
~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to create a simple derived field,
``thermal_energy_density``, and then generate a projection from it.
See :ref:`creating-derived-fields` and :ref:`projection-plots` for more
information.
.. yt_cookbook:: derived_field.py
.. _cookbook-complicated-derived-fields:
Complicated Derived Fields
~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to use the
:meth:`~yt.frontends.flash.data_structures.FLASHDataset.add_gradient_fields` method
to generate gradient fields and use them in a more complex derived field.
.. yt_cookbook:: hse_field.py
Using Particle Filters to Calculate Star Formation Rates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to use a particle filter to calculate the star
formation rate in a galaxy evolution simulation.
See :ref:`filtering-particles` for more information.
.. yt_cookbook:: particle_filter_sfr.py
Making a Turbulent Kinetic Energy Power Spectrum
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe shows how to use ``yt`` to read data and put it on a uniform
grid to interface with the NumPy FFT routines and create a turbulent
kinetic energy power spectrum. (Note: the dataset used here is of low
resolution, so the turbulence is not very well-developed. The spike
at high wavenumbers is due to non-periodicity in the z-direction).
.. yt_cookbook:: power_spectrum_example.py
Downsampling an AMR Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe shows how to use the ``max_level`` attribute of a yt data
object to only select data up to a maximum AMR level.
.. yt_cookbook:: downsampling_amr.py
import numpy as np
from matplotlib.colors import LogNorm
import yt
from yt.visualization.api import get_multi_plot
fn = "Enzo_64/RD0006/RedshiftOutput0006" # dataset to load
# load data and get center value and center location as maximum density location
ds = yt.load(fn)
v, c = ds.find_max(("gas", "density"))
# set up our Fixed Resolution Buffer parameters: a width, resolution, and center
width = (1.0, "unitary")
res = [1000, 1000]
# get_multi_plot returns a containing figure, a list-of-lists of axes
# into which we can place plots, and some axes that we'll put
# colorbars.
# it accepts: # of x-axis plots, # of y-axis plots, and how the
# colorbars are oriented (this also determines where they go: below
# in the case of 'horizontal', on the right in the case of
# 'vertical'), bw is the base-width in inches (4 is about right for
# most cases)
orient = "horizontal"
fig, axes, colorbars = get_multi_plot(2, 3, colorbar=orient, bw=6)
# Now we follow the method of "multi_plot.py" but we're going to iterate
# over the columns, which will become axes of slicing.
plots = []
for axis in range(3):
sli = ds.slice(axis, c[axis])
frb = sli.to_frb(width, res)
den_axis = axes[axis][0]
temp_axis = axes[axis][1]
# here, we turn off the axes labels and ticks, but you could
# customize further.
for ax in (den_axis, temp_axis):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
# converting our fixed resolution buffers to NDarray so matplotlib can
# render them
dens = np.array(frb[("gas", "density")])
temp = np.array(frb[("gas", "temperature")])
plots.append(den_axis.imshow(dens, norm=LogNorm()))
plots[-1].set_clim((5e-32, 1e-29))
plots[-1].set_cmap("bds_highcontrast")
plots.append(temp_axis.imshow(temp, norm=LogNorm()))
plots[-1].set_clim((1e3, 1e8))
plots[-1].set_cmap("hot")
# Each 'cax' is a colorbar-container, into which we'll put a colorbar.
# the zip command creates triples from each element of the three lists
# . Note that it cuts off after the shortest iterator is exhausted,
# in this case, titles.
titles = [
r"$\mathrm{density}\ (\mathrm{g\ cm^{-3}})$",
r"$\mathrm{temperature}\ (\mathrm{K})$",
]
for p, cax, t in zip(plots, colorbars, titles):
# Now we make a colorbar, using the 'image' we stored in plots
# above. note this is what is *returned* by the imshow method of
# the plots.
cbar = fig.colorbar(p, cax=cax, orientation=orient)
cbar.set_label(t)
# And now we're done!
fig.savefig(f"{ds}_3x2.png")
import numpy as np
import yt
from yt.data_objects.level_sets.api import Clump, find_clumps
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
data_source = ds.disk([0.5, 0.5, 0.5], [0.0, 0.0, 1.0], (8, "kpc"), (1, "kpc"))
# the field to be used for contouring
field = ("gas", "density")
# This is the multiplicative interval between contours.
step = 2.0
# Now we set some sane min/max values between which we want to find contours.
# This is how we tell the clump finder what to look for -- it won't look for
# contours connected below or above these threshold values.
c_min = 10 ** np.floor(np.log10(data_source[field]).min())
c_max = 10 ** np.floor(np.log10(data_source[field]).max() + 1)
# Now get our 'base' clump -- this one just covers the whole domain.
master_clump = Clump(data_source, field)
# Add a "validator" to weed out clumps with less than 20 cells.
# As many validators as you want can be added.
master_clump.add_validator("min_cells", 20)
# Calculate center of mass for all clumps.
master_clump.add_info_item("center_of_mass")
# Begin clump finding.
find_clumps(master_clump, c_min, c_max, step)
# Save the clump tree as a reloadable dataset
fn = master_clump.save_as_dataset(fields=[("gas", "density"), ("all", "particle_mass")])
# We can traverse the clump hierarchy to get a list of all of the 'leaf' clumps
leaf_clumps = master_clump.leaves
# Get total cell and particle masses for each leaf clump
leaf_masses = [leaf.quantities.total_mass() for leaf in leaf_clumps]
# If you'd like to visualize these clumps, a list of clumps can be supplied to
# the "clumps" callback on a plot. First, we create a projection plot:
prj = yt.ProjectionPlot(ds, 2, field, center="c", width=(20, "kpc"))
# Next we annotate the plot with contours on the borders of the clumps
prj.annotate_clumps(leaf_clumps)
# Save the plot to disk.
prj.save("clumps")
# Reload the clump dataset.
cds = yt.load(fn)
# Clump annotation can also be done with the reloaded clump dataset.
# Remove the original clump annotation
prj.clear_annotations()
# Get the leaves and add the callback.
leaf_clumps_reloaded = cds.leaves
prj.annotate_clumps(leaf_clumps_reloaded)
prj.save("clumps_reloaded")
# Query fields for clumps in the tree.
print(cds.tree["clump", "center_of_mass"])
print(cds.tree.children[0]["grid", "density"])
print(cds.tree.children[1]["all", "particle_mass"])
# Get all of the leaf clumps.
print(cds.leaves)
print(cds.leaves[0]["clump", "cell_mass"])
# Geographic Transforms and Projections
### Loading the GEOS data
For this analysis we'll be loading some global climate data into yt. A frontend does not exist for this dataset yet, so we'll load it in as a uniform grid with netcdf4.
```
import os
import re
import netCDF4 as nc4
import numpy as np
import yt
def get_data_path(arg):
if os.path.exists(arg):
return arg
else:
return os.path.join(yt.config.ytcfg.get("yt", "test_data_dir"), arg)
n = nc4.Dataset(get_data_path("geos/GEOS.fp.asm.inst3_3d_aer_Nv.20180822_0900.V01.nc4"))
```
Using the loaded data we'll fill arrays with the data dimensions and limits. We'll also rename `vertical level` to `altitude` to be clearer.
```
dims = []
sizes = []
bbox = []
ndims = len(n.dimensions)
for dim in n.dimensions.keys():
size = n.variables[dim].size
if size > 1:
bbox.append([n.variables[dim][:].min(), n.variables[dim][:].max()])
dims.append(n.variables[dim].long_name)
sizes.append(size)
dims.reverse() # Fortran ordering
sizes.reverse()
bbox.reverse()
dims = [f.replace("vertical level", "altitude") for f in dims]
bbox = np.array(bbox)
```
We'll also load the data into a container dictionary and create a lookup for the short to the long names
```
w_regex = re.compile(r"([a-zA-Z]+)(.*)")
def regex_parser(s):
try:
return "**".join(filter(None, w_regex.search(s).groups()))
except AttributeError:
return s
data = {}
names = {}
for field, d in n.variables.items():
if d.ndim != ndims:
continue
units = n.variables[field].units
units = " * ".join(map(regex_parser, units.split()))
data[field] = (np.squeeze(d), str(units))
names[field] = n.variables[field].long_name.replace("_", " ")
```
Now the data can be loaded with yt's `load_uniform_grid` function. We also need to say that the geometry is a `geographic` type. This will ensure that the axes created are matplotlib GeoAxes and that the transform functions are available to use for projections.
```
ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox)
```
### Default projection with geographic geometry
Now that the data is loaded, we can plot it with a yt SlicePlot along the altitude. This will create a figure with latitude and longitude as the plot axes and the colormap will correspond to the air density. Because no projection type has been set, the geographic geometry type assumes that the data is of the `PlateCarree` form. The resulting figure will be a `Mollweide` plot.
```
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p.show()
```
Note that this doesn't have a lot of contextual information. We can add annotations for the coastlines just as we would with matplotlib. Before the annotations are set, we need to call `p._setup_plots` to make the axes available for annotation.
```
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
### Using geographic transforms to project data
If a projection other than the default `Mollweide` is desired, we can pass an argument to the `set_mpl_projection()` function. This will set the projection to a Robinson projection.
```
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p.set_mpl_projection("Robinson")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
`set_mpl_projection()` accepts a string or a 2- or 3-element sequence describing the projection; the second item in the sequence is the args and the third item is the kwargs. This can be used for further customization of the projection.
```
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p.set_mpl_projection(("Robinson", (37.5,)))
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
We don't actually need to keep creating a SlicePlot to change the projection type. We can use the function `set_mpl_projection()` and pass in a string of the transform type that we desire after an existing `SlicePlot` instance has been created. This will set the figure to an `Orthographic` projection.
```
p.set_mpl_projection("Orthographic")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
`set_mpl_projection()` can be used in a number of ways to customize the projection type.
* If a **string** is passed, then the string must correspond to the transform name, which is exclusively cartopy transforms at this time. This looks like: `set_mpl_projection('ProjectionType')`
* If a **tuple** is passed, the first item of the tuple is a string of the transform name and the second two items are args and kwargs. These can be used to further customize the transform (by setting the latitude and longitude, for example. This looks like:
* `set_mpl_projection(('ProjectionType', (args)))`
* `set_mpl_projection(('ProjectionType', (args), {kwargs}))`
* A **transform object** can also be passed. This can be any transform type -- a cartopy transform or a matplotlib transform. This allows users to either pass the same transform object around between plots or define their own transform and use that in yt's plotting functions. With a standard cartopy transform, this would look like:
* `set_mpl_projection(cartopy.crs.PlateCarree())`
To summarize:
The function `set_mpl_projection` can take one of several input types:
* `set_mpl_projection('ProjectionType')`
* `set_mpl_projection(('ProjectionType', (args)))`
* `set_mpl_projection(('ProjectionType', (args), {kwargs}))`
* `set_mpl_projection(cartopy.crs.MyTransform())`
For example, we can make the same Orthographic projection and pass in the central latitude and longitude for the projection:
```
p.set_mpl_projection(("Orthographic", (90, 45)))
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
Or we can pass in the arguments to this function as kwargs by passing a three element tuple.
```
p.set_mpl_projection(
("Orthographic", (), {"central_latitude": -45, "central_longitude": 275})
)
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
### A few examples of different projections
This next section will show a few of the different projections that one can use. This isn't meant to be complete, but it'll give you a visual idea of how these transforms can be used to illustrate geographic data for different purposes.
```
p.set_mpl_projection(("RotatedPole", (177.5, 37.5)))
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection(
("RotatedPole", (), {"pole_latitude": 37.5, "pole_longitude": 177.5})
)
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection("NorthPolarStereo")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection("AlbersEqualArea")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection("InterruptedGoodeHomolosine")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection("Robinson")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
p.set_mpl_projection("Gnomonic")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
### Modifying the data transform
While the data projection modifies how the data is displayed in our plot, the data transform describes the coordinate system that the data is actually described by. By default, the data is assumed to have a `PlateCarree` data transform. If you would like to change this, you can access the dictionary in the coordinate handler and set it to something else. The dictionary is structured such that each axis has its own default transform, so be sure to set the axis you intend to change. This next example changes the transform to a Miller type. Because our data is not in Miller coordinates, it will be skewed.
```
ds.coordinates.data_transform["altitude"] = "Miller"
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
Because the transform type shouldn't change as we make subsequent figures, once it is changed it will be the same for all other figures made with the same dataset object. Note that this particular dataset is not actually in a Miller system, which is why the data now doesn't span the entire globe. Setting the new projection to Robinson results in Miller-skewed data in our next figure.
```
p.set_mpl_projection("Robinson")
p._setup_plots()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
```
## Loading Files
Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load_sample``` convenience function. yt will autodetect that you want a tipsy snapshot and download it from the yt hub.
```
import yt
```
We will be looking at a fairly low resolution dataset.
>This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
```
ds = yt.load_sample("TipsyGalaxy")
```
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
```
ds.field_list
```
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
```
ds.derived_field_list
```
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ad = ds.all_data()
xcoord = ad["Gas", "Coordinates"][:, 0].v
ycoord = ad["Gas", "Coordinates"][:, 1].v
logT = np.log10(ad["Gas", "Temperature"])
plt.scatter(
xcoord, ycoord, c=logT, s=2 * logT, marker="o", edgecolor="none", vmin=2, vmax=6
)
plt.xlim(-20, 20)
plt.ylim(-20, 20)
cb = plt.colorbar()
cb.set_label(r"$\log_{10}$ Temperature")
plt.gcf().set_size_inches(15, 10)
```
## Making Smoothed Images
yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
```
yt.SlicePlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m")
yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m")
```
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
```
yt.SlicePlot(ds, "z", ("gas", "Fe_fraction"), width=(40, "kpc"), center="m")
```
.. _cookbook:
The Cookbook
============
yt provides a great deal of functionality to the user, but sometimes it can
be a bit complex. This section of the documentation lays out example recipes
for how to do a variety of tasks. Most of the early, simple code
demonstrations are small scripts which you can easily copy and paste into
your own code; however, as we move to more complex tasks, the recipes move to
iPython notebooks to display intermediate steps. All of these recipes are
available for download in a link next to the recipe.
Getting the Sample Data
-----------------------
All of the data used in the cookbook is freely available
`here <https://yt-project.org/data/>`_, where you will find links to download
individual datasets.
.. note:: To contribute your own recipes, please follow the instructions
on how to contribute documentation code: :ref:`writing_documentation`.
Example Scripts
---------------
.. toctree::
:maxdepth: 2
simple_plots
calculating_information
complex_plots
constructing_data_objects
.. _example-notebooks:
Example Notebooks
-----------------
.. toctree::
:maxdepth: 1
notebook_tutorial
custom_colorbar_tickmarks
gadget_notebook
owls_notebook
../visualizing/transfer_function_helper
fits_radio_cubes
fits_xray_images
geographic_projections
tipsy_notebook
../visualizing/volume_rendering_tutorial
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/index.rst | index.rst |
```
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "x", ("gas", "density"))
slc
```
`PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
```
plot = slc.plots[("gas", "density")]
```
The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
```
colorbar = plot.cb
```
Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
```
slc._setup_plots()
```
To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](https://matplotlib.org/stable/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](https://matplotlib.org/stable/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
```
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(["$10^{-28}$"])
slc
```
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/custom_colorbar_tickmarks.ipynb | custom_colorbar_tickmarks.ipynb |
import numpy as np
import yt
# Load the dataset. We'll work with some Gadget data to illustrate all
# the different ways in which the effective resolution can vary. Specifically,
# we'll use the GadgetDiskGalaxy dataset available at
# http://yt-project.org/data/GadgetDiskGalaxy.tar.gz
# load the data with a refinement criterion of 16 particles per cell (n_ref=16)
# n.b. -- in yt-4.0, n_ref no longer exists as the data is no longer
# deposited onto a grid. At present (03/15/2019), there is no way to
# handle non-gas data in Gadget snapshots, though that is work in progress
if int(yt.__version__[0]) < 4:
# increasing n_ref will result in a "lower resolution" (but faster) image,
# while decreasing it will go the opposite way
ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5", n_ref=16)
else:
ds = yt.load("GadgetDiskGalaxy/snapshot_200.hdf5")
# Create projections of the density (max value in each resolution element in the image):
prj = yt.ProjectionPlot(
ds, "x", ("gas", "density"), method="max", center="max", width=(100, "kpc")
)
# make the plot nicer by using a better interpolation:
plot = prj.plots[list(prj.plots)[0]]
ax = plot.axes
img = ax.images[0]
img.set_interpolation("bicubic")
# make the plot nicer by setting the background color to the minimum of the colorbar
prj.set_background_color(("gas", "density"))
# vary the buff_size -- the number of resolution elements in the actual visualization
# set it to 2000x2000
buff_size = 2000
prj.set_buff_size(buff_size)
# set the figure size in inches
figure_size = 10
prj.set_figure_size(figure_size)
# if the image does not fill the plot (as is default, since the axes and
# colorbar contribute as well), then figuring out the proper dpi for a given
# buff_size and figure_size is non-trivial -- it requires finding the bbox
# for the actual image:
bounding_box = ax.get_position()
# we're going to scale to the larger of the two sides
image_size = figure_size * max([bounding_box.width, bounding_box.height])
# now save with a dpi that's scaled to the buff_size:
dpi = np.rint(np.ceil(buff_size / image_size))
prj.save("with_axes_colorbar.png", mpl_kwargs=dict(dpi=dpi))
# in the case where the image fills the entire plot (i.e. if the axes and colorbar
# are turned off), it's trivial to figure out the correct dpi from the buff_size and
# figure_size (or vice versa):
# hide the colorbar:
prj.hide_colorbar()
# hide the axes, while still keeping the background color correct:
prj.hide_axes(draw_frame=True)
# save with a dpi that makes sense:
dpi = np.rint(np.ceil(buff_size / figure_size))
prj.save("no_axes_colorbar.png", mpl_kwargs=dict(dpi=dpi)) | yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/image_resolution.py | image_resolution.py |
```
%matplotlib inline
import numpy as np
import yt
```
This notebook shows how to use yt to make plots and examine FITS X-ray images and events files.
## Sloshing, Shocks, and Bubbles in Abell 2052
This example uses data provided by [Scott Randall](http://hea-www.cfa.harvard.edu/~srandall/), presented originally in [Blanton, E.L., Randall, S.W., Clarke, T.E., et al. 2011, ApJ, 737, 99](https://ui.adsabs.harvard.edu/abs/2011ApJ...737...99B). They consist of two files, a "flux map" in counts/s/pixel between 0.3 and 2 keV, and a spectroscopic temperature map in keV.
```
ds = yt.load(
"xray_fits/A2052_merged_0.3-2_match-core_tmap_bgecorr.fits",
auxiliary_files=["xray_fits/A2052_core_tmap_b1_m2000_.fits"],
)
```
Since the flux and projected temperature images are in two different files, we had to use one of them (in this case the "flux" file) as a master file, and pass in the "temperature" file with the `auxiliary_files` keyword to `load`.
Next, let's derive some new fields for the number of counts, the "pseudo-pressure", and the "pseudo-entropy":
```
def _counts(field, data):
exposure_time = data.get_field_parameter("exposure_time")
return data["fits", "flux"] * data["fits", "pixel"] * exposure_time
ds.add_field(
("gas", "counts"),
function=_counts,
sampling_type="cell",
units="counts",
take_log=False,
)
def _pp(field, data):
return np.sqrt(data["gas", "counts"]) * data["fits", "projected_temperature"]
ds.add_field(
("gas", "pseudo_pressure"),
function=_pp,
sampling_type="cell",
units="sqrt(counts)*keV",
take_log=False,
)
def _pe(field, data):
return data["fits", "projected_temperature"] * data["gas", "counts"] ** (-1.0 / 3.0)
ds.add_field(
("gas", "pseudo_entropy"),
function=_pe,
sampling_type="cell",
units="keV*(counts)**(-1/3)",
take_log=False,
)
```
Here, we're deriving a "counts" field from the "flux" field by passing it a `field_parameter` for the exposure time of the observation and multiplying by the pixel scale. Second, we use the fact that the surface brightness is strongly dependent on density ($S_X \propto \rho^2$) to use the counts in each pixel as a "stand-in" for the density. Next, we'll grab the exposure time from the primary FITS header of the flux file and create a `YTQuantity` from it, to be used as a `field_parameter`:
```
exposure_time = ds.quan(ds.primary_header["exposure"], "s")
```
Now, we can make the `SlicePlot` object of the fields we want, passing in the `exposure_time` as a `field_parameter`. We'll also set the width of the image to 250 pixels.
```
slc = yt.SlicePlot(
ds,
"z",
[
("fits", "flux"),
("fits", "projected_temperature"),
("gas", "pseudo_pressure"),
("gas", "pseudo_entropy"),
],
origin="native",
field_parameters={"exposure_time": exposure_time},
)
slc.set_log(("fits", "flux"), True)
slc.set_zlim(("fits", "flux"), 1e-5)
slc.set_log(("gas", "pseudo_pressure"), False)
slc.set_log(("gas", "pseudo_entropy"), False)
slc.set_width(250.0)
slc.show()
```
To add the celestial coordinates to the image, we can use `PlotWindowWCS`, if you have a recent version of AstroPy (>= 1.3) installed:
```
from yt.frontends.fits.misc import PlotWindowWCS
wcs_slc = PlotWindowWCS(slc)
wcs_slc.show()
```
We can make use of yt's facilities for profile plotting as well.
```
v, c = ds.find_max(("fits", "flux"))  # Find the maximum flux and its location
my_sphere = ds.sphere(c, (100.0, "code_length"))  # Radius of 100 pixels
my_sphere.set_field_parameter("exposure_time", exposure_time)
```
Such as a radial profile plot:
```
radial_profile = yt.ProfilePlot(
my_sphere,
"radius",
["counts", "pseudo_pressure", "pseudo_entropy"],
n_bins=30,
weight_field="ones",
)
radial_profile.set_log("counts", True)
radial_profile.set_log("pseudo_pressure", True)
radial_profile.set_log("pseudo_entropy", True)
radial_profile.set_xlim(3, 100.0)
radial_profile.show()
```
Or a phase plot:
```
phase_plot = yt.PhasePlot(
my_sphere, "pseudo_pressure", "pseudo_entropy", ["counts"], weight_field=None
)
phase_plot.show()
```
Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a "cut region", using `ds9_region` (the [regions](https://astropy-regions.readthedocs.io/) package needs to be installed for this):
```
from yt.frontends.fits.misc import ds9_region
reg_file = [
"# Region file format: DS9 version 4.1\n",
"global color=green dashlist=8 3 width=3 include=1 source=1 fk5\n",
'circle(15:16:44.817,+7:01:19.62,34.6256")',
]
f = open("circle.reg", "w")
f.writelines(reg_file)
f.close()
circle_reg = ds9_region(
ds, "circle.reg", field_parameters={"exposure_time": exposure_time}
)
```
This region may now be used to compute derived quantities:
```
print(
circle_reg.quantities.weighted_average_quantity("projected_temperature", "counts")
)
```
Or used in projections:
```
prj = yt.ProjectionPlot(
ds,
"z",
[
("fits", "flux"),
("fits", "projected_temperature"),
("gas", "pseudo_pressure"),
("gas", "pseudo_entropy"),
],
origin="native",
field_parameters={"exposure_time": exposure_time},
data_source=circle_reg,
method="sum",
)
prj.set_log(("fits", "flux"), True)
prj.set_log(("gas", "pseudo_pressure"), False)
prj.set_log(("gas", "pseudo_entropy"), False)
prj.set_width(250.0)
prj.show()
```
## The Bullet Cluster
This example uses an events table file from a ~100 ks exposure of the "Bullet Cluster" from the [Chandra Data Archive](http://cxc.harvard.edu/cda/). In this case, the individual photon events are treated as particle fields in yt. However, you can make images of the object in different energy bands using the `setup_counts_fields` function.
```
from yt.frontends.fits.api import setup_counts_fields
```
`load` will handle the events file as a FITS image file, and will set up a grid using the WCS information in the file. Optionally, the events may be reblocked to a new resolution by setting the `"reblock"` parameter in the `parameters` dictionary in `load`. `"reblock"` must be a power of 2.
```
ds2 = yt.load("xray_fits/acisf05356N003_evt2.fits.gz", parameters={"reblock": 2})
```
`setup_counts_fields` will take a list of energy bounds (emin, emax) in keV and create a new field from each where the photons in that energy range will be deposited onto the image grid.
```
ebounds = [(0.1, 2.0), (2.0, 5.0)]
setup_counts_fields(ds2, ebounds)
```
The "x", "y", "energy", and "time" fields in the events table are loaded as particle fields. Each one has a name given by "event\_" plus the name of the field:
```
dd = ds2.all_data()
print(dd["io", "event_x"])
print(dd["io", "event_y"])
```
Now, we'll make a plot of the two counts fields we made, and pan and zoom to the bullet:
```
slc = yt.SlicePlot(
ds2, "z", [("gas", "counts_0.1-2.0"), ("gas", "counts_2.0-5.0")], origin="native"
)
slc.pan((100.0, 100.0))
slc.set_width(500.0)
slc.show()
```
The counts fields can take the field parameter `"sigma"` and use [AstroPy's convolution routines](https://astropy.readthedocs.io/en/latest/convolution/) to smooth the data with a Gaussian:
```
slc = yt.SlicePlot(
ds2,
"z",
[("gas", "counts_0.1-2.0"), ("gas", "counts_2.0-5.0")],
origin="native",
field_parameters={"sigma": 2.0},
) # This value is in pixel scale
slc.pan((100.0, 100.0))
slc.set_width(500.0)
slc.set_zlim(("gas", "counts_0.1-2.0"), 0.01, 100.0)
slc.set_zlim(("gas", "counts_2.0-5.0"), 0.01, 50.0)
slc.show()
```
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/fits_xray_images.ipynb | fits_xray_images.ipynb |
import numpy as np
from matplotlib.colors import LogNorm
import yt
from yt.visualization.base_plot_types import get_multi_plot
fn = "GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150" # dataset to load
orient = "horizontal"
ds = yt.load(fn) # load data
# There's a lot in here:
# From this we get a containing figure, a list-of-lists of axes into which we
# can place plots, and some axes that we'll put colorbars.
# We feed it:
# Number of plots on the x-axis, number of plots on the y-axis, and how we
# want our colorbars oriented. (This governs where they will go, too.)
# bw is the base-width in inches; 4 is about right for most cases.
fig, axes, colorbars = get_multi_plot(3, 2, colorbar=orient, bw=4)
slc = yt.SlicePlot(
ds,
"z",
fields=[("gas", "density"), ("gas", "temperature"), ("gas", "velocity_magnitude")],
)
proj = yt.ProjectionPlot(ds, "z", ("gas", "density"), weight_field=("gas", "density"))
slc_frb = slc.data_source.to_frb((1.0, "Mpc"), 512)
proj_frb = proj.data_source.to_frb((1.0, "Mpc"), 512)
dens_axes = [axes[0][0], axes[1][0]]
temp_axes = [axes[0][1], axes[1][1]]
vels_axes = [axes[0][2], axes[1][2]]
for dax, tax, vax in zip(dens_axes, temp_axes, vels_axes):
dax.xaxis.set_visible(False)
dax.yaxis.set_visible(False)
tax.xaxis.set_visible(False)
tax.yaxis.set_visible(False)
vax.xaxis.set_visible(False)
vax.yaxis.set_visible(False)
# Converting our Fixed Resolution Buffers to numpy arrays so that matplotlib
# can render them
slc_dens = np.array(slc_frb[("gas", "density")])
proj_dens = np.array(proj_frb[("gas", "density")])
slc_temp = np.array(slc_frb[("gas", "temperature")])
proj_temp = np.array(proj_frb[("gas", "temperature")])
slc_vel = np.array(slc_frb[("gas", "velocity_magnitude")])
proj_vel = np.array(proj_frb[("gas", "velocity_magnitude")])
plots = [
dens_axes[0].imshow(slc_dens, origin="lower", norm=LogNorm()),
dens_axes[1].imshow(proj_dens, origin="lower", norm=LogNorm()),
temp_axes[0].imshow(slc_temp, origin="lower"),
temp_axes[1].imshow(proj_temp, origin="lower"),
vels_axes[0].imshow(slc_vel, origin="lower", norm=LogNorm()),
vels_axes[1].imshow(proj_vel, origin="lower", norm=LogNorm()),
]
plots[0].set_clim((1.0e-27, 1.0e-25))
plots[0].set_cmap("bds_highcontrast")
plots[1].set_clim((1.0e-27, 1.0e-25))
plots[1].set_cmap("bds_highcontrast")
plots[2].set_clim((1.0e7, 1.0e8))
plots[2].set_cmap("hot")
plots[3].set_clim((1.0e7, 1.0e8))
plots[3].set_cmap("hot")
plots[4].set_clim((1e6, 1e8))
plots[4].set_cmap("gist_rainbow")
plots[5].set_clim((1e6, 1e8))
plots[5].set_cmap("gist_rainbow")
titles = [
r"$\mathrm{Density}\ (\mathrm{g\ cm^{-3}})$",
r"$\mathrm{Temperature}\ (\mathrm{K})$",
r"$\mathrm{Velocity Magnitude}\ (\mathrm{cm\ s^{-1}})$",
]
for p, cax, t in zip(plots[0:6:2], colorbars, titles):
cbar = fig.colorbar(p, cax=cax, orientation=orient)
cbar.set_label(t)
# And now we're done!
fig.savefig(f"{ds}_3x2") | yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/multi_plot_slice_and_proj.py | multi_plot_slice_and_proj.py |
A Few Complex Plots
-------------------
The built-in plotting functionality covers the very simple use cases that are
most common. These scripts will demonstrate how to construct more complex
plots or publication-quality plots. In many cases these show how to make
multi-panel plots.
Multi-Width Image
~~~~~~~~~~~~~~~~~
This is a simple recipe to show how to open a dataset and then plot slices
through it at varying widths.
See :ref:`slice-plots` for more information.
.. yt_cookbook:: multi_width_image.py
.. _image-resolution-primer:
Varying the resolution of an image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates the various parameters that control the resolution
of an image, including the (deprecated) refinement level, the size of
the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`,
and the number of pixels in the output image.
In brief, there are three parameters that control the final resolution,
with a fourth entering for particle data that is deposited onto a mesh
(i.e. pre-4.0). Those are:
1. ``buff_size``, which can be altered with
:meth:`~yt.visualization.plot_window.PlotWindow.set_buff_size`, which
is inherited by
:class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`,
:class:`~yt.visualization.plot_window.OffAxisSlicePlot`,
:class:`~yt.visualization.plot_window.AxisAlignedProjectionPlot`, and
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`. This
controls the number of resolution elements in the
:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`,
which can be thought of as the number of individually colored
squares (on a side) in a 2D image. ``buff_size`` can be set
after creating the image with
:meth:`~yt.visualization.plot_window.PlotWindow.set_buff_size`,
or during image creation with the ``buff_size`` argument to any
of the four preceding classes.
2. ``figure_size``, which can be altered with either
:meth:`~yt.visualization.plot_container.PlotContainer.set_figure_size`,
or can be set during image creation with the ``window_size`` argument.
This sets the size of the final image (including the visualization and,
if applicable, the axes and colorbar as well) in inches.
3. ``dpi``, i.e. the dots-per-inch in your final file, which can also
be thought of as the actual resolution of your image. This can
only be set on save via the ``mpl_kwargs`` parameter to
:meth:`~yt.visualization.plot_container.PlotContainer.save`. The
``dpi`` and ``figure_size`` together set the true resolution of your
image (final image will be ``dpi`` :math:`*` ``figure_size`` pixels on a
side), so if these are set too low, then your ``buff_size`` will not
matter. On the other hand, increasing these without increasing
``buff_size`` accordingly will simply blow up your resolution
elements to fill several real pixels.
4. (only for meshed particle data) ``n_ref``, the maximum number of
particles in a cell in the oct-tree allowed before it is refined
(removed in yt-4.0 as particle data is no longer deposited onto
an oct-tree). For particle data, ``n_ref`` effectively sets the
underlying resolution of your simulation. Regardless, for either
grid data or deposited particle data, your image will never be
higher resolution than your simulation data. In other words,
if you are visualizing a region 50 kpc across that includes
data that reaches a resolution of 100 pc, then there's no reason
to set a ``buff_size`` (or a ``dpi`` :math:`*` ``figure_size``) above
50 kpc/ 100 pc = 500.
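For quick reference, here is a minimal sketch of how these parameters fit
together (the dataset name is just a placeholder; the script below varies the
parameters more systematically):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))

    slc.set_buff_size(2000)  # 2000x2000 resolution elements in the image buffer
    slc.set_figure_size(10)  # final figure is 10 inches on a side

    # The saved image is roughly dpi * figure_size pixels on a side, so choose a
    # dpi that makes full use of the buffer: 2000 / 10 = 200.
    slc.save("slice_high_res.png", mpl_kwargs=dict(dpi=200))
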
The below script demonstrates how each of these can be varied.
.. yt_cookbook:: image_resolution.py
Multipanel with Axes Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to use a SlicePlot to control a multipanel plot. This
plot uses axes labels to illustrate the length scales in the plot.
See :ref:`slice-plots` and the
`Matplotlib AxesGrid Object <https://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html>`_
for more information.
.. yt_cookbook:: multiplot_2x2.py
The above example gives you full control over the plots, but for most
purposes, the ``export_to_mpl_figure`` method is a simpler option,
allowing us to make a similar plot as follows:
.. yt_cookbook:: multiplot_export_to_mpl.py
Multipanel with PhasePlot
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to use PhasePlot in a multipanel plot.
See :ref:`how-to-make-2d-profiles` and the
`Matplotlib AxesGrid Object <https://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html>`_
for more information.
.. yt_cookbook:: multiplot_phaseplot.py
Time Series Multipanel
~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to create a multipanel plot of a time series dataset.
See :ref:`projection-plots`, :ref:`time-series-analysis`, and the
`Matplotlib AxesGrid Object <https://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html>`_
for more information.
.. yt_cookbook:: multiplot_2x2_time_series.py
Multiple Slice Multipanel
~~~~~~~~~~~~~~~~~~~~~~~~~
This illustrates how to create a multipanel plot of slices along the coordinate
axes. To focus on what's happening in the x-y plane, we make an additional
Temperature slice for the bottom-right subpanel.
See :ref:`slice-plots` and the
`Matplotlib AxesGrid Object <https://matplotlib.org/mpl_toolkits/axes_grid/api/axes_grid_api.html>`_
for more information.
.. yt_cookbook:: multiplot_2x2_coordaxes_slice.py
Multi-Plot Slice and Projections
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This shows how to combine multiple slices and projections into a single image,
with detailed control over colorbars, titles and color limits.
See :ref:`slice-plots` and :ref:`projection-plots` for more information.
.. yt_cookbook:: multi_plot_slice_and_proj.py
.. _advanced-multi-panel:
Advanced Multi-Plot Multi-Panel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This produces a series of slices of multiple fields with different color maps
and zlimits, and makes use of the FixedResolutionBuffer. While this is more
complex than the equivalent plot collection-based solution, it allows for a
*lot* more flexibility. Every part of the script uses matplotlib commands,
allowing its full power to be exercised.
See :ref:`slice-plots` and :ref:`projection-plots` for more information.
.. yt_cookbook:: multi_plot_3x2_FRB.py
Time Series Movie
~~~~~~~~~~~~~~~~~
This shows how to use matplotlib's animation framework with yt plots.
.. yt_cookbook:: matplotlib-animation.py
.. _cookbook-offaxis_projection:
Off-Axis Projection (an alternate method)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to take an image-plane line integral along an
arbitrary axis in a simulation. This uses alternate machinery than the
standard :ref:`PlotWindow interface <off-axis-projections>` to create an
off-axis projection as demonstrated in this
:ref:`recipe <cookbook-simple-off-axis-projection>`.
.. yt_cookbook:: offaxis_projection.py
Off-Axis Projection with a Colorbar (an alternate method)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe shows how to generate a colorbar with a projection of a dataset
from an arbitrary projection angle (so you are not confined to the x, y, and z
axes).
This uses alternate machinery than the standard
:ref:`PlotWindow interface <off-axis-projections>` to create an off-axis
projection as demonstrated in this
:ref:`recipe <cookbook-simple-off-axis-projection>`.
.. yt_cookbook:: offaxis_projection_colorbar.py
.. _thin-slice-projections:
Thin-Slice Projections
~~~~~~~~~~~~~~~~~~~~~~
This recipe is an example of how to project through only a given data object,
in this case a thin region, and then display the result.
See :ref:`projection-plots` and :ref:`available-objects` for more information.
.. yt_cookbook:: thin_slice_projection.py
Plotting Particles Over Fluids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to overplot particles on top of a fluid image.
See :ref:`annotate-particles` for more information.
.. yt_cookbook:: overplot_particles.py
Plotting Grid Edges Over Fluids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to overplot grid boxes on top of a fluid image.
Each level is represented with a different color from white (low refinement) to
black (high refinement). One can change the colormap used for the grid colors
by using the cmap keyword (or set it to None to get all grid edges as black).
See :ref:`annotate-grids` for more information.
.. yt_cookbook:: overplot_grids.py
Overplotting Velocity Vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to plot velocity vectors on top of a slice.
See :ref:`annotate-velocity` for more information.
.. yt_cookbook:: velocity_vectors_on_slice.py
Overplotting Contours
~~~~~~~~~~~~~~~~~~~~~
This is a simple recipe to show how to open a dataset, plot a slice through it,
and add contours of another quantity on top.
See :ref:`annotate-contours` for more information.
.. yt_cookbook:: contours_on_slice.py
Simple Contours in a Slice
~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes it is useful to plot just a few contours of a quantity in a
dataset. This shows how one does this by first making a slice, adding
contours, and then hiding the colormap plot of the slice to leave the
plot containing only the contours that one has added.
See :ref:`annotate-contours` for more information.
.. yt_cookbook:: simple_contour_in_slice.py
Styling Radial Profile Plots
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates a method of calculating radial profiles for several
quantities, styling them and saving out the resultant plot.
See :ref:`how-to-make-1d-profiles` for more information.
.. yt_cookbook:: radial_profile_styles.py
Customized Profile Plot
~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to create a fully customized 1D profile object
using the :func:`~yt.data_objects.profiles.create_profile` function and then
create a :class:`~yt.visualization.profile_plotter.ProfilePlot` using the
customized profile. This illustrates how a ``ProfilePlot`` created this way
inherits the properties of the profile it is constructed from.
See :ref:`how-to-make-1d-profiles` for more information.
.. yt_cookbook:: customized_profile_plot.py
Customized Phase Plot
~~~~~~~~~~~~~~~~~~~~~
Similar to the recipe above, this demonstrates how to create a fully customized
2D profile object using the :func:`~yt.data_objects.profiles.create_profile`
function and then create a :class:`~yt.visualization.profile_plotter.PhasePlot`
using the customized profile object. This illustrates how a ``PhasePlot``
created this way inherits the properties of the profile object from which it
is constructed. See :ref:`how-to-make-2d-profiles` for more information.
.. yt_cookbook:: customized_phase_plot.py
.. _cookbook-camera_movement:
Moving a Volume Rendering Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this recipe, we move a camera through a domain and take multiple volume
rendering snapshots. This recipe uses an unstructured mesh dataset (see
:ref:`unstructured_mesh_rendering`), which makes it easier to visualize what
the Camera is doing, but you can manipulate the Camera for other dataset types
in exactly the same manner.
See :ref:`camera_movement` for more information.
.. yt_cookbook:: camera_movement.py
Volume Rendering with Custom Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this recipe we modify the :ref:`cookbook-simple_volume_rendering` recipe to
use customized camera properties. See :ref:`volume_rendering` for more
information.
.. yt_cookbook:: custom_camera_volume_rendering.py
.. _cookbook-custom-transfer-function:
Volume Rendering with a Custom Transfer Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this recipe we modify the :ref:`cookbook-simple_volume_rendering` recipe to
use a customized transfer function. See :ref:`volume_rendering` for more
information.
.. yt_cookbook:: custom_transfer_function_volume_rendering.py
.. _cookbook-sigma_clip:
Volume Rendering with Sigma Clipping
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this recipe we output several images with different values of sigma_clip
set in order to change the contrast of the resulting image. See
:ref:`sigma_clip` for more information.
.. yt_cookbook:: sigma_clip.py
Zooming into an Image
~~~~~~~~~~~~~~~~~~~~~
This is a recipe that takes a slice through the most dense point, then creates
a bunch of frames as it zooms in. It's important to note that this particular
recipe is provided to show how to be more flexible and add annotations and the
like -- the base system of a zoom-in is provided by the "yt zoomin" command on
the command line.
See :ref:`slice-plots` and :ref:`callbacks` for more information.
.. yt_cookbook:: zoomin_frames.py
.. _cookbook-various_lens:
Various Lens Types for Volume Rendering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This example illustrates the usage and features of different lenses for volume rendering.
.. yt_cookbook:: various_lens.py
.. _cookbook-opaque_rendering:
Opaque Volume Rendering
~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to make semi-opaque volume renderings, but also
how to step through and try different things to identify the type of volume
rendering you want.
See :ref:`opaque_rendering` for more information.
.. yt_cookbook:: opaque_rendering.py
Volume Rendering Multiple Fields
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can render multiple fields by adding new ``VolumeSource`` objects to the
scene for each field you want to render.
.. yt_cookbook:: render_two_fields.py
.. _cookbook-amrkdtree_downsampling:
Downsampling Data for Volume Rendering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to downsample data in a simulation to speed up
volume rendering.
See :ref:`volume_rendering` for more information.
.. yt_cookbook:: amrkdtree_downsampling.py
.. _cookbook-volume_rendering_annotations:
Volume Rendering with Bounding Box and Overlaid Grids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to overplot a bounding box on a volume rendering
as well as overplotting grids representing the level of refinement achieved
in different regions of the code.
See :ref:`volume_rendering_annotations` for more information.
.. yt_cookbook:: rendering_with_box_and_grids.py
Volume Rendering with Annotation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to write the simulation time, show an
axis triad indicating the direction of the coordinate system, and show
the transfer function on a volume rendering. Please note that this
recipe relies on the old volume rendering interface. While one can
continue to use this interface, it may be incompatible with some of the
new developments and the infrastructure described in :ref:`volume_rendering`.
.. yt_cookbook:: vol-annotated.py
.. _cookbook-render_two_fields_tf:
Volume Rendering Multiple Fields And Annotation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe shows how to display the transfer functions when rendering multiple
fields in a volume render.
.. yt_cookbook:: render_two_fields_tf.py
.. _cookbook-vol-points:
Volume Rendering with Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to make a volume rendering composited with point
sources. This could represent star or dark matter particles, for example.
.. yt_cookbook:: vol-points.py
.. _cookbook-vol-lines:
Volume Rendering with Lines
~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to make a volume rendering composited with line
sources.
.. yt_cookbook:: vol-lines.py
Plotting Streamlines
~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to display streamlines in a simulation. (Note:
streamlines can also be queried for values!)
See :ref:`streamlines` for more information.
.. yt_cookbook:: streamlines.py
Plotting Isocontours
~~~~~~~~~~~~~~~~~~~~
This recipe demonstrates how to extract an isocontour and then plot it in
matplotlib, coloring the surface by a second quantity.
See :ref:`surfaces` for more information.
.. yt_cookbook:: surface_plot.py
Plotting Isocontours and Streamlines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This recipe plots both isocontours and streamlines simultaneously. Note that
this will not include any blending, so streamlines that are occluded by the
surface will still be visible.
See :ref:`streamlines` and :ref:`surfaces` for more information.
.. yt_cookbook:: streamlines_isocontour.py
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/complex_plots.rst | complex_plots.rst |
Making Simple Plots
-------------------
One of the easiest ways to interact with yt is by creating simple
visualizations of your data. Below we show how to do this, as well as how to
extend these plots to be ready for publication.
Simple Slices
~~~~~~~~~~~~~
This script shows the simplest way to make a slice through a dataset. See
:ref:`slice-plots` for more information.
.. yt_cookbook:: simple_slice.py
Simple Projections (Non-Weighted)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the simplest way to make a projection through a dataset. There are
several different :ref:`projection-types`, but non-weighted line integrals
and weighted line integrals are the two most common. Here we create
density projections (non-weighted line integral).
See :ref:`projection-plots` for more information.
.. yt_cookbook:: simple_projection.py
Simple Projections (Weighted)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
And here we produce density-weighted temperature projections (weighted line
integral) for the same dataset as the non-weighted projections above.
See :ref:`projection-plots` for more information.
.. yt_cookbook:: simple_projection_weighted.py
Simple Projections (Methods)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
And here we illustrate different methods for projection plots (integrate,
minimum, maximum).
.. yt_cookbook:: simple_projection_methods.py
Simple Projections (Weighted Standard Deviation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
And here we produce a density-weighted projection (weighted line integral)
of the line-of-sight velocity from the same dataset (see :ref:`projection-plots`
for more information).
.. yt_cookbook:: simple_projection_stddev.py
Simple Phase Plots
~~~~~~~~~~~~~~~~~~
This demonstrates how to make a phase plot. Phase plots can be thought of as
two-dimensional histograms, where the value is either the weighted-average or
the total accumulation in a cell.
See :ref:`how-to-make-2d-profiles` for more information.
.. yt_cookbook:: simple_phase.py
Simple 1D Line Plotting
~~~~~~~~~~~~~~~~~~~~~~~
This script shows how to make a ``LinePlot`` through a dataset.
See :ref:`manual-line-plots` for more information.
.. yt_cookbook:: simple_1d_line_plot.py
.. note:: Not every data type has support for ``yt.LinePlot`` yet.
   Currently, this operation is supported for grid-based data with Cartesian geometry.
Simple Probability Distribution Functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Often, one wants to examine the distribution of one variable as a function of
another. This shows how to see the distribution of mass in a simulation, with
respect to the total mass in the simulation.
See :ref:`how-to-make-2d-profiles` for more information.
.. yt_cookbook:: simple_pdf.py
Simple 1D Histograms (Profiles)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a "profile," which is a 1D histogram. This can be thought of as either
the total accumulation (when weight_field is set to ``None``) or the average
(when a weight_field is supplied).
See :ref:`how-to-make-1d-profiles` for more information.
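As a brief illustration of the distinction (a minimal sketch; the dataset name
is just a placeholder):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    ad = ds.all_data()

    # Total gas mass accumulated in each density bin (a histogram).
    histogram = yt.ProfilePlot(
        ad, ("gas", "density"), ("gas", "mass"), weight_field=None
    )

    # Mass-weighted average temperature in each density bin (an average profile).
    profile = yt.ProfilePlot(
        ad, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "mass")
    )

    histogram.save()
    profile.save()
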
.. yt_cookbook:: simple_profile.py
Simple Radial Profiles
~~~~~~~~~~~~~~~~~~~~~~
This shows how to make a profile of a quantity with respect to the radius.
See :ref:`how-to-make-1d-profiles` for more information.
.. yt_cookbook:: simple_radial_profile.py
1D Profiles Over Time
~~~~~~~~~~~~~~~~~~~~~
This is a simple example of overplotting multiple 1D profiles from a number
of datasets to show how they evolve over time.
See :ref:`how-to-make-1d-profiles` for more information.
.. yt_cookbook:: time_series_profiles.py
.. _cookbook-profile-stddev:
Profiles with Standard Deviation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This shows how to plot a 1D profile with error bars indicating the standard
deviation of the field values in each profile bin. In this example, we manually
create a 1D profile object, which gives us access to the standard deviation
data. See :ref:`how-to-make-1d-profiles` for more information.
.. yt_cookbook:: profile_with_standard_deviation.py
Making Plots of Multiple Fields Simultaneously
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By adding multiple fields to a single
:class:`~yt.visualization.plot_window.SlicePlot` or
:class:`~yt.visualization.plot_window.ProjectionPlot` some of the overhead of
creating the data object can be reduced, and better performance squeezed out.
This recipe shows how to add multiple fields to a single plot.
See :ref:`slice-plots` and :ref:`projection-plots` for more information.
.. yt_cookbook:: simple_slice_with_multiple_fields.py
Off-Axis Slicing
~~~~~~~~~~~~~~~~
One can create slices from any arbitrary angle, not just those aligned with
the x,y,z axes.
See :ref:`off-axis-slices` for more information.
.. yt_cookbook:: simple_off_axis_slice.py
.. _cookbook-simple-off-axis-projection:
Off-Axis Projection
~~~~~~~~~~~~~~~~~~~
Like off-axis slices, off-axis projections can be created from any arbitrary
viewing angle.
See :ref:`off-axis-projections` for more information.
.. yt_cookbook:: simple_off_axis_projection.py
.. _cookbook-simple-particle-plot:
Simple Particle Plot
~~~~~~~~~~~~~~~~~~~~
You can also use yt to make particle-only plots. This script shows how to
plot all the particle x and y positions in a dataset, using the particle mass
to set the color scale.
See :ref:`particle-plots` for more information.
.. yt_cookbook:: particle_xy_plot.py
.. _cookbook-non-spatial-particle-plot:
Non-spatial Particle Plots
~~~~~~~~~~~~~~~~~~~~~~~~~~
You are not limited to plotting spatial fields on the x and y axes. This
example shows how to plot the particle x-coordinates versus their z-velocities,
again using the particle mass to set the colorbar.
See :ref:`particle-plots` for more information.
.. yt_cookbook:: particle_xvz_plot.py
.. _cookbook-single-color-particle-plot:
Single-color Particle Plots
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't want to display a third field on the color bar axis, simply pass
in a color string instead of a particle field.
See :ref:`particle-plots` for more information.
.. yt_cookbook:: particle_one_color_plot.py
.. _cookbook-simple_volume_rendering:
Simple Volume Rendering
~~~~~~~~~~~~~~~~~~~~~~~
Volume renderings are 3D projections rendering isocontours in any arbitrary
field (e.g. density, temperature, pressure, etc.)
See :ref:`volume_rendering` for more information.
.. yt_cookbook:: simple_volume_rendering.py
.. _show-hide-axes-colorbar:
Showing and Hiding Axis Labels and Colorbars
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This example illustrates how to create a SlicePlot and then suppress the axes
labels and colorbars. This is useful when you don't care about the physical
scales and just want to take a closer look at the raw plot data. See
:ref:`hiding-colorbar-and-axes` for more information.
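The core calls are brief; here is a minimal sketch (the dataset name is just a
placeholder):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))

    # Strip the axes (keeping the frame) and the colorbar for a bare image.
    slc.hide_axes(draw_frame=True)
    slc.hide_colorbar()
    slc.save("bare_slice.png")
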
.. yt_cookbook:: show_hide_axes_colorbar.py
.. _cookbook_label_formats:
Setting Field Label Formats
---------------------------
This example illustrates how to change the label format for
ion species from the default roman numeral style.
.. yt_cookbook:: changing_label_formats.py
.. _matplotlib-primitives:
Accessing and Modifying Plots Directly
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While often the Plot Window, and its affiliated :ref:`callbacks` can
cover normal use cases, sometimes more direct access to the underlying
Matplotlib engine is necessary. This recipe shows how to modify the plot
window :class:`matplotlib.axes.Axes` object directly.
See :ref:`matplotlib-customization` for more information.
.. yt_cookbook:: simple_slice_matplotlib_example.py
Changing the Colormap used in a Plot
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
yt has sensible defaults for colormaps, but there are over a hundred available
for customizing your plots. Here we generate a projection and then change
its colormap. See :ref:`colormaps` for a list and for images of all the
available colormaps.
.. yt_cookbook:: colormaps.py
Image Background Colors
~~~~~~~~~~~~~~~~~~~~~~~
Here we see how to take an image and save it using different background colors.
In this case we use the :ref:`cookbook-simple_volume_rendering`
recipe to generate the image, but it works for any NxNx4 image array
(3 colors and 1 opacity channel). See :ref:`volume_rendering` for more
information.
.. yt_cookbook:: image_background_colors.py
.. _annotations-recipe:
Annotating Plots to Include Lines, Text, Shapes, etc.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It can be useful to add annotations to plots to show off certain features
and make it easier for your audience to understand the plot's purpose. There
are a variety of available :ref:`plot modifications <callbacks>` one can use
to add annotations to their plots. Below includes just a handful, but please
look at the other :ref:`plot modifications <callbacks>` to get a full
description of what you can do to highlight your figures.
.. yt_cookbook:: annotations.py
Annotating Plots with a Timestamp and Physical Scale
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When creating movies of multiple outputs from the same simulation (see :ref:`time-series-analysis`), it can be helpful to include a timestamp and the physical scale of each individual output. This is simply achieved using the :ref:`annotate_timestamp() <annotate-timestamp>` and :ref:`annotate_scale() <annotate-scale>` callbacks on your plots. For more information about similar plot modifications using other callbacks, see the section on :ref:`Plot Modifications <callbacks>`.
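In its simplest form, the two callbacks can be added to any plot (a minimal
sketch; the dataset name is just a placeholder):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.annotate_timestamp()  # stamp the output's current time on the plot
    slc.annotate_scale()  # draw a bar indicating the physical scale
    slc.save()
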
.. yt_cookbook:: annotate_timestamp_and_scale.py
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/simple_plots.rst | simple_plots.rst |
# OWLS Examples
## Setup
The first thing you will need to run these examples is a working installation of yt. The author of these examples followed the instructions under "Get yt: from source" at https://yt-project.org/ to install an up-to-date development version of yt.
We will be working with an OWLS snapshot: snapshot_033
Now we will tell the notebook that we want figures produced inline.
```
%matplotlib inline
```
## Loading
```
import yt
```
Now we will load the snapshot.
```
ds = yt.load_sample("snapshot_033")
```
Set a ``YTRegion`` that contains all the data.
```
ad = ds.all_data()
```
## Inspecting
The dataset can tell us what fields it knows about,
```
ds.field_list
ds.derived_field_list
```
Note that the ion fields follow the naming convention described in YTEP-0003 http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0003.html#molecular-and-atomic-species-names
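For example, we can list the derived fields that follow this convention for neutral hydrogen (the exact species fields available depend on the snapshot):
```
[f for f in ds.derived_field_list if "H_p0" in f[1]]
```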
## Accessing Particle Data
The raw particle data can be accessed using the particle types. This corresponds directly with what is in the HDF5 snapshots.
```
ad[("PartType0", "Coordinates")]
ad[("PartType4", "IronFromSNIa")]
ad[("PartType1", "ParticleIDs")]
ad[("PartType0", "Hydrogen")]
```
## Projection Plots
The projection plots make use of derived fields that store the smoothed particle data (particles smoothed onto an oct-tree). Below we make a projection of all hydrogen gas followed by only the neutral hydrogen gas.
```
pz = yt.ProjectionPlot(ds, "z", ("gas", "H_density"))
pz.show()
pz = yt.ProjectionPlot(ds, "z", ("gas", "H_p0_density"))
pz.show()
```
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/yt_gadget_owls_analysis.ipynb | yt_gadget_owls_analysis.ipynb |
Constructing Data Objects
-------------------------
These recipes demonstrate a few uncommon methods of constructing data objects
from a simulation.
Creating Particle Filters
~~~~~~~~~~~~~~~~~~~~~~~~~
Create particle filters based on the age of star particles in an isolated
disk galaxy simulation. Determine the total mass of each stellar age bin
in the simulation. Generate projections for each of the stellar age bins.
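In outline, the pattern looks like the following (a minimal sketch; the age cut
and the ``creation_time`` field assume an Enzo-style dataset such as
``IsolatedGalaxy`` -- adapt the field names to your own data):

.. code-block:: python

    import yt
    from yt.data_objects.particle_filters import add_particle_filter


    # A filter function receives the filter object and a data chunk and returns
    # a boolean mask selecting the particles of interest.
    def young_stars(pfilter, data):
        age = data.ds.current_time - data[pfilter.filtered_type, "creation_time"]
        return (age.in_units("Myr") <= 10) & (age >= 0)


    add_particle_filter(
        "young_stars",
        function=young_stars,
        filtered_type="all",
        requires=["creation_time"],
    )

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ds.add_particle_filter("young_stars")

    ad = ds.all_data()
    print(ad["young_stars", "particle_mass"].in_units("Msun").sum())
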
.. yt_cookbook:: particle_filter.py
.. _cookbook-find_clumps:
Identifying Clumps
~~~~~~~~~~~~~~~~~~
This is a recipe to show how to find topologically connected sets of cells
inside a dataset. It returns these clumps and they can be inspected or
visualized as would any other data object. More detail on this method can be
found in `Smith et al. 2009
<https://ui.adsabs.harvard.edu/abs/2009ApJ...691..441S>`_.
.. yt_cookbook:: find_clumps.py
.. _extract_frb:
Extracting Fixed Resolution Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a recipe to show how to open a dataset and extract it to a file at a
fixed resolution with no interpolation or smoothing. Additionally, this recipe
shows how to insert a dataset into an external HDF5 file using h5py.
.. yt_cookbook:: extract_fixed_resolution_data.py
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/constructing_data_objects.rst | constructing_data_objects.rst |
import numpy as np
import yt
from yt.visualization.volume_rendering.api import Scene, create_volume_source
field = ("gas", "density")
# normal_vector points from camera to the center of the final projection.
# Now we look at the positive x direction.
normal_vector = [1.0, 0.0, 0.0]
# north_vector defines the "top" direction of the projection, which is
# positive z direction here.
north_vector = [0.0, 0.0, 1.0]
# Follow the simple_volume_rendering cookbook for the first part of this.
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = Scene()
vol = create_volume_source(ds, field=field)
tf = vol.transfer_function
tf.grey_opacity = True
# Plane-parallel lens
cam = sc.add_camera(ds, lens_type="plane-parallel")
# Set the resolution of the final projection.
cam.resolution = [250, 250]
# Set the location of the camera to be (x=0.2, y=0.5, z=0.5)
# For plane-parallel lens, the location info along the normal_vector (here
# is x=0.2) is ignored.
cam.position = ds.arr(np.array([0.2, 0.5, 0.5]), "code_length")
# Set the orientation of the camera.
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# Set the width of the camera, where width[0] and width[1] specify the length and
# height of final projection, while width[2] in plane-parallel lens is not used.
cam.set_width(ds.domain_width * 0.5)
sc.add_source(vol)
sc.save("lens_plane-parallel.png", sigma_clip=6.0)
# Perspective lens
cam = sc.add_camera(ds, lens_type="perspective")
cam.resolution = [250, 250]
# Standing at (x=0.2, y=0.5, z=0.5), we look at the area of x>0.2 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.2, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# Set the width of the camera, where width[0] and width[1] specify the length and
# height of the final projection, while width[2] specifies the distance between the
# camera and the final image.
cam.set_width(ds.domain_width * 0.5)
sc.add_source(vol)
sc.save("lens_perspective.png", sigma_clip=6.0)
# Stereo-perspective lens
cam = sc.add_camera(ds, lens_type="stereo-perspective")
# Set the size ratio of the final projection to be 2:1, since the stereo-perspective lens
# will generate the final image with the left-eye and right-eye views joined together.
cam.resolution = [500, 250]
cam.position = ds.arr([0.2, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
cam.set_width(ds.domain_width * 0.5)
# Set the distance between left-eye and right-eye.
cam.lens.disparity = ds.domain_width[0] * 1.0e-3
sc.add_source(vol)
sc.save("lens_stereo-perspective.png", sigma_clip=6.0)
# Fisheye lens
dd = ds.sphere(ds.domain_center, ds.domain_width[0] / 10)
cam = sc.add_camera(dd, lens_type="fisheye")
cam.resolution = [250, 250]
v, c = ds.find_max(field)
cam.set_position(c - 0.0005 * ds.domain_width)
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
cam.set_width(ds.domain_width)
cam.lens.fov = 360.0
sc.add_source(vol)
sc.save("lens_fisheye.png", sigma_clip=6.0)
# Spherical lens
cam = sc.add_camera(ds, lens_type="spherical")
# Set the size ratio of the final projection to be 2:1, since spherical lens
# will generate the final image with length of 2*pi and height of pi.
# Recommended resolution for YouTube 360-degree videos is [3840, 2160]
cam.resolution = [500, 250]
# Standing at (x=0.4, y=0.5, z=0.5), we look in all the radial directions
# from this point in spherical coordinate.
cam.position = ds.arr([0.4, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# In (stereo)spherical camera, camera width is not used since the entire volume
# will be rendered
sc.add_source(vol)
sc.save("lens_spherical.png", sigma_clip=6.0)
# Stereo-spherical lens
cam = sc.add_camera(ds, lens_type="stereo-spherical")
# Set the size ratio of the final projection to be 1:1, since the stereo-spherical lens
# will generate the final image with the left-eye and right-eye views joined together,
# with the left-eye image on top and the right-eye image on bottom.
# Recommended resolution for YouTube virtual reality videos is [3840, 2160]
cam.resolution = [500, 500]
cam.position = ds.arr([0.4, 0.5, 0.5], "code_length")
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# In (stereo)spherical camera, camera width is not used since the entire volume
# will be rendered
# Set the distance between left-eye and right-eye.
cam.lens.disparity = ds.domain_width[0] * 1.0e-3
sc.add_source(vol)
sc.save("lens_stereo-spherical.png", sigma_clip=6.0) | yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/various_lens.py | various_lens.py |
import numpy as np
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# We start by building a default volume rendering scene
im, sc = yt.volume_render(ds, field=("gas", "density"), fname="v0.png", sigma_clip=6.0)
sc.camera.set_width(ds.arr(0.1, "code_length"))
tf = sc.get_source().transfer_function
tf.clear()
tf.add_layers(
4, 0.01, col_bounds=[-27.5, -25.5], alpha=np.logspace(-3, 0, 4), colormap="RdBu_r"
)
sc.save("v1.png", sigma_clip=6.0)
# In this case, the default alphas used (np.logspace(-3,0,Nbins)) do not
# accentuate the outer regions of the galaxy. Let's start by bringing up the
# alpha values for each contour to go between 0.1 and 1.0
tf = sc.get_source().transfer_function
tf.clear()
tf.add_layers(
4, 0.01, col_bounds=[-27.5, -25.5], alpha=np.logspace(0, 0, 4), colormap="RdBu_r"
)
sc.save("v2.png", sigma_clip=6.0)
# Now let's set the grey_opacity to True. This should make the inner portions
# start to be obscured
tf.grey_opacity = True
sc.save("v3.png", sigma_clip=6.0)
# That looks pretty good, but let's start bumping up the opacity.
tf.clear()
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=10.0 * np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
sc.save("v4.png", sigma_clip=6.0)
# Let's bump up again to see if we can obscure the inner contour.
tf.clear()
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=30.0 * np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
sc.save("v5.png", sigma_clip=6.0)
# Now we are losing sight of everything. Let's see if we can obscure the next
# layer
tf.clear()
tf.add_layers(
4,
0.01,
col_bounds=[-27.5, -25.5],
alpha=100.0 * np.ones(4, dtype="float64"),
colormap="RdBu_r",
)
sc.save("v6.png", sigma_clip=6.0)
# That is very opaque! Now lets go back and see what it would look like with
# grey_opacity = False
tf.grey_opacity = False
sc.save("v7.png", sigma_clip=6.0)
# That looks pretty different, but the main thing is that you can see that the
# inner contours are somewhat visible again. | yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/opaque_rendering.py | opaque_rendering.py |
```
%matplotlib inline
import yt
```
This notebook demonstrates some of the capabilities of yt on some FITS "position-position-spectrum" cubes of radio data.
Note that it depends on some external dependencies, including `astropy` and `regions`.
## M33 VLA Image
The dataset `"m33_hi.fits"` has `NaN`s in it, so we'll mask them out by setting `nan_mask` = 0:
```
ds = yt.load("radio_fits/m33_hi.fits", nan_mask=0.0)
```
First, we'll take a slice of the data along the z-axis, which is the velocity axis of the FITS cube:
```
slc = yt.SlicePlot(ds, "z", ("fits", "intensity"), origin="native")
slc.show()
```
The x and y axes are in units of the image pixel. When making plots of FITS data, to see the image coordinates as they are in the file, it is helpful to set the keyword `origin = "native"`. If you want to see the celestial coordinates along the axes, you can import the `PlotWindowWCS` class and feed it the `SlicePlot`. For this to work, a version of AstroPy >= 1.3 needs to be installed.
```
from yt.frontends.fits.misc import PlotWindowWCS
PlotWindowWCS(slc)
```
Generally, it is best to get the plot in the shape you want it before feeding it to `PlotWindowWCS`. Once it looks the way you want, you can save it just like a normal `PlotWindow` plot:
```
slc.save()
```
We can also take slices of this dataset at a few different values along the "z" axis (corresponding to the velocity), so let's try a few. To pick specific velocity values for slices, we will need to use the dataset's `spec2pixel` method to determine which pixels to slice on:
```
import yt.units as u
new_center = ds.domain_center
new_center[2] = ds.spec2pixel(-250000.0 * u.m / u.s)
```
Now we can use this new center to create a new slice:
```
slc = yt.SlicePlot(ds, "z", ("fits", "intensity"), center=new_center, origin="native")
slc.show()
```
We can do this a few more times for different values of the velocity:
```
new_center[2] = ds.spec2pixel(-100000.0 * u.m / u.s)
slc = yt.SlicePlot(ds, "z", ("fits", "intensity"), center=new_center, origin="native")
slc.show()
new_center[2] = ds.spec2pixel(-150000.0 * u.m / u.s)
slc = yt.SlicePlot(ds, "z", ("fits", "intensity"), center=new_center, origin="native")
slc.show()
```
These slices demonstrate the intensity of the radio emission at different line-of-sight velocities.
We can also make a projection of all the emission along the line of sight:
```
prj = yt.ProjectionPlot(ds, "z", ("fits", "intensity"), origin="native")
prj.show()
```
We can also look at the slices perpendicular to the other axes, which will show us the structure along the velocity axis:
```
slc = yt.SlicePlot(ds, "x", ("fits", "intensity"), origin="native", window_size=(8, 8))
slc.show()
slc = yt.SlicePlot(ds, "y", ("fits", "intensity"), origin="native", window_size=(8, 8))
slc.show()
```
In these cases, we needed to explicitly declare a square `window_size` to get a figure that looks good.
## $^{13}$CO GRS Data
This next example uses one of the cubes from the [Boston University Galactic Ring Survey](http://www.bu.edu/galacticring/new_index.htm).
```
ds = yt.load("radio_fits/grs-50-cube.fits", nan_mask=0.0)
```
We can use the `quantities` methods to determine derived quantities of the dataset. For example, we could find the maximum and minimum temperature:
```
dd = ds.all_data() # A region containing the entire dataset
extrema = dd.quantities.extrema(("fits", "temperature"))
print(extrema)
```
We can compute the average temperature along the "velocity" axis for all positions by making a `ProjectionPlot`:
```
prj = yt.ProjectionPlot(
ds, "z", ("fits", "temperature"), origin="native", weight_field=("index", "ones")
) # "ones" weights each cell by 1
prj.set_zlim(("fits", "temperature"), zmin=(1e-3, "K"))
prj.set_log(("fits", "temperature"), True)
prj.show()
```
We can also make a histogram of the temperature field of this region:
```
pplot = yt.ProfilePlot(
dd, ("fits", "temperature"), [("index", "ones")], weight_field=None, n_bins=128
)
pplot.show()
```
We can see from this histogram and our calculation of the dataset's extrema that there is a lot of noise. Suppose we wanted to make a projection, but instead make it only of the cells which had a positive temperature value. We can do this by doing a "field cut" on the data:
```
fc = dd.cut_region(['obj["fits", "temperature"] > 0'])
```
Now let's check the extents of this region:
```
print(fc.quantities.extrema(("fits", "temperature")))
```
Looks like we were successful in filtering out the negative temperatures. To compute the average temperature of this new region:
```
fc.quantities.weighted_average_quantity(("fits", "temperature"), ("index", "ones"))
```
Now, let's make a projection of the dataset, using the field cut `fc` as a `data_source`:
```
prj = yt.ProjectionPlot(
ds,
"z",
[("fits", "temperature")],
data_source=fc,
origin="native",
weight_field=("index", "ones"),
) # "ones" weights each cell by 1
prj.set_log(("fits", "temperature"), True)
prj.show()
```
Finally, we can also take an existing [ds9](http://ds9.si.edu/site/Home.html) region and use it to create a "cut region" as well, using `ds9_region` (the [regions](https://astropy-regions.readthedocs.io/) package needs to be installed for this):
```
from yt.frontends.fits.misc import ds9_region
```
For this example we'll create a ds9 region from scratch and load it up:
```
region = 'galactic;box(+49:26:35.150,-0:30:04.410,1926.1927",1483.3701",0.0)'
box_reg = ds9_region(ds, region)
```
This region may now be used to compute derived quantities:
```
print(box_reg.quantities.extrema(("fits", "temperature")))
```
Or in projections:
```
prj = yt.ProjectionPlot(
ds,
"z",
("fits", "temperature"),
origin="native",
data_source=box_reg,
weight_field=("index", "ones"),
) # "ones" weights each cell by 1
prj.set_zlim(("fits", "temperature"), 1.0e-2, 1.5)
prj.set_log(("fits", "temperature"), True)
prj.show()
```
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/cookbook/fits_radio_cubes.ipynb | fits_radio_cubes.ipynb |
.. _asking-for-help:
What to do if you run into problems
===================================
If you run into problems with yt, there are a number of steps to follow
to come to a solution. The first handful of options are things you can do
on your own, but if those don't yield results, we have provided a number of
ways to connect with our community of users and developers to solve the
problem together.
To summarize, here are the steps in order:
.. contents::
:depth: 1
:local:
:backlinks: none
.. _dont-panic:
Don't panic and don't give up
-----------------------------
This may seem silly, but it's effective. While yt is a robust code with
lots of functionality, like all actively-developed codes sometimes there
are bugs. Chances are good that your problems have a quick fix, either
because someone encountered it before and fixed it, the documentation is
out of date, or some other simple solution. Don't give up! We want
to help you succeed!
.. _update-the-code:
Update to the latest version
----------------------------
Sometimes the pace of development is pretty fast on yt, particularly in the
development branch, so a fix to your problem may have already been developed by
the time you encounter it. Many users' problems can simply be corrected by
updating to the latest version of the code and/or its dependencies. If you have
installed the latest stable release of yt then you should update yt using the
package manager appropriate for your Python installation. See :ref:`updating`.
.. _search-the-documentation:
Search the documentation, FAQ, and mailing lists
------------------------------------------------
The documentation has a lot of the answers to everyday problems. This doesn't
mean you have to read all of the docs top-to-bottom, but you should at least
run a search to see if relevant topics have been answered in the docs. Click
on the search field to the right of this window and enter your text. Another
good place to look for answers in the documentation is our :ref:`faq` page.
OK, so there was no obvious solution to your problem in the documentation.
It is possible that someone else experienced the problem before you did, and
wrote to the mailing list about it. You can easily check the mailing list
archive with the other search field to the right of this window (or you can
use the search field below).
.. raw:: html
<form action="http://www.google.com/cse" id="cse-search-box">
<div>
<input type="hidden" name="cx" value="010428198273461986377:xyfd9ztykqm" />
<input type="hidden" name="ie" value="UTF-8" />
<input type="text" name="q" size="31" />
<input type="submit" name="sa" value="Search" />
</div>
</form>
<script type="text/javascript" src="http://www.google.com/cse/brand?form=cse-search-box&lang=en"></script>
.. _look-at-the-source:
Look at the source code
-----------------------
We've done our best to make the source clean, and it is easily searchable from
your computer.
If you have not done so already (see :ref:`install-from-source`), clone a copy
of the yt git repository and make it the 'active' installation, for example by
installing it from source in editable mode:
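.. code-block:: bash

   $ git clone https://github.com/yt-project/yt
   $ cd yt
   $ python -m pip install -e .

(The exact commands may differ depending on your setup; see
:ref:`install-from-source` for the full instructions.)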
Once inside the yt git repository, you can then search for the class,
function, or keyword which is giving you problems with ``grep -r *``, which will
recursively search throughout the code base. (For a much faster and cleaner
experience, we recommend ``grin`` instead of ``grep -r *``. To install ``grin``
with python, just type ``python -m pip install grin``.)
So let's say that ``SlicePlot`` is giving you problems still, and you want to
look at the source to figure out what is going on.
.. code-block:: bash
$ cd $YT_GIT/yt
$ grep -r SlicePlot * (or $ grin SlicePlot)
This will print a number of locations in the yt source tree where ``SlicePlot``
is mentioned. You can now follow-up on this and open up the files that have
references to ``SlicePlot`` (particularly the one that defines SlicePlot) and
inspect their contents for problems or clarification.
.. _isolate_and_document:
Isolate and document your problem
---------------------------------
As you gear up to take your question to the rest of the community, try to distill
your problem down to the fewest steps needed to reproduce it in a
script. This can help you (and us) to identify the basic problem. Follow
these steps:
* Identify what it is that went wrong, and how you knew it went wrong.
* Put your script, errors, inputs and outputs online:
* ``$ yt pastebin script.py`` - pastes script.py online
* ``$ yt upload_image image.png`` - pastes image online
* ``$ yt upload my_input.tar`` - pastes my_input.tar online
* Identify which version of the code you’re using.
* ``$ yt version`` - provides version information, including changeset hash
It may be that through the mere process of doing this, you end up solving
the problem!
.. _irc:
Go on Slack to ask a question
-----------------------------
If you want a fast, interactive experience, you could try jumping into our Slack
to get your questions answered in a chatroom style environment.
To join our Slack channel you will need to request an invite by going to
https://yt-project.org/development.html, clicking the "Join us @ Slack!" button, and
filling out the form. You will get an invite as soon as an administrator approves
your request.
.. _mailing-list:
Ask the mailing list
--------------------
If you still haven't found a solution, feel free to
write to the mailing list regarding your problems. There are two mailing lists,
`yt-users <https://mail.python.org/archives/list/[email protected]/>`_ and
`yt-dev <https://mail.python.org/archives/list/[email protected]/>`_. The
first should be used for asking for help, suggesting features and so on, and
the latter has more chatter about the way the code is developed and discussions
of changes and feature improvements.
If you email ``yt-users`` asking for help, remember to include the information
about your problem you identified in :ref:`this step <isolate_and_document>`.
When you email the list, providing this information can help the developers
understand what you did, how it went wrong, and any potential fixes or similar
problems they have seen in the past. Without this context, it can be very
difficult to help out!
.. _reporting-a-bug:
Submit a bug report
-------------------
If you have gone through all of the above steps, and you're still encountering
problems, then you have found a bug. To submit a bug report, you can
create one directly through the GitHub `web interface
<https://github.com/yt-project/yt/issues/new>`_. Alternatively, email the
``yt-users`` mailing list and we will construct a new ticket in your stead.
Remember to include the information about your problem you identified in
:ref:`this step <isolate_and_document>`.
Special Issues
--------------
Installation Issues
^^^^^^^^^^^^^^^^^^^
If you are having installation issues and nothing from the
:ref:`installation instructions <installing-yt>` seems to work, you should
*definitely* email the ``yt-users`` email list. You should provide information
about the host, the version of the code you are using, and the output of
``yt_install.log`` from your installation. We are very interested in making
sure that yt installs everywhere!
Customization and Scripting Issues
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have customized yt in some way, or created your own plugins file (as
described in :ref:`plugin-file`), then it may be necessary to supply users
willing to help you (or the mailing list) with your patches to the
source, the plugin file, and perhaps even the datafile on which you're running.
.. _faq:
Frequently Asked Questions
==========================
.. contents::
:depth: 2
:local:
:backlinks: none
Version & Installation
----------------------
.. _determining-version:
How can I tell what version of yt I'm using?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you run into problems with yt and you're writing to the mailing list
or contacting developers on Slack, they will likely want to know what version of
yt you're using. Often, you'll want to know both the yt version
and the last changeset that was committed to the branch you're using.
To reveal this, go to a command line and type:
.. code-block:: bash
$ yt version
The result will look something like this:
.. code-block:: bash
yt module located at:
/Users/mitchell/src/yt-conda/src/yt-git
The current version of yt is:
---
Version = 4.0.dev0
Changeset = 9f947a930ab4
---
This installation CAN be automatically updated.
For more information on this topic, see :ref:`updating`.
.. _yt-3.0-problems:
I upgraded to yt 4.0 but my code no longer works. What do I do?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We've tried to keep the number of backward-incompatible changes to a minimum
with the release of yt-4.0, but because of the wide-reaching changes to how
yt manages data, there may be updates you have to make.
You can see many of the changes in :ref:`yt4differences`, and
in :ref:`transitioning-to-4.0` there are helpful tips on how to modify your scripts to update them.
Code Errors and Failures
------------------------
Python fails saying that it cannot import yt modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is commonly exhibited with an error about not being able to import code
that is part of yt. This is likely because the code that is failing to import
needs to be compiled or recompiled.
This error tends to occur when there are changes in the underlying Cython files
that need to be rebuilt, like after a major code update or when switching
between distant branches.
This is solved by running the install command again. See
:ref:`install-from-source`.
.. _faq-mpi4py:
yt complains that it needs the mpi4py module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For yt to be able to incorporate parallelism on any of its analysis (see
:ref:`parallel-computation`), it needs to be able to use MPI libraries.
This requires the ``mpi4py`` module to be installed in your version of python.
Unfortunately, installation of ``mpi4py`` is *just* tricky enough to elude the
yt batch installer. So if you get an error in yt complaining about mpi4py
like:
.. code-block:: bash
ImportError: No module named mpi4py
then you should install ``mpi4py``. The easiest way to install it is through
the pip interface. At the command line, type:
.. code-block:: bash
$ python -m pip install mpi4py
This finds your default installation of Python (presumably
in the yt source directory) and installs the mpi4py module there. If this
action is successful, you should never have to worry about your aforementioned
problems again. If, on the other hand, this installation fails (as it does on
such machines as NICS Kraken, NASA Pleiades, and more), then you will have to
take matters into your own hands. Usually when it fails, it is due to pip
being unable to find your MPI C/C++ compilers (look at the error message).
If this is the case, you can specify them explicitly as per:
.. code-block:: bash
$ env MPICC=/path/to/MPICC python -m pip install mpi4py
So for example, on Kraken, I switch to the gnu C compilers (because yt
doesn't work with the portland group C compilers), then I discover that
cc is the mpi-enabled C compiler (and it is in my path), so I run:
.. code-block:: bash
$ module swap PrgEnv-pgi PrgEnv-gnu
$ env MPICC=cc python -m pip install mpi4py
And voila! It installs! If this *still* fails for you, then you can
build and install from source and specify the mpi-enabled c and c++
compilers in the mpi.cfg file. See the
`mpi4py installation page <https://mpi4py.readthedocs.io/en/stable/install.html>`_
for details.
Units
-----
.. _conversion-factors:
How do I convert between code units and physical units for my dataset?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Starting with yt-3.0, and continuing to yt-4.0, yt uses an internal symbolic
unit system. In yt-3.0 this was bundled with the main yt codebase, and with
yt-4.0 it is now available as a separate package called `unyt
<https://unyt.readthedocs.org/>`_. Conversion factors are tied up in the
``length_unit``, ``time_unit``, ``mass_unit``, and ``velocity_unit``
attributes, which can be converted to any arbitrary desired physical unit:
.. code-block:: python
print("Length unit: ", ds.length_unit)
print("Time unit: ", ds.time_unit)
print("Mass unit: ", ds.mass_unit)
print("Velocity unit: ", ds.velocity_unit)
print("Length unit: ", ds.length_unit.in_units("code_length"))
print("Time unit: ", ds.time_unit.in_units("code_time"))
print("Mass unit: ", ds.mass_unit.in_units("kg"))
print("Velocity unit: ", ds.velocity_unit.in_units("Mpc/year"))
So to accomplish the example task of converting a scalar variable ``x`` in
code units to kpc in yt-4.0, you can do one of two things. If ``x`` is
already a YTQuantity with units in ``code_length``, you can run:
.. code-block:: python
x.in_units("kpc")
However, if ``x`` is just a numpy array or native python variable without
units, you can convert it to a YTQuantity with units of ``kpc`` by running:
.. code-block:: python
x = x * ds.length_unit.in_units("kpc")
For more information about unit conversion, see :ref:`units`.
How do I make a YTQuantity tied to a specific dataset's units?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to create a variable or array that is tied to a particular dataset
(and its specific conversion factor to code units), use the ``ds.quan`` (for
individual variables) and ``ds.arr`` (for arrays):
.. code-block:: python
import yt
ds = yt.load(filename)
one_Mpc = ds.quan(1, "Mpc")
x_vector = ds.arr([1, 0, 0], "code_length")
You can then naturally exploit the units system:
.. code-block:: python
print("One Mpc in code_units:", one_Mpc.in_units("code_length"))
print("One Mpc in AU:", one_Mpc.in_units("AU"))
print("One Mpc in comoving kpc:", one_Mpc.in_units("kpccm"))
For more information about unit conversion, see :ref:`units`.
.. _accessing-unitless-data:
How do I access the unitless data in a YTQuantity or YTArray?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While there are numerous benefits to having units tied to individual
quantities in yt, they can also produce issues when simply trying to combine
YTQuantities with numpy arrays or native python floats that lack units. A
simple example of this is::
# Create a YTQuantity that is 1 kpc in length and tied to the units of
# dataset ds
>>> x = ds.quan(1, 'kpc')
# Try to add this to some non-dimensional quantity
>>> print(x + 1)
YTUnitOperationError: The addition operator for YTArrays with units (kpc) and (1) is not well defined.
The cleanest solution is to use YTQuantity and YTArray objects for all
of one's computations, but this isn't always feasible. A quick fix for this
is to just grab the unitless data out of a YTQuantity or YTArray object with
the ``value`` and ``v`` attributes, which return a copy, or with the ``d``
attribute, which returns the data itself:
.. code-block:: python
x = ds.quan(1, "kpc")
x_val = x.v
print(x_val)  # array(1.0)

# Adding a non-dimensional quantity to the unitless value now works
print(x_val + 1)  # 2.0
For more information about this functionality with units, see :ref:`units`.
Fields
------
.. _faq-handling-log-vs-linear-space:
How do I modify whether or not yt takes the log of a particular field?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt sets up defaults for many fields that determine whether a field is presented
in log or linear space. To override this behavior, you can modify the
``field_info`` dictionary. For example, if you prefer that ``density`` not be
logged, you could type:
.. code-block:: python
ds = yt.load("my_data")
ds.index
ds.field_info["gas", "density"].take_log = False
From that point forward, data products such as slices, projections, etc., would
be presented in linear space. Note that you have to instantiate ``ds.index`` before
you can access ``ds.field_info``. For more information see the documentation on
:ref:`fields` and :ref:`creating-derived-fields`.
.. _faq-new-field:
I added a new field to my simulation data, can yt see it?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yes! yt identifies all the fields in the simulation's output file
and will add them to its ``field_list`` even if they aren't listed in
:ref:`field-list`. These can then be accessed in the usual manner. For
example, if you have created a field for the potential called
``PotentialField``, you could type:
.. code-block:: python
ds = yt.load("my_data")
ad = ds.all_data()
potential_field = ad["PotentialField"]
The same applies to fields you might derive inside your yt script
via :ref:`creating-derived-fields`. To check what fields are
available, look at the properties ``field_list`` and ``derived_field_list``:
.. code-block:: python
print(ds.field_list)
print(ds.derived_field_list)
or for a more legible version, try:
.. code-block:: python
for field in ds.derived_field_list:
print(field)
.. _faq-add-field-diffs:
What is the difference between ``yt.add_field()`` and ``ds.add_field()``?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The global ``yt.add_field()``
(:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`)
function is for adding a field for every subsequent dataset that is loaded
in a particular python session, whereas ``ds.add_field()``
(:meth:`~yt.data_objects.static_output.Dataset.add_field`) will only add it
to dataset ``ds``.
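For example, here is a minimal sketch (the field name, function, and units are
purely illustrative):

.. code-block:: python

   import yt

   def _double_density(field, data):
       return 2 * data["gas", "density"]

   # Registered for every dataset loaded later in this session
   yt.add_field(
       ("gas", "double_density"),
       function=_double_density,
       sampling_type="local",
       units="g/cm**3",
   )

   ds = yt.load("my_data")

   # Registered only for this particular dataset
   ds.add_field(
       ("gas", "double_density_for_ds"),
       function=_double_density,
       sampling_type="local",
       units="g/cm**3",
   )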
Data Objects
------------
.. _ray-data-ordering:
Why are the values in my Ray object out of order?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the Ray objects
(:class:`~yt.data_objects.selection_data_containers.YTOrthoRay` and
:class:`~yt.data_objects.selection_data_containers.YTRay`) with AMR data
gives non-contiguous cell information in the Ray's data array. The
higher-resolution cells are appended to the end of the array. Unfortunately,
due to how data is loaded by chunks for data containers, there is really no
easy way to fix this internally. However, there is an easy workaround.
One can sort the ``Ray`` array data by the ``t`` field, which is the value of
the parametric variable that goes from 0 at the start of the ray to 1 at the
end. That way the data will always be ordered correctly. As an example you can:
.. code-block:: python
my_ray = ds.ray(...)
ray_sort = np.argsort(my_ray["t"])
density = my_ray["gas", "density"][ray_sort]
There is also a full example in the :ref:`manual-line-plots` section of the
docs.
Developing
----------
.. _making-a-PR:
Someone asked me to make a Pull Request (PR) to yt. How do I do that?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A pull request is the action by which you contribute code to yt. You make
modifications in your local copy of the source code, then *request* that
other yt developers review and accept your changes to the main code base.
For a full description of the steps necessary to successfully contribute
code and issue a pull request (or manage multiple versions of the source code)
please see :ref:`sharing-changes`.
.. _making-an-issue:
Someone asked me to file an issue or a bug report for a bug I found. How?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :ref:`reporting-a-bug` and :ref:`sharing-changes`.
Miscellaneous
-------------
.. _getting-sample-data:
How can I get some sample data for yt?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Many different sample datasets can be found at https://yt-project.org/data/ .
These can be downloaded and unarchived, and each will create its own
directory. It is generally straightforward to load these datasets, but if
you have any questions about loading data from a code with which you are
unfamiliar, please visit :ref:`loading-data`.
To make it easier to load these sample datasets, you can add the parent
directory of your downloaded sample data to your *yt path*.
If you set the option ``test_data_dir`` in the ``[yt]`` section
of ``~/.config/yt/yt.toml``, yt will search this path for them.
This means you can download these datasets to ``/big_drive/data_for_yt``, add
the appropriate item to ``~/.config/yt/yt.toml``, and no matter which directory you are
in when running yt, it will also check in *that* directory.
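For example, you can set this option from the command line with the ``yt
config`` helper (the path below is just the example location used above):

.. code-block:: bash

   $ yt config set yt test_data_dir /big_drive/data_for_yt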
In many cases, these are also available using the ``load_sample`` command,
described in :ref:`loading-sample-data`.
.. _faq-scroll-up:
I can't scroll-up to previous commands inside python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If the up-arrow key does not recall the most recent commands, there is
probably an issue with the readline library. To ensure the yt python
environment can use readline, run the following command:
.. code-block:: bash
$ python -m pip install gnureadline
.. _faq-old-data:
.. _faq-log-level:
How can I change yt's log level?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt's default log level is ``INFO``. However, you may want less voluminous logging,
especially if you are in an IPython notebook or running a long or parallel script.
On the other hand, you may want it to output a lot more, for instance when you can't
figure out exactly what's going wrong and want more detailed debugging information.
The default yt log level can be changed using the :ref:`configuration-file`,
either by setting it in the ``$HOME/.config/yt/yt.toml`` file:
.. code-block:: bash
$ yt config set yt log_level 10 # This sets the log level to "DEBUG"
which would produce debug (as well as info, warning, and error) messages, or at runtime:
.. code-block:: python
yt.set_log_level("error")
This is the same as doing:
.. code-block:: python
yt.set_log_level(40)
which in this case would suppress everything below error messages. For reference,
the numerical values corresponding to different log levels are:
.. csv-table::
:header: Level, Numeric Value
:widths: 10, 10
``CRITICAL``,50
``ERROR``,40
``WARNING``,30
``INFO``,20
``DEBUG``,10
``NOTSET``,0
Can I always load custom data objects, fields, quantities, and colormaps with every dataset?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :ref:`plugin-file` provides a means for always running custom code whenever
yt is loaded up. This custom code can be new data objects, or fields, or
colormaps, which will then be accessible in any future session without having
modified the source code directly. See the description in :ref:`plugin-file`
for more details.
How do I cite yt?
^^^^^^^^^^^^^^^^^
If you use yt in a publication, we'd very much appreciate a citation! You
should feel free to cite the `ApJS paper
<https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T>`_ with the following BibTeX
entry: ::
@ARTICLE{2011ApJS..192....9T,
author = {{Turk}, M.~J. and {Smith}, B.~D. and {Oishi}, J.~S. and {Skory}, S. and
{Skillman}, S.~W. and {Abel}, T. and {Norman}, M.~L.},
title = "{yt: A Multi-code Analysis Toolkit for Astrophysical Simulation Data}",
journal = {The Astrophysical Journal Supplement Series},
archivePrefix = "arXiv",
eprint = {1011.3514},
primaryClass = "astro-ph.IM",
keywords = {cosmology: theory, methods: data analysis, methods: numerical },
year = 2011,
month = jan,
volume = 192,
eid = {9},
pages = {9},
doi = {10.1088/0067-0049/192/1/9},
adsurl = {https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
# Welcome to the yt quickstart!
In this brief tutorial, we'll go over how to load up data, analyze things, inspect your data, and make some visualizations.
Our documentation provides information on a variety of the commands used here, both in narrative form and as recipes for specific functionality in our cookbook. The documentation exists at https://yt-project.org/doc/. If you encounter problems, look for help here: https://yt-project.org/doc/help/index.html.
## Acquiring the datasets for this tutorial
If you are executing these tutorials interactively, you need some sample datasets on which to run the code. You can download these datasets at https://yt-project.org/data/, or you can use the built-in yt sample data loader (using [pooch](https://www.fatiando.org/pooch/latest/api/index.html) under the hood) to automatically download the data for you.
The datasets necessary for each lesson are noted next to the corresponding tutorial, and by default the notebooks use the pooch-based dataset downloader. If you would like to supply your own paths instead, you can choose to do so.
## Using the Automatic Downloader
For the purposes of this tutorial, or whenever you want to use sample data, you can use the `load_sample` command to utilize the pooch auto-downloader. For instance:
```python
import yt

ds = yt.load_sample("IsolatedGalaxy")
```
## Using manual loading
The way you will *most frequently* interact with `yt` is using the standard `load` command. This accepts a path and optional arguments. For instance:
```python
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
```
would load the `IsolatedGalaxy` dataset by supplying the full path to the parameter file.
## What's Next?
The Notebooks are meant to be explored in this order:
1. Introduction (this file!)
2. Data Inspection (IsolatedGalaxy dataset)
3. Simple Visualization (enzo_tiny_cosmology & Enzo_64 datasets)
4. Data Objects and Time Series (IsolatedGalaxy dataset)
5. Derived Fields and Profiles (IsolatedGalaxy dataset)
6. Volume Rendering (IsolatedGalaxy dataset)
# Starting Out and Loading Data
We're going to get started by loading up yt. This next command brings all of the libraries into memory and sets up our environment.
```
import yt
```
Now that we've loaded yt, we can load up some data. Let's load the `IsolatedGalaxy` dataset.
```
ds = yt.load_sample("IsolatedGalaxy")
```
## Fields and Facts
When you call the `load` function, yt tries to do very little -- this is designed to be a fast operation, just setting up some information about the simulation. Now, the first time you access the "index" it will read and load the mesh and then determine where data is placed in the physical domain and on disk. Once it knows that, yt can tell you some statistics about the simulation:
```
ds.print_stats()
```
yt can also tell you the fields it found on disk:
```
ds.field_list
```
And, all of the fields it thinks it knows how to generate:
```
ds.derived_field_list
```
yt can also transparently generate fields. However, we encourage you to examine exactly what yt is doing when it generates those fields. To see, you can ask for the source of a given field.
```
print(ds.field_info["gas", "vorticity_x"].get_source())
```
yt stores information about the domain of the simulation:
```
ds.domain_width
```
yt can also convert this into various units:
```
print(ds.domain_width.in_units("kpc"))
print(ds.domain_width.in_units("au"))
print(ds.domain_width.in_units("mile"))
```
Finally, we can get basic information about the particle types and number of particles in a simulation:
```
print(ds.particle_types)
print(ds.particle_types_raw)
print(ds.particle_type_counts)
```
For this dataset, we see that there are two particle types defined (`io` and `all`), but that only one of these particle types is in `ds.particle_types_raw`. The `ds.particle_types` list contains *all* particle types in the simulation, including ones that are dynamically defined like particle unions. The `ds.particle_types_raw` list includes only particle types that are in the output file we loaded the dataset from.
We can also see that there are a bit more than 1.1 million particles in this simulation. Only particle types in `ds.particle_types_raw` will appear in the `ds.particle_type_counts` dictionary.
# Mesh Structure
If you're using a simulation type that has grids (for instance, here we're using an Enzo simulation) you can examine the structure of the mesh. For the most part, you probably won't have to use this unless you're debugging a simulation or examining in detail what is going on.
```
print(ds.index.grid_left_edge)
```
But, you may have to access information about individual grid objects! Each grid object mediates accessing data from the disk and has a number of attributes that tell you about it. The index (`ds.index` here) has an attribute `grids` which is all of the grid objects.
```
ds.index.grids[1]
g = ds.index.grids[1]
print(g)
```
Grids have dimensions, extents, level, and even a list of Child grids.
```
g.ActiveDimensions
g.LeftEdge, g.RightEdge
g.Level
g.Children
```
## Advanced Grid Inspection
If we want to examine grids only at a given level, we can! Not only that, but we can load data and take a look at various fields.
*This section can be skipped!*
```
gs = ds.index.select_grids(ds.index.max_level)
g2 = gs[0]
print(g2)
print(g2.Parent)
print(g2.get_global_startindex())
g2["density"][:, :, 0]
print((g2.Parent.child_mask == 0).sum() * 8)
print(g2.ActiveDimensions.prod())
for f in ds.field_list:
fv = g[f]
if fv.size == 0:
continue
print(f, fv.min(), fv.max())
```
# Examining Data in Regions
yt provides data object selectors. In subsequent notebooks we'll examine these in more detail, but we can select a sphere of data and perform a number of operations on it. yt makes it easy to operate on fluid fields in an object in *bulk*, but you can also examine individual field values.
This creates a sphere selector with a radius of 10 kpc, centered on the densest point in the simulation.
```
sp = ds.sphere("max", (10, "kpc"))
sp
```
We can calculate a bunch of bulk quantities. Here's that list, but there's a list in the docs, too!
```
list(sp.quantities.keys())
```
Let's look at the total mass. This is how you call a given quantity. yt calls these "Derived Quantities". We'll talk about a few in a later notebook.
```
sp.quantities.total_mass()
```
.. _quickstart:
yt Quickstart
=============
The quickstart is a series of worked examples of how to use much of the
functionality of yt. These are simple, short introductions to give you a taste
of what the code can do and are not meant to be detailed walkthroughs.
There are two ways in which you can go through the quickstart: interactively and
non-interactively. We recommend the interactive method, but if you're pressed
for time, you can non-interactively go through the linked pages below and view the
worked examples.
To execute the quickstart interactively, you have a couple of options: 1) run
the notebooks from your own system or 2) run them from the URL
https://girder.hub.yt/#raft/5b5b4686323d12000122aa8a.
Option 1 requires an existing installation of yt (see
:ref:`installing-yt`), a copy of the yt source (which you may
already have depending on your installation choice), and a download of the
tutorial datasets (about 3 GB in total). If you know you are going to be a yt user
and have the time to download the datasets, option 1 is a good choice. However,
if you're only interested in getting a feel for yt and its capabilities, or you
already have yt but don't want to spend time downloading the data, go ahead to
https://girder.hub.yt/#raft/5b5b4686323d12000122aa8a.
If you're running the tutorial from your own system and you do not already have
the yt repository, the easiest way to get the repository is to clone it using
git:
.. code-block:: bash
git clone https://github.com/yt-project/yt
Now start the IPython notebook from within the repository (we presume you have
yt and `jupyter <https://jupyter.org/>`_ installed):
.. code-block:: bash
cd yt/doc/source/quickstart
yt notebook
This command will give you information about the notebook server and how to
access it. You will basically just pick a password (for security reasons) and then
redirect your web browser to point to the notebook server.
Once you have done so, choose "Introduction" from the list of
notebooks, which includes an introduction and information about how to download
the sample data.
.. warning:: The pre-filled out notebooks are *far* less fun than running them
yourselves! Check out the repo and give it a try.
Here are the notebooks, which have been filled in for inspection:
.. toctree::
:maxdepth: 1
introduction
data_inspection
simple_visualization
data_objects_and_time_series
derived_fields_and_profiles
volume_rendering
.. note::
The notebooks use sample datasets that are available for download at
https://yt-project.org/data. See :ref:`quickstart-introduction` for more
details.
Let us know if you would like to contribute other example notebooks, or have
any suggestions for how these can be improved.
# Simple Visualizations of Data
Just like in our first notebook, we have to load yt and then some data.
```
import yt
```
For this notebook, we'll load up a cosmology dataset.
```
ds = yt.load_sample("enzo_tiny_cosmology")
print("Redshift =", ds.current_redshift)
```
In the terms that yt uses, a projection is a line integral through the domain. This can either be unweighted (in which case a column density is returned) or weighted, in which case an average value is returned. Projections are, like all other data objects in yt, full-fledged data objects that churn through data and present that to you. However, we also provide a simple method of creating Projections and plotting them in a single step. This is called a Plot Window, here specifically known as a `ProjectionPlot`. One thing to note is that in yt, we project all the way through the entire domain at a single time. This means that the first call to projecting can be somewhat time consuming, but panning, zooming and plotting are all quite fast.
yt is designed to make it easy to make nice plots and straightforward to modify those plots directly. The cookbook in the documentation includes detailed examples of this.
```
p = yt.ProjectionPlot(ds, "y", ("gas", "density"))
p.show()
```
The `show` command simply sends the plot to the IPython notebook. You can also call `p.save()` which will save the plot to the file system. This function accepts an argument, which will be prepended to the filename and can be used to name it based on the width or to supply a location.
Now we'll zoom and pan a bit.
```
p.zoom(2.0)
p.pan_rel((0.1, 0.0))
p.zoom(10.0)
p.pan_rel((-0.25, -0.5))
p.zoom(0.1)
```
If we specify multiple fields, each time we call `show` we get multiple plots back. Same for `save`!
```
p = yt.ProjectionPlot(
ds,
"z",
[("gas", "density"), ("gas", "temperature")],
weight_field=("gas", "density"),
)
p.show()
```
We can adjust the colormap on a field-by-field basis.
```
p.set_cmap(("gas", "temperature"), "hot")
```
And, we can re-center the plot on different locations. One possible use of this would be to make a single `ProjectionPlot` which you move around to look at different regions in your simulation, saving at each one.
```
v, c = ds.find_max(("gas", "density"))
p.set_center((c[0], c[1]))
p.zoom(10)
```
Okay, let's load up a bigger simulation (from `Enzo_64` this time) and make a slice plot.
```
ds = yt.load_sample("Enzo_64/DD0043/data0043")
s = yt.SlicePlot(
ds, "z", [("gas", "density"), ("gas", "velocity_magnitude")], center="max"
)
s.set_cmap(("gas", "velocity_magnitude"), "cmyt.pastel")
s.zoom(10.0)
```
We can adjust the logging of various fields:
```
s.set_log(("gas", "velocity_magnitude"), True)
```
yt provides many different annotations for your plots. You can see all of these in the documentation, or if you type `s.annotate_` and press tab, a list will show up here. We'll annotate with velocity arrows.
```
s.annotate_velocity()
```
Contours can also be overlaid:
```
s = yt.SlicePlot(ds, "x", ("gas", "density"), center="max")
s.annotate_contour(("gas", "temperature"))
s.zoom(2.5)
```
Finally, we can save out to the file system.
```
s.save()
```
# Data Objects and Time Series Data
Just like before, we will load up yt. Since we'll be using pyplot to plot some data in this notebook, we additionally tell matplotlib to place plots inline inside the notebook.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import yt
```
## Time Series Data
Unlike before, instead of loading a single dataset, this time we'll load a bunch which we'll examine in sequence. This command creates a `DatasetSeries` object, which can be iterated over (including in parallel, which is outside the scope of this quickstart) and analyzed. There are some other helpful operations it can provide, but we'll stick to the basics here.
Note that you can specify either a list of filenames, or a glob (i.e., asterisk) pattern in this.
```
ts = yt.load("enzo_tiny_cosmology/DD????/DD????")
```
### Simple Time Series
As a simple example of how we can use this functionality, let's find the min and max of the density as a function of time in this simulation. To do this we use the construction `for ds in ts` where `ds` means "Dataset" and `ts` is the "Time Series" we just loaded up. For each dataset, we'll create an object (`ad`) that covers the entire domain. (`all_data` is a shorthand function for this.) We'll then call the `extrema` Derived Quantity, and append the min and max to our extrema outputs. Lastly, we'll turn down yt's logging to only show errors so as to not produce too much logging text as it loads each individual dataset below.
```
yt.set_log_level("error")
rho_ex = []
times = []
for ds in ts:
ad = ds.all_data()
rho_ex.append(ad.quantities.extrema("density"))
times.append(ds.current_time.in_units("Gyr"))
rho_ex = np.array(rho_ex)
```
Now we plot the minimum and the maximum:
```
fig, ax = plt.subplots()
ax.set(
xlabel="Time (Gyr)",
ylabel="Density ($g/cm^3$)",
yscale="log",
ylim=(1e-32, 1e-21),
)
ax.plot(times, rho_ex[:, 0], "-xk", label="Minimum")
ax.plot(times, rho_ex[:, 1], "-xr", label="Maximum")
ax.legend()
```
## Data Objects
Time series data have many applications, but most of them rely on examining the underlying data in some way. Below, we'll see how to use and manipulate data objects.
### Ray Queries
yt provides the ability to examine rays, or lines, through the domain. Note that these are not periodic, unlike most other data objects. We create a ray object and can then examine quantities of it. Rays have the special fields `t` and `dts`: `t` is the value of the parametric coordinate along the ray (running from 0 at the start to 1 at the end) at which the ray enters a given cell, and `dts` is the distance the ray travels through that cell.
To create a ray, we specify the start and end points.
Note that we need to convert these arrays to numpy arrays due to a bug in matplotlib 1.3.1.
```
fig, ax = plt.subplots()
ray = ds.ray([0.1, 0.2, 0.3], [0.9, 0.8, 0.7])
ax.semilogy(np.array(ray["t"]), np.array(ray["density"]))
print(ray["dts"])
print(ray["t"])
print(ray["gas", "x"])
```
### Slice Queries
While slices are often used for visualization, they can be useful for other operations as well. yt regards slices as multi-resolution objects. They are an array of cells that are not all the same size; it only returns the cells at the highest resolution that it intersects. (This is true for all yt data objects.) Slices and projections have the special fields `px`, `py`, `pdx` and `pdy`, which correspond to the coordinates and half-widths in the pixel plane.
```
ds = yt.load_sample("IsolatedGalaxy")
v, c = ds.find_max(("gas", "density"))
sl = ds.slice(2, c[0])
print(sl["index", "x"])
print(sl["index", "z"])
print(sl["pdx"])
print(sl["gas", "density"].shape)
```
If we want to do something interesting with a `Slice`, we can turn it into a `FixedResolutionBuffer`. This object can be queried and will return a 2D array of values.
```
frb = sl.to_frb((50.0, "kpc"), 1024)
print(frb["gas", "density"].shape)
```
yt provides a few functions for writing arrays to disk, particularly in image form. Here we'll write out the log of `density`, and then use IPython to display it back here. Note that for the most part, you will probably want to use a `PlotWindow` for this, but in the case that it is useful you can directly manipulate the data.
```
yt.write_image(np.log10(frb["gas", "density"]), "temp.png")
from IPython.display import Image
Image(filename="temp.png")
```
### Off-Axis Slices
yt provides not only slices, but off-axis slices that are sometimes called "cutting planes." These are specified by (in order) a normal vector and a center. Here we've set the normal vector to `[0.2, 0.3, 0.5]` and the center to be the point of maximum density.
We can then turn these directly into plot windows using `to_pw`. Note that the `to_pw` and `to_frb` methods are available on slices, off-axis slices, and projections, and can be used on any of them.
```
cp = ds.cutting([0.2, 0.3, 0.5], "max")
pw = cp.to_pw(fields=[("gas", "density")])
```
Once we have our plot window from our cutting plane, we can show it here.
```
pw.show()
pw.zoom(10)
```
We can, as noted above, do the same with our slice:
```
pws = sl.to_pw(fields=[("gas", "density")])
pws.show()
print(list(pws.plots.keys()))
```
### Covering Grids
If we want to access a 3D array of data that spans multiple resolutions in our simulation, we can use a covering grid. This will return a 3D array of data, drawing from up to the resolution level specified when creating the covering grid. For example, if you create a covering grid that spans two child grids of a single parent grid, it will fill those zones covered by a zone of a child grid with the data from that child grid. Where it is covered only by the parent grid, the cells from the parent grid will be duplicated (appropriately) to fill the covering grid.
There are two different types of covering grids: unsmoothed and smoothed. Smoothed grids will be filled through a cascading interpolation process; they will be filled at level 0, interpolated to level 1, filled at level 1, interpolated to level 2, filled at level 2, etc. This will help to reduce edge effects. Unsmoothed covering grids will not be interpolated, but rather values will be duplicated multiple times.
Here we create an unsmoothed covering grid at level 2, with the left edge at `[0.0, 0.0, 0.0]` and with dimensions equal to those that would cover the entire domain at level 2. We can then ask for the Density field, which will be a 3D array.
```
cg = ds.covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)
print(cg["density"].shape)
```
In this example, we do exactly the same thing: except we ask for a *smoothed* covering grid, which will reduce edge effects.
```
scg = ds.smoothed_covering_grid(2, [0.0, 0.0, 0.0], ds.domain_dimensions * 2**2)
print(scg["density"].shape)
```
# A Brief Demo of Volume Rendering
This shows a small amount of volume rendering. Really, just enough to get your feet wet!
```
import yt
ds = yt.load_sample("IsolatedGalaxy")
```
To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.
We then add on "layers" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.
Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline.
```
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, "kpc"))
source = sc.sources["source_00"]
tf = yt.ColorTransferFunction((-28, -24))
tf.add_layers(4, w=0.01)
source.set_transfer_function(tf)
sc.show()
```
If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array.
```
sc.show(sigma_clip=4)
```
There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our gaussian layers.
```
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, "kpc"))
source = sc.sources["source_00"]
source.field = "density"
tf = yt.ColorTransferFunction((-28, -25))
tf.add_layers(4, w=0.03)
source.transfer_function = tf
sc.show(sigma_clip=4.0)
```
.. _creating-objects:
Creating Data Objects
=====================
The three-dimensional datatypes in yt follow a fairly simple protocol. The
basic principle is that if you want to define a region in space, that region
must be identifiable from some sort of cut applied against the cells --
typically, in yt, this is done by examining the geometry.
Creating a new data object requires modifications to two different files, one
of which is in Python and the other in Cython. First, a subclass of
:class:`~yt.data_objects.data_containers.YTDataContainer` must be defined;
typically you actually want to subclass one of:
:class:`~yt.data_objects.data_containers.YTSelectionContainer0D`
:class:`~yt.data_objects.data_containers.YTSelectionContainer1D`
:class:`~yt.data_objects.data_containers.YTSelectionContainer2D`
:class:`~yt.data_objects.data_containers.YTSelectionContainer3D`.
The following attributes must be defined:
* ``_type_name`` - this is the short name by which the object type will be
known as. Remember this for later, as we will have to use it when defining
the underlying selector.
* ``_con_args`` - this is the set of arguments passed to the object, and their
names as attributes on the data object.
* ``_container_fields`` - any fields that are generated by the object, rather
than by another derived field in yt.
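To make these attributes concrete, here is a minimal, hypothetical sketch of a
3D data object; the class name, the ``shell`` type name, and the constructor
arguments are illustrative only and do not correspond to an existing yt object:

.. code-block:: python

   from yt.data_objects.data_containers import YTSelectionContainer3D


   class YTShell(YTSelectionContainer3D):
       """A hypothetical spherical-shell selection object (sketch only)."""

       _type_name = "shell"  # yt will pair this with a ``shell_selector``
       _con_args = ("center", "r_inner", "r_outer")  # stored as attributes
       _container_fields = ()  # no fields generated by the container itself

       def __init__(self, center, r_inner, r_outer, ds=None, field_parameters=None):
           super().__init__(center, ds, field_parameters)
           self.r_inner = r_inner
           self.r_outer = r_outer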
The rest of the object can be defined in Cython, in the file
``yt/geometry/selection_routines.pyx``. You must define a subclass of
``SelectorObject``, which will require implementation of the following methods:
* ``fill_mask`` - this takes a grid object and fills a mask of which zones
should be included. It must take into account the child mask of the grid.
* ``select_cell`` - this routine accepts a position and a width, and returns
either zero or one for whether or not that cell is included in the selector.
* ``select_sphere`` - this routine returns zero or one whether a sphere (point
and radius) is included in the selector.
* ``select_point`` - this identifies whether or not a point is included in the
selector. It should be identical to selecting a cell or a sphere with
zero extent.
* ``select_bbox`` - this returns whether or not a bounding box (i.e., grid) is
included in the selector.
* ``_hash_vals`` - this must return some combination of parameters that
semi-uniquely identifies the selector.
Once the object has been defined, it must then be aliased within
``selection_routines.pyx`` as ``typename_selector``. For instance,
``ray_selector`` or ``sphere_selector`` for ``_type_name`` values of ``ray``
and ``sphere``, respectively.
.. _documentation:
Documentation
=============
.. _writing_documentation:
How to Write Documentation
--------------------------
Writing documentation is one of the most important but often overlooked tasks
for increasing yt's impact in the community. It is the way in which the
world will understand how to use our code, so it needs to be done concisely
and understandably. Typically, when a developer submits some piece of code
with new functionality, the developer should also include documentation on how
to use that functionality (as per :ref:`requirements-for-code-submission`).
Depending on the nature of the code addition, this could be a new narrative
docs section describing how the new code works and how to use it, it could
include a recipe in the cookbook section, or it could simply be adding a note
in the relevant docs text somewhere.
The documentation exists in the main code repository for yt in the
``doc`` directory (i.e. ``$YT_GIT/doc/source`` where ``$YT_GIT`` is the path of
the yt git repository). It is organized hierarchically into the main
categories of:
* Visualizing
* Analyzing
* Analysis Modules
* Examining
* Cookbook
* Quickstart
* Developing
* Reference
* FAQ
* Help
You will have to figure out where your new/modified doc fits into this, but
browsing through the existing documentation is a good way to sort that out.
All the source for the documentation is written in
`Sphinx <http://www.sphinx-doc.org/en/master/>`_, which uses ReST for markup. ReST is very
straightforward to write in a text editor, and if you are new to it, we
recommend just using other .rst files in the existing yt documentation as
templates or checking out the
`ReST reference documentation <http://www.sphinx-doc.org/en/master/usage/restructuredtext/>`_.
New cookbook recipes (see :ref:`cookbook`) are very helpful for the community
as they provide simple annotated recipes on how to use specific functionality.
To add one, create a concise Python script which demonstrates some
functionality and pare it down to its minimum. Add some comment lines to
describe what it is that you're doing along the way. Place this ``.py`` file
in the ``source/cookbook/`` directory, and then link to it explicitly in one
of the relevant ``.rst`` files in that directory (e.g. ``complex_plots.rst``,
etc.), and add some description of what the script
actually does. We recommend that you use one of the
`sample data sets <https://yt-project.org/data>`_ in your recipe. When the full
docs are built, each of the cookbook recipes is executed dynamically on
a system which has access to all of the sample datasets. Any output images
generated by your script will then be attached inline in the built documentation
directly following your script.
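For instance, a complete recipe can be as short as the following sketch (the
filename and the dataset choice are just examples):

.. code-block:: python

   # doc/source/cookbook/simple_density_slice.py (hypothetical recipe)
   import yt

   # Load one of the sample datasets from https://yt-project.org/data
   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # Make a slice plot of the gas density and save it; the resulting image
   # is embedded in the built docs directly after the script.
   slc = yt.SlicePlot(ds, "z", ("gas", "density"))
   slc.save()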
After you have made your modifications to the docs, you will want to make sure
that they render the way you expect them to render. For more information on
this, see the section on :ref:`docs_build`. Unless you're contributing cookbook
recipes or notebooks which require a dynamic build, you can probably get away
with just doing a 'quick' docs build.
When you have completed your documentation additions, commit your changes
to your repository and make a pull request in the same way you would contribute
a change to the codebase, as described in the section on :ref:`sharing-changes`.
.. _docs_build:
Building the Documentation
--------------------------
The yt documentation makes heavy use of the Sphinx documentation automation
suite. Sphinx, written in Python, was originally created for the documentation
of the Python project and has many nice capabilities for managing the
documentation of Python code.
While much of the yt documentation is static text, we make heavy use of
cross-referencing with API documentation that is automatically generated at
build time by Sphinx. We also use Sphinx to run code snippets (e.g. the
cookbook and the notebooks) and embed resulting images and example data.
Essential tools for building the docs can be installed alongside yt itself. From
the top level of a local copy, run
.. code-block:: bash
$ python -m pip install -e ".[doc]"
Quick versus Full Documentation Builds
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building the entire set of yt documentation is a laborious task, since you
need to have a large number of packages in order to successfully execute
and render all of the notebooks and yt recipes drawing from every corner
of the yt source. As a quick alternative, one can do a ``quick`` build
of the documentation, which eschews the need for downloading all of these
dependencies, but it only produces the static docs. The static docs do
not include the cookbook outputs and the notebooks, but this is good
enough for most cases of people testing out whether or not their documentation
contributions look OK before submitting them to the yt repository.
If you want to create the full documentation locally, then you'll need
to follow the instructions for building the ``full`` docs, so that you can
dynamically execute and render the cookbook recipes, the notebooks, etc.
Building the Docs (Quick)
^^^^^^^^^^^^^^^^^^^^^^^^^
In order to tell Sphinx not to do all of the dynamic building, you must set the
``$READTHEDOCS`` environment variable to ``True``, for example by running the
following from the command line (bash syntax shown):
.. code-block:: bash
export READTHEDOCS=True
This variable is set for automated builds on the free ReadTheDocs service but
can be used by anyone to force a quick, minimal build.
Now all you need to do is execute Sphinx on the yt doc source. Go to the
documentation directory and build the docs:
.. code-block:: bash
cd $YT_GIT/doc
make html
This will produce an html version of the documentation locally in the
``$YT_GIT/doc/build/html`` directory. You can now go there and open
up ``index.html`` or whatever file you wish in your web browser.
Building the Docs (Full)
^^^^^^^^^^^^^^^^^^^^^^^^
As alluded to earlier, building the full documentation is a bit more involved
than simply building the static documentation.
The full documentation makes heavy use of custom Sphinx extensions to transform
recipes, notebooks, and inline code snippets into Python scripts, IPython_
notebooks, or notebook cells that are executed when the docs are built.
To do this, we use Jupyter's nbconvert module to transform notebooks into
HTML. To simplify versioning of the notebook JSON format, we store notebooks in
an unevaluated state.
To build the full documentation, you will need yt, jupyter, and all dependencies
needed for yt's analysis modules installed. The following dependencies were
used to generate the yt documentation during the release of yt 3.2 in 2015.
* Sphinx_ 1.3.1
* Jupyter 1.0.0
* RunNotebook 0.1
* pandoc_ 1.13.2
* Rockstar halo finder 0.99.6
* SZpack_ 1.1.1
* ffmpeg_ 2.7.1 (compiled with libvpx support)
* Astropy_ 0.4.4
.. _SZpack: http://www.jb.man.ac.uk/~jchluba/Science/SZpack/SZpack.html
.. _Astropy: https://www.astropy.org/
.. _Sphinx: http://www.sphinx-doc.org/en/master/
.. _pandoc: https://pandoc.org/
.. _ffmpeg: http://www.ffmpeg.org/
.. _IPython: https://ipython.org/
You will also need the full yt suite of `yt test data
<https://yt-project.org/data/>`_, including the larger datasets that are not used
in the answer tests.
You will need to ensure that your testing configuration is properly
configured and that all of the yt test data is in the testing directory. See
:ref:`run_answer_testing` for more details on how to set up the testing
configuration.
Now that you have everything set up properly, go to the documentation directory
and build it using Sphinx:
.. code-block:: bash
cd $YT_GIT/doc
make html
If all of the dependencies are installed and all of the test data is in the
testing directory, this should churn away for a while (several hours) and
eventually generate a docs build. We suggest setting
:code:`suppress_stream_logging = True` in your yt configuration (See
:ref:`configuration-file`) to suppress large amounts of debug output from
yt.
To clean the docs build, use :code:`make clean`.
Building the Docs (Hybrid)
^^^^^^^^^^^^^^^^^^^^^^^^^^
It's also possible to create a custom Sphinx build that builds a restricted set
of notebooks or scripts. This can be accomplished by editing the Sphinx
:code:`conf.py` file included in the :code:`source` directory at the top level
of the docs. The extensions included in the build are contained in the
:code:`extensions` list. To disable an extension, simply remove it from the
list. Doing so will raise a warning when Sphinx encounters the directive in the
docs and will prevent Sphinx from evaluating the directive.
As a concrete example, if one wanted to include the :code:`notebook`, and
:code:`notebook-cell` directives, but not the :code:`python-script` or
:code:`autosummary` directives, one would just need to comment out the lines
that append these extensions to the :code:`extensions` list. The resulting docs
build will be significantly quicker since it would avoid executing the lengthy
API autodocumentation as well as a large number of Python script snippets in
the narrative docs.
.. _creating_frontend:
Creating A New Code Frontend
============================
yt is designed to support analysis and visualization of data from
multiple different simulation codes. For a list of codes and the level
of support they enjoy, see :ref:`code-support`.
We'd like to support a broad range of codes, both Adaptive Mesh
Refinement (AMR)-based and otherwise. To add support for a new code, a
few things need to be put into place. These necessary structures can
be classified into a couple categories:
* Data meaning: This is the set of parameters that convert the data into
physically relevant units; things like spatial and mass conversions, time
units, and so on.
* Data localization: These are structures that help make a "first pass" at data
loading. Essentially, we need to be able to make a first pass at guessing
where data in a given physical region would be located on disk. With AMR
data, this is typically quite easy: the grid patches are the "first pass" at
localization.
* Data reading: This is the set of routines that actually perform a read of
either all data in a region or a subset of that data.
Note that a frontend can be built as an external package. This is useful for
developing and maintaining a maturing frontend at your own pace. For technical details, see
:ref:`frontends-as-extensions`.
If you are interested in adding a new code, be sure to drop us a line on
`yt-dev <https://mail.python.org/archives/list/[email protected]/>`_!
Bootstrapping a new frontend
----------------------------
To get started:

* make a new directory in ``yt/frontends`` with the name of your code and add the name
  into ``yt/frontends/api.py:_frontends`` (in alphabetical order), as sketched below.
* copy the contents of the ``yt/frontends/_skeleton`` directory into your new directory,
  and replace every occurrence of ``Skeleton`` with your frontend's name (preserving
  case). This adds a lot of boilerplate for the required classes and methods.
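
For a hypothetical frontend named ``mycode``, the registration step might look
roughly like the following (the neighboring entries shown are purely
illustrative; keep the real list's exact contents and ordering):

.. code-block:: python

    # yt/frontends/api.py (sketch): register the new frontend alphabetically
    _frontends = [
        # ...
        "moab",
        "mycode",  # <-- new entry for the hypothetical frontend
        "nc4_cm1",
        # ...
    ]
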
Data Meaning Structures
-----------------------
You will need to create a subclass of ``Dataset`` in the ``data_structures.py``
file. This subclass will need to handle conversion between the different physical
units and the code units (typically in the ``_set_code_unit_attributes()``
method), read in metadata describing the overall data on disk (via the
``_parse_parameter_file()`` method), and provide a ``classmethod``
called ``_is_valid()`` that lets the ``yt.load`` method help identify an
input file as belonging to *this* particular ``Dataset`` subclass
(see :ref:`data-format-detection`).
For the most part, the examples of
``yt.frontends.boxlib.data_structures.OrionDataset`` and
``yt.frontends.enzo.data_structures.EnzoDataset`` should be followed,
but ``yt.frontends.chombo.data_structures.ChomboDataset``, as a
slightly newer addition, can also be used as an instructive example.
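
As a heavily abridged sketch of such a subclass (for a hypothetical ``MyCode``
frontend; the attribute values and the ``.mycode`` extension are made up, and
the ``_skeleton`` frontend contains the complete set of stubs):

.. code-block:: python

    import numpy as np

    from yt.data_objects.static_output import Dataset


    class MyCodeDataset(Dataset):
        # _field_info_class should point at the FieldInfoContainer subclass
        # defined in fields.py (see below).

        def _set_code_unit_attributes(self):
            # Map code units onto physical units.
            self.length_unit = self.quan(1.0, "cm")
            self.mass_unit = self.quan(1.0, "g")
            self.time_unit = self.quan(1.0, "s")

        def _parse_parameter_file(self):
            # Fill in global metadata describing the data on disk.
            self.domain_left_edge = np.zeros(3)
            self.domain_right_edge = np.ones(3)
            self.domain_dimensions = np.array([64, 64, 64], dtype="int32")
            self.dimensionality = 3
            self.current_time = 0.0

        @classmethod
        def _is_valid(cls, filename, *args, **kwargs):
            # Cheap heuristic; see :ref:`data-format-detection` for guidance.
            return str(filename).endswith(".mycode")
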
A new set of fields must be added in the file ``fields.py`` in your
new directory. For the most part this means subclassing
``FieldInfoContainer`` and adding the necessary fields specific to
your code. Here is a snippet from the base BoxLib field container:
.. code-block:: python
from yt.fields.field_info_container import FieldInfoContainer
class BoxlibFieldInfo(FieldInfoContainer):
known_other_fields = (
("density", (rho_units, ["density"], None)),
("eden", (eden_units, ["energy_density"], None)),
("xmom", (mom_units, ["momentum_x"], None)),
("ymom", (mom_units, ["momentum_y"], None)),
("zmom", (mom_units, ["momentum_z"], None)),
("temperature", ("K", ["temperature"], None)),
("Temp", ("K", ["temperature"], None)),
("x_velocity", ("cm/s", ["velocity_x"], None)),
("y_velocity", ("cm/s", ["velocity_y"], None)),
("z_velocity", ("cm/s", ["velocity_z"], None)),
("xvel", ("cm/s", ["velocity_x"], None)),
("yvel", ("cm/s", ["velocity_y"], None)),
("zvel", ("cm/s", ["velocity_z"], None)),
)
known_particle_fields = (
("particle_mass", ("code_mass", [], None)),
("particle_position_x", ("code_length", [], None)),
("particle_position_y", ("code_length", [], None)),
("particle_position_z", ("code_length", [], None)),
("particle_momentum_x", (mom_units, [], None)),
("particle_momentum_y", (mom_units, [], None)),
("particle_momentum_z", (mom_units, [], None)),
("particle_angmomen_x", ("code_length**2/code_time", [], None)),
("particle_angmomen_y", ("code_length**2/code_time", [], None)),
("particle_angmomen_z", ("code_length**2/code_time", [], None)),
("particle_id", ("", ["particle_index"], None)),
("particle_mdot", ("code_mass/code_time", [], None)),
)
The tuples ``known_other_fields`` and ``known_particle_fields`` contain
entries, each of which is a tuple of the form ``("name", ("units", ["fields", "to",
"alias"], "display_name"))``. ``"name"`` is the name of a field stored on-disk
in the dataset. ``"units"`` corresponds to the units of that field. The list
``["fields", "to", "alias"]`` allows you to specify additional aliases to this
particular field; for example, if your on-disk field for the x-direction
velocity were ``"x-direction-velocity"``, maybe you'd prefer to alias to the
more terse name of ``"xvel"``. By convention in yt we use a set of "universal"
fields. Currently these fields are enumerated in the stream frontend. If you
take a look at ``yt/frontends/stream/fields.py``, you will see a listing of
fields following the format described above with field names that will be
recognized by the rest of the built-in yt field system. In the example from the
boxlib frontend above, many of the fields in the ``known_other_fields`` tuple
follow this convention. If you would like your frontend to mesh nicely with the
rest of yt's built-in fields, it is probably a good idea to alias your
frontend's field names to the yt "universal" field names. Finally,
``"display_name"`` is an optional parameter that can be used to specify how you
want the field to be displayed on a plot; this can be LaTeX code, for example
the density field could have a display name of ``r"\rho"``. Omitting the
``"display_name"`` will result in using a capitalized version of the ``"name"``.
.. _data-format-detection:
How to make ``yt.load`` magically detect your data format?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``yt.load`` takes in a file or directory name, as well as any number of
positional and keyword arguments. On call, ``yt.load`` attempts to determine
what ``Dataset`` subclasses are compatible with the set of arguments it
received. It does so by passing its arguments to the ``_is_valid`` method of
*every* ``Dataset`` subclass. These methods are intended to be heuristics that quickly
determine whether the arguments (in particular the file/directory) can be loaded
with their respective classes. In some cases, more than one class might be
detected as valid. If all candidate classes are siblings, ``yt.load`` will
select the most specialized one.
When writing a new frontend, it is important to make the ``_is_valid`` method as
specific as possible, otherwise one might constrain the design space for
future frontends or, in some cases, prevent them from leveraging ``yt.load``'s
magic.
Performance is also critical, since the method is called on every invocation of
``yt.load``, even for unrelated data formats.
Note that ``yt.load`` knows about every ``Dataset`` subclass because they are
automatically registered on creation.
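
As an illustration, a fast and reasonably specific ``_is_valid`` for the
hypothetical ``MyCode`` frontend sketched above might check a short magic
header rather than parsing the whole file (the ``.mycode`` extension and the
``MYCODE01`` magic bytes are made up):

.. code-block:: python

    # Continuing the MyCodeDataset sketch from the previous section.
    class MyCodeDataset(Dataset):
        @classmethod
        def _is_valid(cls, filename, *args, **kwargs):
            if not str(filename).endswith(".mycode"):
                return False
            try:
                with open(filename, "rb") as fh:
                    # Read only a few bytes; this runs on every yt.load call.
                    return fh.read(8) == b"MYCODE01"
            except OSError:
                return False
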
.. _bfields-frontend:
Creating Aliases for Magnetic Fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting up access to the magnetic fields in your dataset requires special
handling, because in different unit systems magnetic fields have different
dimensions (see :ref:`bfields` for an explanation). If your dataset includes
magnetic fields, you should include them in ``known_other_fields``, but do
not set up aliases for them--instead use the special handling function
:func:`~yt.fields.magnetic_field.setup_magnetic_field_aliases`. It takes
as arguments the ``FieldInfoContainer`` instance, the field type of the
frontend, and the list of magnetic fields from the frontend. Here is an
example of how this is implemented in the FLASH frontend:
.. code-block:: python
class FLASHFieldInfo(FieldInfoContainer):
known_other_fields = (
("magx", (b_units, [], "B_x")), # Note there is no alias here
("magy", (b_units, [], "B_y")),
("magz", (b_units, [], "B_z")),
...,
)
def setup_fluid_fields(self):
from yt.fields.magnetic_field import setup_magnetic_field_aliases
...
setup_magnetic_field_aliases(self, "flash", ["mag%s" % ax for ax in "xyz"])
This function should always be imported and called from within the
``setup_fluid_fields`` method of the ``FieldInfoContainer``. If this
function is used, converting between magnetic fields in different
unit systems will be handled automatically.
Data Localization Structures
----------------------------
These functions and classes let yt know about how the arrangement of
data on disk corresponds to the physical arrangement of data within
the simulation. yt has datastructures for handling both
patch-based and octree-based AMR codes. The terms 'patch-based'
and 'octree-based' are used somewhat loosely here. For example,
traditionally, the FLASH code used the paramesh AMR library, which is
based on a tree structure, but the FLASH frontend in yt utilizes yt's
patch-based datastructures. It is up to the frontend developer to
determine which yt datastructures best match the datastructures of
their simulation code.
Both approaches -- patch-based and octree-based -- have a concept of a
*Hierarchy* or *Index* (used somewhat interchangeably in the code) of
datastructures and something that describes the elements that make up
the Hierarchy or Index. For patch-based codes, the Index is a
collection of ``AMRGridPatch`` objects that describe a block of zones.
For octree-based codes, the Index contains datastructures that hold
information about the individual octs, namely an ``OctreeContainer``.
Hierarchy or Index
^^^^^^^^^^^^^^^^^^
To set up data localization, a ``GridIndex`` subclass for patch-based
codes or an ``OctreeIndex`` subclass for octree-based codes must be
added in the file ``data_structures.py``. Examples of these different
types of ``Index`` can be found in, for example, the
``yt.frontends.chombo.data_structures.ChomboHierarchy`` for patch-based
codes and ``yt.frontends.ramses.data_structures.RAMSESIndex`` for
octree-based codes.
For the most part, the ``GridIndex`` subclass must override (at a
minimum) the following methods (an abridged sketch follows the list):
* ``_detect_output_fields()``: ``self.field_list`` must be populated as a list
of strings corresponding to "native" fields in the data files.
* ``_count_grids()``: this must set ``self.num_grids`` to be the total number
of grids (equivalently ``AMRGridPatch``'es) in the simulation.
* ``_parse_index()``: this must fill in ``grid_left_edge``,
``grid_right_edge``, ``grid_particle_count``, ``grid_dimensions`` and
``grid_levels`` with the appropriate information. Each of these variables
is an array, with an entry for each of the ``self.num_grids`` grids.
Additionally, ``grids`` must be an array of ``AMRGridPatch`` objects that
already know their IDs.
* ``_populate_grid_objects()``: this initializes the grids by calling
``_prepare_grid()`` and ``_setup_dx()`` on all of them. Additionally, it
should set up ``Children`` and ``Parent`` lists on each grid object.
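
Here is an abridged sketch of these overrides for the hypothetical ``MyCode``
frontend (the hard-coded values stand in for whatever your format's metadata
actually provides):

.. code-block:: python

    import numpy as np

    from yt.geometry.grid_geometry_handler import GridIndex


    class MyCodeHierarchy(GridIndex):
        # The ``grid`` class attribute should point at your AMRGridPatch
        # subclass (see the "Grids" section below).

        def _detect_output_fields(self):
            # Native ("on-disk") field names present in the output.
            self.field_list = [("mycode", "density"), ("mycode", "temperature")]

        def _count_grids(self):
            # In practice, read this from the output's metadata.
            self.num_grids = 4

        def _parse_index(self):
            # One row per grid in each of the pre-allocated arrays.
            for i in range(self.num_grids):
                self.grid_left_edge[i, :] = 0.0
                self.grid_right_edge[i, :] = 1.0
                self.grid_dimensions[i, :] = 32
                self.grid_particle_count[i] = 0
                self.grid_levels[i] = 0
            self.grids = np.array(
                [self.grid(i, self, level=0) for i in range(self.num_grids)],
                dtype="object",
            )

        def _populate_grid_objects(self):
            for g in self.grids:
                g._prepare_grid()
                g._setup_dx()
                g.Parent = None
                g.Children = []
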
The ``OctreeIndex`` has somewhat analogous methods, but often with
different names; both ``OctreeIndex`` and ``GridIndex`` are subclasses
of the ``Index`` class. In particular, for the ``OctreeIndex``, the
method ``_initialize_oct_handler()`` sets up much of the oct
metadata that is analogous to the grid metadata created in the
``GridIndex`` methods ``_count_grids()``, ``_parse_index()``, and
``_populate_grid_objects()``.
Grids
^^^^^
.. note:: This section only applies to the approach using yt's patch-based
datastructures. For the octree-based approach, one does not create
a grid object, but rather an ``OctreeSubset``, which has methods
for filling out portions of the octree structure. Again, see the
code in ``yt.frontends.ramses.data_structures`` for an example of
the octree approach.
A new grid object, subclassing ``AMRGridPatch``, will also have to be added in
``data_structures.py``. For the most part, this may be all
that is needed:
.. code-block:: python
class ChomboGrid(AMRGridPatch):
_id_offset = 0
__slots__ = ["_level_id"]
def __init__(self, id, index, level=-1):
AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index)
self.Parent = None
self.Children = []
self.Level = level
Even one of the more complex grid objects,
``yt.frontends.boxlib.BoxlibGrid``, is still relatively simple.
Data Reading Functions
----------------------
In ``io.py``, there are a number of IO handlers that handle the
mechanisms by which data is read off disk. To implement a new data
reader, you must subclass ``BaseIOHandler``. The various frontend IO
handlers are stored in an IO registry -- essentially a dictionary that
uses the name of the frontend as a key, and the specific IO handler as
a value. It is important, therefore, to set the ``dataset_type``
attribute of your subclass, which is what is used as the key in the IO
registry. For example:
.. code-block:: python
class IOHandlerBoxlib(BaseIOHandler):
_dataset_type = "boxlib_native"
...
At a minimum, one should also override the following methods (an abridged
sketch follows the list):
* ``_read_fluid_selection()``: this receives a collection of data "chunks", a
selector describing which "chunks" you are concerned with, a list of fields,
and the size of the data to read. It should create and return a dictionary
whose keys are the fields, and whose values are numpy arrays containing the
data. The data should actually be read via the ``_read_chunk_data()``
method.
* ``_read_chunk_data()``: this method receives a "chunk" of data along with a
list of fields we want to read. It loops over all the grid objects within
the "chunk" of data and reads from disk the specific fields, returning a
dictionary whose keys are the fields and whose values are numpy arrays of
the data.
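
Roughly modeled on the BoxLib handler, a sketch of these two methods for the
hypothetical ``MyCode`` frontend could look like the following
(``read_field_from_disk`` is a made-up stand-in for your format's low-level
reader):

.. code-block:: python

    import numpy as np

    from yt.utilities.io_handler import BaseIOHandler


    def read_field_from_disk(grid, field_name):
        # Stand-in: replace with your format's actual low-level reader.
        return np.zeros(grid.ActiveDimensions, dtype="float64")


    class IOHandlerMyCode(BaseIOHandler):
        _dataset_type = "mycode_native"

        def _read_fluid_selection(self, chunks, selector, fields, size):
            rv = {field: np.empty(size, dtype="float64") for field in fields}
            offset = 0
            for chunk in chunks:
                data = self._read_chunk_data(chunk, fields)
                for grid in chunk.objs:
                    nd = 0
                    for field in fields:
                        # select() applies the selector and copies the selected
                        # cells into the output array starting at ``offset``.
                        nd = grid.select(selector, data[grid.id][field], rv[field], offset)
                    offset += nd
            return rv

        def _read_chunk_data(self, chunk, fields):
            data = {}
            for grid in chunk.objs:
                data[grid.id] = {
                    field: read_field_from_disk(grid, field[1]) for field in fields
                }
            return data
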
If your dataset has particle information, you'll want to override the
``_read_particle_coords()`` and ``_read_particle_fields()`` methods as
well. Each code is going to read data from disk in a different
fashion, but the ``yt.frontends.boxlib.io.IOHandlerBoxlib`` is a
decent place to start.
And that just about covers it. Please feel free to email
`yt-users <https://mail.python.org/archives/list/[email protected]/>`_ or
`yt-dev <https://mail.python.org/archives/list/[email protected]/>`_ with
any questions, or to let us know you're thinking about adding a new code to yt.
How to add extra dependencies?
-------------------------------
.. note:: This section covers the technical details of how optional runtime
dependencies are implemented and used in yt.
If your frontend has specific or complicated dependencies other than yt's,
we advise writing your frontend as an extension package (see
:ref:`frontends-as-extensions`).
It is required that a specific target be added to ``pyproject.toml`` to define a list
of additional requirements (even if empty); see :ref:`install-additional`.
At runtime, extra third party dependencies should be loaded lazily, meaning their import
needs to be delayed until actually needed. This is achieved by importing a wrapper from
``yt.utilities.on_demand_imports``, instead of the actual package, like so:
.. code-block:: python
from yt.utilities.on_demand_imports import _mypackage as mypackage
Such import statements can live at the top of a module without generating overhead or errors
in case the actual package isn't installed.
If the extra third party dependency is new, a new import wrapper must also be added. To do so,
follow the example of the existing wrappers in ``yt.utilities.on_demand_imports``.
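
The general idea behind these wrappers is simply deferred importing. A generic
sketch of the pattern (not a copy of yt's actual implementation, which differs
in detail) is:

.. code-block:: python

    import importlib


    class _OnDemandPackage:
        # Defer the import until an attribute of the package is first accessed.
        def __init__(self, name):
            self._name = name
            self._module = None

        def __getattr__(self, attr):
            if self._module is None:
                self._module = importlib.import_module(self._name)
            return getattr(self._module, attr)


    _mypackage = _OnDemandPackage("mypackage")
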
.. _extensions:
Extension Packages
==================
.. note:: For some additional discussion, see `YTEP-0029
<https://ytep.readthedocs.io/en/master/YTEPs/YTEP-0029.html>`_, where
this plan was designed.
As of version 3.3 of yt, we have put into place new methods for easing the
process of developing "extensions" to yt. Extensions might be analysis
packages, visualization tools, or other software projects that use yt as a base
engine but that are versioned, developed and distributed separately. This
brings with it the advantage of retaining control over the versioning,
contribution guidelines, scope, etc., while also providing a mechanism for
disseminating information about it, and potentially a method of interacting
with other extensions.
We have created a few pieces of infrastructure for developing extensions,
making them discoverable, and distributing them to collaborators.
If you have a module you would like to retain some external control over, or
that you don't feel would fit into yt, we encourage you to build it as an
extension module and distribute and version it independently.
Hooks for Extensions
--------------------
Starting with version 3.3 of yt, any package named with the prefix ``yt_`` is
importable from the namespace ``yt.extensions``. For instance, the
``yt_interaction`` package ( https://bitbucket.org/data-exp-lab/yt_interaction
) is importable as ``yt.extensions.interaction``.
In subsequent versions, we plan to include in yt a catalog of known extensions
and where to find them; this will put discoverability directly into the code
base.
.. _frontends-as-extensions:
Frontends as extensions
-----------------------
Starting with version 4.2 of yt, any externally installed package that exports a
:class:`~yt.data_objects.static_output.Dataset` subclass as an entry point in the
``yt.frontends`` namespace in ``setup.py`` or ``pyproject.toml`` will be
automatically loaded and immediately available in :func:`~yt.loaders.load`.
To add an entrypoint in an external project's ``setup.py``:
.. code-block:: python
setup(
# ...,
entry_points={
"yt.frontends": [
"myFrontend = my_frontend.api.MyFrontendDataset",
"myOtherFrontend = my_frontend.api.MyOtherFrontendDataset",
]
}
)
or ``pyproject.toml``:
.. code-block:: toml
[project.entry-points."yt.frontends"]
myFrontend = "my_frontend.api:MyFrontendDataset"
myOtherFrontend = "my_frontend.api:MyOtherFrontendDataset"
Extension Template
------------------
A template for starting an extension module (or converting an existing set of
code to an extension module) can be found at
https://github.com/yt-project/yt_extension_template .
To get started, download a zipfile of the template (
https://codeload.github.com/yt-project/yt_extension_template/zip/master ) and
follow the directions in ``README.md`` to modify the metadata.
Distributing Extensions
-----------------------
We encourage you to version on your choice of hosting platform (Bitbucket,
GitHub, etc), and to distribute your extension widely. We are presently
working on deploying a method for listing extension modules on the yt webpage.
How to Do a Release
-------------------
Periodically, the yt development community issues new releases. Since yt follows
`semantic versioning <https://semver.org/>`_, the type of release can be read off
from the version number used. Version numbers should follow the scheme
``MAJOR.MINOR.PATCH``. There are three kinds of possible releases:
* Bugfix releases
These releases are regularly scheduled and will optimally happen approximately
once a month. These releases should contain only fixes for bugs discovered in
earlier releases and should not contain new features or API changes. Bugfix
releases should increment the ``PATCH`` version number. Bugfix releases should
*not* be generated by merging from the ``main`` branch; instead, bugfix pull
requests should be manually backported. Version ``3.2.2`` is a bugfix release.
* Minor releases
These releases happen when new features are deemed ready to be merged into the
``stable`` branch and should not happen on a regular schedule. Minor releases
can also include fixes for bugs if the fix is determined to be too invasive
for a bugfix release. Minor releases should *not* include
backwards-incompatible changes and should not change APIs. If an API change
is deemed to be necessary, the old API should continue to function but might
trigger deprecation warnings. Minor releases should happen by merging the
``main`` branch into the ``stable`` branch. Minor releases should increment the
``MINOR`` version number and reset the ``PATCH`` version number to zero.
Version ``3.3.0`` is a minor release.
* Major releases
These releases happen when the development community decides to make major
backwards-incompatible changes. In principle a major version release could
include arbitrary changes to the library. Major version releases should only
happen after extensive discussion and vetting among the developer and user
community. Like minor releases, a major release should happen by merging the
``main`` branch into the ``stable`` branch. Major releases should increment the
``MAJOR`` version number and reset the ``MINOR`` and ``PATCH`` version numbers
to zero. Version ``4.0.0``, for example, was a major release.
The job of doing a release differs depending on the kind of release. Below, we
describe the necessary steps for each kind of release in detail.
Doing a Bugfix Release
~~~~~~~~~~~~~~~~~~~~~~
As described above, bugfix releases are regularly scheduled updates for minor
releases to ensure fixes for bugs make their way out to users in a timely
manner. Since bugfix releases should not include new features, we do not issue
bugfix releases by simply merging from the development ``main`` branch into
the ``stable`` branch. Instead, we manually cherry-pick bugfixes from the
``main`` branch onto the ``stable`` branch.
You may find the ``pr_backport.py`` script located in the ``scripts`` folder at
the root of the repository to be helpful. This script uses the github API to
find the list of pull requests made since the last release and prompts the user
to backport each pull request individually. Note that the backport process is
fully manual. The easiest way to do it is to download the diff for the pull
request (the URL for the diff is printed out by the backport script) and then
use ``git apply`` to apply the patch for the pull request to a local copy of yt
with the ``stable`` branch checked out.
Once you've finished backporting, push your work to GitHub. Once you've pushed to
your fork, you will be able to issue a pull request containing the backported
fixes just like any other yt pull request.
Doing a Minor or Major Release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is much simpler than a bugfix release. All that needs to happen is the
``main`` branch must get merged into the ``stable`` branch, and any conflicts
that happen must be resolved, almost always in favor of the state of the code on
the ``main`` branch.
Incrementing Version Numbers and Tagging a Release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before creating the tag for the release, you must increment the version numbers
that are hard-coded in a few files in the yt source so that version metadata
for the code is generated correctly. This includes things like ``yt.__version__``
and the version that gets read by the Python Package Index (PyPI) infrastructure.
The paths relative to the root of the repository for the three files that need
to be edited are:
* ``doc/source/conf.py``
The ``version`` and ``release`` variables need to be updated.
* ``setup.py``
The ``VERSION`` variable needs to be updated.
* ``yt/__init__.py``
The ``__version__`` variable must be updated.
Once these files have been updated, commit these updates. This is the commit we
will tag for the release.
To actually create the tag, issue the following command from the ``stable``
branch:
.. code-block:: bash
git tag <tag-name>
Where ``<tag-name>`` follows the project's naming scheme for tags
(e.g. ``yt-3.2.1``). Once you are done, you will need to push the
tag to github::
git push origin --tags
This assumes that you have configured the remote ``origin`` to point at the main
yt git repository. If you are doing a minor or major version number release, you
will also need to switch back to the development branch and update the
development version numbers in the same files.
Uploading to yt-project.org
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before uploading the release to the Python Package Index (pypi.org) we will
first upload the package to yt-project.org. This facilitates building binary
wheels for pypi and binary conda packages on conda-forge before doing the
"official" release. This also ensures that there isn't a period of time when
users do ``pip install yt`` and end up downloading the source distribution
instead of one of the binary wheels.
To create the source distribution, issue the following command in the root of
the yt repository::
$ python setup.py sdist
This will generate a tarball in a ``dist/`` directory located in the root of the
repository.
Access to yt-project.org is mediated via SSH login. Please contact one of the
current yt developers for access to the webserver running yt-project.org if you
do not already have it. You will need a copy of your SSH public key so that your
key can be added to the list of authorized keys. Once you login, use
e.g. ``scp`` to upload a copy of the source distribution tarball to
https://yt-project.org/sdist, like so::
$ scp dist/yt-3.5.1.tar.gz [email protected]:yt-project.org/sdist
You may find it helpful to set up an ssh config for dickenson to make this
command a bit easier to execute.
Updating conda-forge and building wheels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we finish the release, we need to generate new binary builds by updating
yt's conda-forge feedstock and the yt-wheels repository.
Wheels and ``multibuild``
+++++++++++++++++++++++++
Binary wheels for yt are managed via the ``multibuild`` project. For yt the main
point of access is at https://github.com/yt-project/yt-wheels. Take a look at
the pull requests from the previous few releases to get an idea of what to do,
but briefly you will need to update the multibuild and yt submodules to their
latest state and then commit the changes to the submodules::
$ cd multibuild
$ git pull origin devel
$ cd ../yt
$ git pull origin stable
$ cd ..
$ git commit -am "updating multibuild and yt submodules"
Next you will need to update the ``.travis.yml`` and ``appveyor.yaml`` files to
build the latest tag of yt. You may also need to update elsewhere in the file if
yt's dependencies changed or if yt dropped or added support for a Python
version. To generate new wheels you need to push the changes to GitHub. A good
process to follow is to first submit a pull request to test the changes and make sure
the wheels can be built. Once they pass, you can merge the changes into ``main``
and wait for the wheel files to be uploaded to
https://anaconda.org/multibuild-wheels-staging/yt/files
(note that the wheels will not be uploaded until the changes have been
merged into ``main``). Once the wheels are uploaded, download the
wheel files for the release and copy them to the ``dist`` folder in the yt
repository so that they are sitting next to the source distribution
we created earlier. Here's a
one-liner to download all of the wheels for the yt 3.6.1 release::
$ wget -r -nd -A 'yt-3.6.1-*whl' https://anaconda.org/multibuild-wheels-staging/yt/files
Uploading to PyPI
+++++++++++++++++
To actually upload the release to the Python Package Index, you just need to
issue the following command:
.. code-block:: bash
twine upload dist/*
Please ensure that both the source distribution and binary wheels are present in
the ``dist`` folder before doing this. Directions on generating binary wheels
are described in the section immediately preceding this one.
You will be prompted for your PyPI credentials and then the package should
upload. Note that for this to complete successfully, you will need an account on
PyPI and that account will need to be registered as an "owner" or "maintainer"
of the yt package.
Right now the following people have access to upload packages: Matt Turk,
Britton Smith, Nathan Goldbaum, John ZuHone, Kacper Kowalik, and Madicken Munk.
The yt package source distribution should be uploaded along with compiled
binary wheel packages for various platforms that we support.
``conda-forge``
+++++++++++++++
Conda-forge packages for yt are managed via the yt feedstock, located at
https://github.com/conda-forge/yt-feedstock. When a release is pushed to PyPI a
bot should detect a new version and issue a PR to the feedstock with the new
version automatically. When this feedstock is updated, make sure that the
SHA256 hash of the tarball matches the one you uploaded to dickenson and that
the version number matches the one that is being released.
Should you need to update the feedstock manually, you will
need to update the ``meta.yaml`` file located in the ``recipe`` folder in the
root of the feedstock repository. Most likely you will only need to update the
version number and the SHA256 hash of the tarball. If yt's dependencies change
you may also need to update the recipe. Once you have updated the recipe,
propose a pull request on github and merge it once all builds pass.
Announcing
~~~~~~~~~~
After the release is uploaded to `PyPI <https://pypi.org/project/yt/#files>`_ and
`conda-forge <https://anaconda.org/conda-forge/yt>`_,
you should send out an announcement
e-mail to the yt mailing lists as well as other possibly interested mailing
lists for all but bugfix releases.
.. _external-analysis-tools:
Using yt with External Analysis Tools
=====================================
yt can be used as a ``glue`` code between simulation data and other methods of
analyzing data. Its facilities for understanding units, disk IO and data
selection set it up ideally to use other mechanisms for analyzing, processing
and visualizing data.
Calling External Python Codes
-----------------------------
Calling external Python codes is very straightforward. For instance, if you had a
Python code that accepted a set of structured meshes and then post-processed
them to apply radiative feedback, one could imagine calling it directly:
.. code-block:: python
import radtrans
import yt
ds = yt.load("DD0010/DD0010")
rt_grids = []
for grid in ds.index.grids:
rt_grid = radtrans.RegularBox(
grid.LeftEdge,
grid.RightEdge,
grid["density"],
grid["temperature"],
grid["metallicity"],
)
rt_grids.append(rt_grid)
grid.clear_data()
radtrans.process(rt_grids)
Or if you wanted to run a population synthesis module on a set of star
particles (and you could fit them all into memory) it might look something like
this:
.. code-block:: python
import pop_synthesis
import yt
ds = yt.load("DD0010/DD0010")
ad = ds.all_data()
star_masses = ad["StarMassMsun"]
star_metals = ad["StarMetals"]
pop_synthesis.CalculateSED(star_masses, star_metals)
If you have a code that's written in Python that you are having trouble getting
data into from yt, please feel encouraged to email the users list and we'll
help out.
Calling Non-Python External Codes
---------------------------------
Independent of its ability to process, analyze and visualize data, yt can also
serve as a mechanism for reading and selecting simulation data. In this way,
it can be used to supply data to an external analysis routine written in
Fortran, C or C++. This document describes how to supply that data, using the
example of a simple code that calculates the best axes that describe a
distribution of particles as a starting point. (The underlying method is left
as an exercise for the reader; we're only currently interested in the function
specification and structs.)
If you have written a piece of code that performs some analysis function, and
you would like to include it in the base distribution of yt, we would be happy
to do so; drop us a line or see :ref:`contributing-code` for more information.
To accomplish the process of linking Python with our external code, we will be
using a language called `Cython <https://cython.org/>`_, which is
essentially a superset of Python that compiles down to C. It is aware of NumPy
arrays, and it is able to massage data between the interpreted language Python
and C, Fortran or C++. It will be much easier to utilize routines and analysis
code that have been separated into subroutines that accept data structures, so
we will assume that our halo axis calculator accepts a set of structs.
Our Example Code
++++++++++++++++
Here is the ``axes.h`` file in our imaginary code, which we will then wrap:
.. code-block:: c
typedef struct structParticleCollection {
long npart;
double *xpos;
double *ypos;
double *zpos;
} ParticleCollection;
void calculate_axes(ParticleCollection *part,
double *ax1, double *ax2, double *ax3);
There are several components to this analysis routine which we will have to
wrap.
#. We have to wrap the creation of an instance of ``ParticleCollection``.
#. We have to transform a set of NumPy arrays into pointers to doubles.
#. We have to create a set of doubles into which ``calculate_axes`` will be
placing the values of the axes it calculates.
#. We have to turn the return values back into Python objects.
Each of these steps can be handled in turn, and we'll be doing it using Cython
as our interface code.
Setting Up and Building Our Wrapper
+++++++++++++++++++++++++++++++++++
To get started, we'll need to create two files:
.. code-block:: bash
axes_calculator.pyx
axes_calculator_setup.py
These can go anywhere, but it might be useful to put them in their own
directory. The contents of ``axes_calculator.pyx`` will be left for the next
section, but we will need to put some boilerplate code into
``axes_calculator_setup.py``. As a quick sidenote, you should call these
whatever is most appropriate for the external code you are wrapping;
``axes_calculator`` is probably not the best bet.
Here's a rough outline of what should go in ``axes_calculator_setup.py``:
.. code-block:: python
NAME = "axes_calculator"
EXT_SOURCES = []
EXT_LIBRARIES = ["axes_utils", "m"]
EXT_LIBRARY_DIRS = ["/home/rincewind/axes_calculator/"]
EXT_INCLUDE_DIRS = []
DEFINES = []
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [
Extension(
NAME,
[NAME + ".pyx"] + EXT_SOURCES,
libraries=EXT_LIBRARIES,
library_dirs=EXT_LIBRARY_DIRS,
include_dirs=EXT_INCLUDE_DIRS,
define_macros=DEFINES,
)
]
setup(name=NAME, cmdclass={"build_ext": build_ext}, ext_modules=ext_modules)
The only variables you should have to change in this are the first six, and
possibly only the first one. We'll go through these variables one at a time.
``NAME``
This is the name of our source file, minus the ``.pyx``. We're also
mandating that it be the name of the module we import. You're free to
modify this.
``EXT_SOURCES``
Any additional sources can be listed here. For instance, if you are only
linking against a single ``.c`` file, you could list it here -- if our axes
calculator were fully contained within a file called ``calculate_my_axes.c``
we could link against it using this variable, and then we would not have to
specify any libraries. This is usually the simplest way to do things, and in
fact, yt makes use of this itself for things like HEALPix and interpolation
functions.
``EXT_LIBRARIES``
Any libraries that will need to be linked against (like ``m``!) should be
listed here. Note that these are the name of the library minus the leading
``lib`` and without the trailing ``.so``. So ``libm.so`` would become ``m``
and ``libluggage.so`` would become ``luggage``.
``EXT_LIBRARY_DIRS``
If the libraries listed in ``EXT_LIBRARIES`` reside in some other directory
or directories, those directories should be listed here. For instance,
``["/usr/local/lib", "/home/rincewind/luggage/"]`` .
``EXT_INCLUDE_DIRS``
If any header files have been included that live in external directories,
those directories should be included here.
``DEFINES``
Any define macros that should be passed to the C compiler should be listed
here; if they just need to be defined, then they should be specified to be
defined as "None." For instance, if you wanted to pass ``-DTWOFLOWER``, you
would set this to equal: ``[("TWOFLOWER", None)]``.
To build our extension, we would run:
.. code-block:: bash
$ python axes_calculator_setup.py build_ext -i
Note that since we don't yet have an ``axes_calculator.pyx``, this will fail.
But once we have it, it ought to run.
Writing and Calling our Wrapper
+++++++++++++++++++++++++++++++
Now we begin the tricky part, of writing our wrapper code. We've already
figured out how to build it, which is halfway to being able to test that it
works, and we now need to start writing Cython code.
For a more detailed introduction to Cython, see the Cython documentation at
http://docs.cython.org/en/latest/ . We'll cover a few of the basics of wrapping code
here, however.
To start out with, we need to open up and edit our file,
``axes_calculator.pyx``. Open this in your favorite version of vi (mine is
vim) and we will get started by declaring the struct we need to pass in. But
first, we need to include some header information:
.. code-block:: cython
    import numpy as np
    cimport numpy as np
    cimport cython
    from libc.stdlib cimport malloc, free
These lines simply import and "Cython import" some common routines. For more
information about what is already available, see the Cython documentation. For
now, we need to start translating our data.
To do so, we tell Cython both where the struct should come from, and then we
describe the struct itself. One fun thing to note is that if you don't need to
set or access all the values in a struct, and it just needs to be passed around
opaquely, you don't have to include them in the definition. For an example of
this, see the ``png_writer.pyx`` file in the yt repository. Here's the syntax
for pulling in (from a file called ``axes_calculator.h``) a struct like the one
described above:
.. code-block:: cython
cdef extern from "axes_calculator.h":
ctypedef struct ParticleCollection:
long npart
double *xpos
double *ypos
double *zpos
So far, pretty easy! We've basically just translated the declaration from the
``.h`` file. Now that we have done so, any other Cython code can create and
manipulate these ``ParticleCollection`` structs -- which we'll do shortly.
Next up, we need to declare the function we're going to call, which looks
nearly exactly like the one in the ``.h`` file. (One common problem is that
Cython doesn't know what ``const`` means, so just remove it wherever you see
it.) Declare it like so:
.. code-block:: cython
void calculate_axes(ParticleCollection *part,
double *ax1, double *ax2, double *ax3)
Note that this is indented one level, to indicate that it, too, comes from
``axes_calculator.h``. The next step is to create a function that accepts
arrays and converts them to the format the struct likes. We declare our
function just like we would a normal Python function, using ``def``. You can
also use ``cdef`` if you only want to call a function from within Cython. We
want to call it from Python, too, so we just use ``def``. Note that we don't
here specify types for the various arguments. In a moment we'll refine this to
have better argument types.
.. code-block:: cython
def examine_axes(xpos, ypos, zpos):
cdef double ax1[3], ax2[3], ax3[3]
cdef ParticleCollection particles
cdef int i
particles.npart = len(xpos)
particles.xpos = <double *> malloc(particles.npart * sizeof(double))
particles.ypos = <double *> malloc(particles.npart * sizeof(double))
particles.zpos = <double *> malloc(particles.npart * sizeof(double))
for i in range(particles.npart):
particles.xpos[i] = xpos[i]
particles.ypos[i] = ypos[i]
particles.zpos[i] = zpos[i]
calculate_axes(&particles, ax1, ax2, ax3)
free(particles.xpos)
free(particles.ypos)
free(particles.zpos)
return ( (ax1[0], ax1[1], ax1[2]),
(ax2[0], ax2[1], ax2[2]),
(ax3[0], ax3[1], ax3[2]) )
This does the rest. Note that we've woven in C-type declarations (ax1, ax2,
ax3) and Python access to the variables fed in. This function will probably be
quite slow -- because it doesn't know anything about the variables xpos, ypos,
zpos, it won't be able to speed up access to them. Now we will see what we can
do by declaring them to be of array-type before we start handling them at all.
We can do that by annotating in the function argument list. But first, let's
test that it works. From the directory in which you placed these files, run:
.. code-block:: bash
    $ python axes_calculator_setup.py build_ext -i
Now, create a sample file that feeds in the particles:
.. code-block:: python
import axes_calculator
axes_calculator.examine_axes(xpos, ypos, zpos)
Most of the time in that function is spent converting the data. So now we
can go back and try again, rewriting our converter function so that it knows
it is being fed NumPy arrays:
.. code-block:: cython
def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
np.ndarray[np.float64_t, ndim=1] ypos,
np.ndarray[np.float64_t, ndim=1] zpos):
cdef double ax1[3], ax2[3], ax3[3]
cdef ParticleCollection particles
cdef int i
particles.npart = len(xpos)
particles.xpos = <double *> malloc(particles.npart * sizeof(double))
particles.ypos = <double *> malloc(particles.npart * sizeof(double))
particles.zpos = <double *> malloc(particles.npart * sizeof(double))
for i in range(particles.npart):
particles.xpos[i] = xpos[i]
particles.ypos[i] = ypos[i]
particles.zpos[i] = zpos[i]
calculate_axes(&particles, ax1, ax2, ax3)
free(particles.xpos)
free(particles.ypos)
free(particles.zpos)
return ( (ax1[0], ax1[1], ax1[2]),
(ax2[0], ax2[1], ax2[2]),
(ax3[0], ax3[1], ax3[2]) )
This should be substantially faster, assuming you feed it arrays.
Now, there's one last thing we can try. If we know our function won't modify
our arrays, and they are C-Contiguous, we can simply grab pointers to the data:
.. code-block:: cython
def examine_axes(np.ndarray[np.float64_t, ndim=1] xpos,
np.ndarray[np.float64_t, ndim=1] ypos,
np.ndarray[np.float64_t, ndim=1] zpos):
cdef double ax1[3], ax2[3], ax3[3]
cdef ParticleCollection particles
cdef int i
particles.npart = len(xpos)
particles.xpos = <double *> xpos.data
particles.ypos = <double *> ypos.data
particles.zpos = <double *> zpos.data
calculate_axes(&particles, ax1, ax2, ax3)
return ( (ax1[0], ax1[1], ax1[2]),
(ax2[0], ax2[1], ax2[2]),
(ax3[0], ax3[1], ax3[2]) )
But note! This will break or do weird things if you feed it arrays that are
non-contiguous.
At this point, you should have a mostly working piece of wrapper code. And it
was pretty easy! Let us know if you run into any problems, or if you are
interested in distributing your code with yt.
A complete set of files is available with this documentation. These are
slightly different, so that the whole thing will simply compile, but they
provide a useful example.
* `axes.c <../_static/axes.c>`_
* `axes.h <../_static/axes.h>`_
* `axes_calculator.pyx <../_static/axes_calculator.pyx>`_
* `axes_calculator_setup.py <../_static/axes_calculator_setup.txt>`_
Exporting Data from yt
----------------------
yt is installed alongside h5py. If you need to export your data from yt, to
share it with people or to use it inside another code, h5py is a good way to do
so. You can write out complete datasets with just a few commands. You have to
import h5py, and then save data out into a file.
.. code-block:: python
import h5py
f = h5py.File("some_file.h5", mode="w")
f.create_dataset("/data", data=some_data)
This will create ``some_file.h5`` if necessary and add a new dataset
(``/data``) to it. Writing out in ASCII should be relatively straightforward.
For instance:
.. code-block:: python
    f = open("my_file.txt", "w")
    for halo in halos:
        x, y, z = halo.center_of_mass()
        f.write("%0.2f %0.2f %0.2f\n" % (x, y, z))
    f.close()
This example could be extended to work with any data object's fields, as well.
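
Combining the two approaches, one might dump a few fields from a sphere to an
HDF5 file (the dataset name and field choices here are arbitrary):

.. code-block:: python

    import h5py

    import yt

    ds = yt.load("DD0010/DD0010")
    sp = ds.sphere("max", (100.0, "kpc"))

    with h5py.File("sphere_fields.h5", mode="w") as f:
        for field in ("density", "temperature"):
            # The arrays carry yt units; here we simply store the raw values.
            f.create_dataset(f"/{field}", data=sp["gas", field])
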
Developing in yt
================
yt is an open-source project with a community of contributing scientists.
While you can use the existing framework within yt to help answer questions
about your own datasets, yt thrives by the addition of new functionality by
users just like yourself. Maybe you have a new data format that you would like
supported, a new derived quantity that you feel should be included, or a new
way of visualizing data--please add them to the code base! We are eager to
help you make it happen.
There are many ways to get involved with yt -- participating in the mailing
list, helping people out in IRC, providing suggestions for the documentation,
and contributing code!
.. toctree::
:maxdepth: 2
developing
building_the_docs
testing
extensions
debugdrive
releasing
creating_datatypes
creating_derived_fields
creating_frontend
external_analysis
deprecating_features
How to deprecate a feature
--------------------------
Since the 4.0.0 release, deprecation happens on a per-release basis.
A feature can be marked as deprecated using
``yt._maintenance.deprecation.issue_deprecation_warning``, which takes a warning
message and two version numbers, indicating the earliest release deprecating the feature
and the one in which it will be removed completely.
The message should indicate a viable alternative to replace the deprecated feature at
the user level.
``since`` and ``removal`` are required [#]_ keyword-only arguments so as to enforce
readability of the source code.
Here's an example call:

.. code-block:: python

    def old_function(*args, **kwargs):
        from yt._maintenance.deprecation import issue_deprecation_warning

        issue_deprecation_warning(
            "`old_function` is deprecated, use `replacement_function` instead.",
            since="4.0.0",
            removal="4.1.0",
        )
        ...
If a whole function or class is marked as deprecated, it should be removed from
``doc/source/reference/api/api.rst``.
.. [#] ``since`` is not required yet as of yt 4.0.0 because existing warnings predate its introduction.
Deprecating Derived Fields
--------------------------
Occasionally, one may want to deprecate a derived field in yt, normally
because naming conventions for fields have changed, or simply because a
field has outlived its usefulness. There are two ways to mark fields as
deprecated in yt.
The first way is if you simply want to mark a specific derived field as
deprecated. In that case, you call
:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_deprecated_field`:
.. code-block:: python
def _cylindrical_radial_absolute(field, data):
"""This field is deprecated and will be removed in a future version"""
return np.abs(data[ftype, f"{basename}_cylindrical_radius"])
registry.add_deprecated_field(
(ftype, f"cylindrical_radial_{basename}_absolute"),
sampling_type="local",
function=_cylindrical_radial_absolute,
since="4.0.0",
removal="4.1.0",
units=field_units,
validators=[ValidateParameter("normal")],
)
Note that the signature for
:meth:`~yt.fields.field_info_container.FieldInfoContainer.add_deprecated_field`
is the same as :meth:`~yt.fields.field_info_container.FieldInfoContainer.add_field`,
with the exception of the ``since`` and ``removal`` arguments which indicate in
what version the field was deprecated and in what version it will be removed.
The effect is to add a warning to the logger when the field is first used:
.. code-block:: python
import yt
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
sp = ds.sphere("c", (100.0, "kpc"))
print(sp["gas", "cylindrical_radial_velocity_absolute"])
.. code-block:: pycon
yt : [WARNING ] 2021-03-09 16:30:47,460 The Derived Field
('gas', 'cylindrical_radial_velocity_absolute') is deprecated
as of yt v4.0.0 and will be removed in yt v4.1.0
The second way to deprecate a derived field is to take an existing field
definition and change its name. In order to mark the original name as deprecated,
use the :meth:`~yt.fields.field_info_container.FieldInfoContainer.alias` method
and pass the ``since`` and ``removal`` arguments (see above) as a tuple in the
``deprecate`` keyword argument:
.. code-block:: python
registry.alias(
(ftype, "kinetic_energy"),
(ftype, "kinetic_energy_density"),
deprecate=("4.0.0", "4.1.0"),
)
Note that the old field name which is to be deprecated goes first, and the new,
replacement field name goes second. In this case, the log message reports to
the user what field they should use:
.. code-block:: python
print(sp["gas", "kinetic_energy"])
.. code-block:: pycon
yt : [WARNING ] 2021-03-09 16:29:12,911 The Derived Field
('gas', 'kinetic_energy') is deprecated as of yt v4.0.0 and will be removed
in yt v4.1.0 Use ('gas', 'kinetic_energy_density') instead.
In most cases, the ``since`` and ``removal`` arguments should have a delta of
one minor release, and that should be the minimum value. However, the developer
is free to use their judgment about whether or not the delta should be multiple
minor releases if the field has a long provenance.
.. _debug-drive:
Debugging yt
============
There are several different convenience functions that allow you to control yt
in perhaps unexpected and unorthodox manners. These will allow you to conduct
in-depth debugging of processes that may be running in parallel on multiple
processors, as well as providing a mechanism of signalling to yt that you need
more information about a running process. Additionally, yt has a built-in
mechanism for optional reporting of errors to a central server. All of these
allow for more rapid development and debugging of any problems you might
encounter.
Additionally, yt is able to leverage existing developments in the IPython
community for parallel, interactive analysis. This allows you to initialize
multiple yt processes through ``mpirun`` and interact with all of them from a
single, unified interactive prompt. This enables and facilitates parallel
analysis without sacrificing interactivity and flexibility.
.. _pastebin:
Pastebin
--------
A pastebin is a website where you can easily copy source code and error
messages to share with yt developers or your collaborators. At
http://paste.yt-project.org/ a pastebin is available for placing scripts. With
yt the script ``yt_lodgeit.py`` is distributed and wrapped with
the ``pastebin`` and ``pastebin_grab`` commands, which allow for commandline
uploading and downloading of pasted snippets. To upload a script you
would supply it to the command:
.. code-block:: bash
$ yt pastebin some_script.py
The URL will be returned. If you'd like it to be marked 'private' and not show
up in the list of pasted snippets, supply the argument ``--private``. All
snippets are given either numbers or hashes. To download a pasted snippet, you
would use the ``pastebin_grab`` option:
.. code-block:: bash
$ yt pastebin_grab 1768
The snippet will be output to the window, so output redirection can be used to
store it in a file.
Use the Python Debugger
-----------------------
yt is almost entirely composed of python code, so it makes sense to use
the `python debugger`_ as your first stop in trying to debug it.
.. _python debugger: https://docs.python.org/3/library/pdb.html
Signaling yt to Do Something
----------------------------
During startup, yt inserts handlers for two operating system-level signals.
These provide two diagnostic methods for interacting with a running process.
Signalling the python process that is running your script with these signals
will induce the requested behavior.
SIGUSR1
This will cause the python code to print a stack trace, showing exactly
where in the function stack it is currently executing.
SIGUSR2
This will cause the python code to insert an IPython session wherever it
currently is, with all local variables in the local namespace. It should
allow you to change the state variables.
If your yt-running process has PID 5829, you can signal it to print a
traceback with:
.. code-block:: bash
$ kill -SIGUSR1 5829
Note, however, that if the code is currently inside a C function, the signal
will not be handled, and the stacktrace will not be printed, until it returns
from that function.
.. _remote-debugging:
Remote and Disconnected Debugging
---------------------------------
If you are running a parallel job that fails, often it can be difficult to do a
post-mortem analysis to determine what went wrong. To facilitate this, yt
has implemented an `XML-RPC <https://en.wikipedia.org/wiki/XML-RPC>`_ interface
to the Python debugger (``pdb``) event loop.
Running with the ``--rpdb`` command will cause any uncaught exception during
execution to spawn this interface, which will sit and wait for commands,
exposing the full Python debugger. Additionally, a frontend to this is
provided through the yt command. So if you run the command:
.. code-block:: bash
$ mpirun -np 4 python some_script.py --parallel --rpdb
and it reaches an error or an exception, it will launch the debugger.
Additionally, instructions will be printed for connecting to the debugger.
Each of the four processes will be accessible via:
.. code-block:: bash
$ yt rpdb 0
where ``0`` here indicates the process 0.
For security reasons, this will only work on local processes; to connect on a
cluster, you will have to execute the command ``yt rpdb`` on the node on which
that process was launched.
.. _creating-derived-fields:
Creating Derived Fields
=======================
One of the more powerful means of extending yt is through the usage of derived
fields. These are fields that describe a value at each cell in a simulation.
Defining a New Field
--------------------
Once a new field has been conceived of, the best way to create it is to
construct a function that performs an array operation -- operating on a
collection of data, neutral to its size, shape, and type.
A simple example of this is the pressure field, which demonstrates the ease of
this approach.
.. code-block:: python
import yt
def _pressure(field, data):
return (
(data.ds.gamma - 1.0)
* data["gas", "density"]
* data["gas", "specific_thermal_energy"]
)
Note that we do a couple different things here. We access the ``gamma``
parameter from the dataset, we access the ``density`` field and we access
the ``specific_thermal_energy`` field. ``specific_thermal_energy`` is, in
fact, another derived field! We don't do any loops, we don't do any
type-checking, we can simply multiply the three items together.
In this example, the ``density`` field will return data with units of
``g/cm**3`` and the ``specific_thermal_energy`` field will return data units of
``erg/g``, so the result will automatically have units of pressure,
``erg/cm**3``. This assumes the unit system is set to the default, which is
CGS: if a different unit system is selected, the result will be in the same
dimensions of pressure but different units. See :ref:`units` for more
information.
Once we've defined our function, we need to notify yt that the field is
available. The :func:`add_field` function is the means of doing this; it has a
number of fairly specific parameters that can be passed in, but here we'll only
look at the most basic ones needed for a simple scalar baryon field.
.. note::
There are two different :func:`add_field` functions. For the differences,
see :ref:`faq-add-field-diffs`.
.. code-block:: python
yt.add_field(
name=("gas", "pressure"),
function=_pressure,
sampling_type="local",
units="dyne/cm**2",
)
We feed it the name of the field, the name of the function, the sampling type,
and the units. The ``sampling_type`` keyword determines which elements are
used to make the field (i.e., grid cell or particles) and controls how volume
is calculated. It can be set to "cell" for grid/mesh fields, "particle" for
particle and SPH fields, or "local" to use the primary format of the loaded
dataset. In most cases, "local" is sufficient, but "cell" and "particle"
can be used to specify the source for datasets that have both grids and
particles. In a dataset with both grids and particles, using "cell" will
ensure a field is created with a value for every grid cell, while using
"particle" will result in a field with a value for every particle.
The units parameter is a "raw" string, in the format that yt
uses in its :ref:`symbolic units implementation <units>` (e.g., employing only
unit names, numbers, and mathematical operators in the string, and using
``"**"`` for exponentiation). For cosmological datasets and fields, see
:ref:`cosmological-units <cosmological-units>`. We suggest that you name the function that creates
a derived field with the intended field name prefixed by a single underscore,
as in the ``_pressure`` example above.
Field definitions return array data with units. If the field function returns
data in a dimensionally equivalent unit (e.g. a ``"dyne"`` versus a ``"N"``), the
field data will be converted to the units specified in ``add_field`` before
being returned in a data object selection. If the field function returns data
with dimensions that are incompatible with units specified in ``add_field``,
you will see an error. To clear this error, you must ensure that your field
function returns data in the correct units. Often, this means applying units to
a dimensionless float or array.
If your field definition requires physical constants, rather than defining them
as bare floats you can import them from ``yt.units`` to get predefined versions
with the correct units attached. If you know
the units your data is supposed to have ahead of time, you can also import unit
symbols like ``g`` or ``cm`` from the ``yt.units`` namespace and multiply the
return value of your field function by the appropriate combination of unit
symbols for your field's units. You can also convert floats or NumPy arrays into
:class:`~yt.units.yt_array.YTArray` or :class:`~yt.units.yt_array.YTQuantity`
instances by making use of the
:func:`~yt.data_objects.static_output.Dataset.arr` and
:func:`~yt.data_objects.static_output.Dataset.quan` convenience functions.
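Here is a rough sketch of these approaches. The field functions and names below
are purely illustrative, and it is assumed that the ``boltzmann_constant``
symbol is exposed in the ``yt.units`` namespace as described above and that
``("gas", "number_density")`` is defined for the dataset.

.. code-block:: python

    import numpy as np

    from yt.units import boltzmann_constant, cm


    def _thermal_pressure(field, data):
        # the constant carries units, so the result has pressure units (erg/cm**3)
        return (
            data["gas", "number_density"]
            * boltzmann_constant
            * data["gas", "temperature"]
        )


    def _unit_length_scale(field, data):
        # attach units to a plain NumPy array with a unit symbol or with ds.arr()
        ones = np.ones_like(data["gas", "density"].d)
        return ones * cm  # equivalently: data.ds.arr(ones, "cm")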
Lastly, if you do not know the units of your field ahead of time, you can
specify ``units='auto'`` in the call to ``add_field`` for your field. This will
automatically determine the appropriate units based on the units of the data
returned by the field function. This is also a good way to let your derived
fields be automatically converted to the units of the unit system in your
dataset.
If ``units='auto'`` is set, it is also required to set the ``dimensions`` keyword
argument so that error-checking can be done on the derived field to make sure that
the dimensionality of the returned array and the field are the same:
.. code-block:: python
import yt
from yt.units import dimensions
def _pressure(field, data):
return (
(data.ds.gamma - 1.0)
* data["gas", "density"]
* data["gas", "specific_thermal_energy"]
)
yt.add_field(
("gas", "pressure"),
function=_pressure,
sampling_type="local",
units="auto",
dimensions=dimensions.pressure,
)
If ``dimensions`` is not set, an error will be thrown. The ``dimensions`` keyword
can be a SymPy ``symbol`` object imported from ``yt.units.dimensions``, a compound
dimension of these, or a string corresponding to one of these objects.
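For example, the ``units="auto"`` registration above could equivalently pass the
dimensions as a string. This is only a sketch reusing the same ``_pressure``
function defined earlier.

.. code-block:: python

    yt.add_field(
        ("gas", "pressure"),
        function=_pressure,
        sampling_type="local",
        units="auto",
        dimensions="pressure",  # string form, equivalent to dimensions.pressure
    )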
:func:`add_field` can be invoked in two other ways. The first is by the
function decorator :func:`derived_field`. The following code is equivalent to
the previous example:
.. code-block:: python
from yt import derived_field
@derived_field(name="pressure", sampling_type="cell", units="dyne/cm**2")
def _pressure(field, data):
return (
(data.ds.gamma - 1.0)
* data["gas", "density"]
* data["gas", "specific_thermal_energy"]
)
The :func:`derived_field` decorator takes the same arguments as
:func:`add_field`, and is often a more convenient shorthand in cases where
you want to quickly set up a new field.
Defining derived fields in the above fashion must be done before a dataset is
loaded, in order for the dataset to recognize them. If you want to set up a
derived field after you have loaded a dataset, or if you only want to set up
a derived field for a particular dataset, there is an
:func:`~yt.data_objects.static_output.Dataset.add_field` method that hangs off
dataset objects. The calling syntax is the same:
.. code-block:: python
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
ds.add_field(
("gas", "pressure"),
function=_pressure,
sampling_type="cell",
units="dyne/cm**2",
)
If you specify fields in this way, you can take advantage of the dataset's unit
system to define the units for you, so that the units will be returned in the
units of that system:
.. code-block:: python
ds.add_field(
("gas", "pressure"),
function=_pressure,
sampling_type="cell",
units=ds.unit_system["pressure"],
)
Since the :class:`yt.units.unit_systems.UnitSystem` object returns a :class:`yt.units.unit_object.Unit` object when
queried, you're not limited to specifying units in terms of those already available. You can specify units for fields
using basic arithmetic if necessary:
.. code-block:: python
ds.add_field(
("gas", "my_acceleration"),
function=_my_acceleration,
sampling_type="cell",
units=ds.unit_system["length"] / ds.unit_system["time"] ** 2,
)
If you find yourself using the same custom-defined fields over and over, you should put them in your plugins file as
described in :ref:`plugin-file`.
A More Complicated Example
--------------------------
But what if we want to do something a bit more fancy? Here's an example of getting
parameters from the data object and using those to define the field;
specifically, here we obtain the ``center`` and ``bulk_velocity`` parameters
and use those to define a field for radial velocity (there is already
a ``radial_velocity`` field in yt, but we create this one here just as a
transparent and simple example).
.. code-block:: python
import numpy as np
import yt
from yt.fields.api import ValidateParameter
def _my_radial_velocity(field, data):
if data.has_field_parameter("bulk_velocity"):
bv = data.get_field_parameter("bulk_velocity").in_units("cm/s")
else:
bv = data.ds.arr(np.zeros(3), "cm/s")
xv = data["gas", "velocity_x"] - bv[0]
yv = data["gas", "velocity_y"] - bv[1]
zv = data["gas", "velocity_z"] - bv[2]
center = data.get_field_parameter("center")
x_hat = data["gas", "x"] - center[0]
y_hat = data["gas", "y"] - center[1]
z_hat = data["gas", "z"] - center[2]
r = np.sqrt(x_hat * x_hat + y_hat * y_hat + z_hat * z_hat)
x_hat /= r
y_hat /= r
z_hat /= r
return xv * x_hat + yv * y_hat + zv * z_hat
yt.add_field(
("gas", "my_radial_velocity"),
function=_my_radial_velocity,
sampling_type="cell",
units="cm/s",
take_log=False,
validators=[ValidateParameter(["center", "bulk_velocity"])],
)
Note that we have added a few optional arguments to ``yt.add_field``: we specify
that we do not wish to display this field as logged, and that we require both the
``bulk_velocity`` and ``center`` field parameters to be present in a given data
object we wish to calculate this for. The latter is done through the parameter
*validators*, which accepts a list of
:class:`~yt.fields.derived_field.FieldValidator` objects. These objects define
the way in which the field is generated, and when it is able to be created. In
this case, we mandate that parameters ``center`` and ``bulk_velocity`` are set
before creating the field. These are set via
:meth:`~yt.data_objects.data_containers.set_field_parameter`, which can be
called on any object that has fields:
.. code-block:: python
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
sp = ds.sphere("max", (200.0, "kpc"))
sp.set_field_parameter("bulk_velocity", yt.YTArray([-100.0, 200.0, 300.0], "km/s"))
In this case, we already know what the ``center`` of the sphere is, so we do
not set it. Also, note that ``center`` and ``bulk_velocity`` need to be
:class:`~yt.units.yt_array.YTArray` objects with units.
If you are writing a derived field whose behavior changes depending on the
value of a field parameter, you can make yt test that the field handles all
possible values of that parameter using a special form of the
``ValidateParameter`` field validator. In
particular, ``ValidateParameter`` supports an optional second argument, which
takes a dictionary mapping from parameter names to parameter values that
you would like yt to test. This is useful when a field will select different
fields to access based on the value of a field parameter. This option allows you
to force yt to select *all* needed dependent fields for your derived field
definition at field detection time. This can avoid errors related to missing fields.
For example, let's write a field that depends on a field parameter named ``'axis'``:
.. code-block:: python
def my_axis_field(field, data):
axis = data.get_field_parameter("axis")
if axis == 0:
return data["gas", "velocity_x"]
elif axis == 1:
return data["gas", "velocity_y"]
elif axis == 2:
return data["gas", "velocity_z"]
else:
raise ValueError
ds.add_field(
"my_axis_field",
function=my_axis_field,
units="cm/s",
validators=[ValidateParameter("axis", {"axis": [0, 1, 2]})],
)
In this example, we've told yt's field system that the data object we are
querying ``my_axis_field`` must have the ``axis`` field parameter set. In
addition, it forces yt to recognize that this field might depend on any one of
``velocity_x``, ``velocity_y``, or ``velocity_z``. By specifying that ``axis``
might be 0, 1, or 2 in the ``ValidateParameter`` call, this ensures that this
field will only be valid and available for datasets that have all three fields
available.
Other examples for creating derived fields can be found in the cookbook recipe
:ref:`cookbook-simple-derived-fields`.
.. _derived-field-options:
Field Options
-------------
The arguments to :func:`add_field` are passed on to the constructor of :class:`DerivedField`.
There are a number of options available, but the only mandatory ones are ``name``,
``units``, and ``function``.
``name``
This is the name of the field -- how you refer to it. For instance,
``pressure`` or ``magnetic_field_strength``.
``function``
This is a function handle that defines the field.
``units``
This is a string that describes the units, or a query to a UnitSystem
object, e.g. ``ds.unit_system["energy"]``. Powers must be in Python syntax (``**``
instead of ``^``). Alternatively, it may be set to ``"auto"`` to have the units
determined automatically. In this case, the ``dimensions`` keyword must be set to the
correct dimensions of the field.
``display_name``
This is a name used in the plots, for instance ``"Divergence of
Velocity"``. If not supplied, the ``name`` value is used.
``take_log``
This is *True* or *False* and describes whether the field should be logged
when plotted.
``particle_type``
Is this field a *particle* field?
``validators``
(*Advanced*) This is a list of :class:`FieldValidator` objects, for instance to mandate
spatial data.
``display_field``
(*Advanced*) Should this field appear in the dropdown box in Reason?
``not_in_all``
(*Advanced*) If this is *True*, the field may not be in all the grids.
``output_units``
(*Advanced*) For fields that exist on disk, which we may want to convert to other
fields or that get aliased to themselves, we can specify a different
desired output unit than the unit found on disk.
``force_override``
(*Advanced*) Overrides the definition of an old field if a field with the
same name has already been defined.
``dimensions``
Set this if ``units="auto"``. Can be either a string or a dimension object from
``yt.units.dimensions``.
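As a brief, hypothetical illustration of several of these options used together
(the field itself simply mirrors yt's built-in sound speed and exists only to
show the keywords):

.. code-block:: python

    import yt


    def _sound_speed_squared(field, data):
        # gamma * P / rho; used here only to demonstrate the keyword arguments
        return data.ds.gamma * data["gas", "pressure"] / data["gas", "density"]


    yt.add_field(
        name=("gas", "sound_speed_squared"),
        function=_sound_speed_squared,
        sampling_type="local",
        units="cm**2/s**2",
        display_name="Sound Speed Squared",
        take_log=True,
        force_override=True,  # replace any earlier field with the same name
    )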
Debugging a Derived Field
-------------------------
If your derived field is not behaving as you would like, you can insert a call
to ``data._debug()`` to spawn an interactive interpreter whenever that line is
reached. Note that this is slightly different from calling
``pdb.set_trace()``, as it will *only* trigger when the derived field is being
called on an actual data object, rather than during the field detection phase.
The starting position will be one function lower in the stack than you are
likely interested in, but you can either step through back to the derived field
function, or simply type ``u`` to go up a level in the stack.
For instance, if you had defined this derived field:
.. code-block:: python
@yt.derived_field(name=("gas", "funthings"))
def funthings(field, data):
return data["sillythings"] + data["humorousthings"] ** 2.0
And you wanted to debug it, you could do:
.. code-block:: python
@yt.derived_field(name=("gas", "funthings"))
def funthings(field, data):
data._debug()
return data["sillythings"] + data["humorousthings"] ** 2.0
And now, when that derived field is actually used, you will be placed into a
debugger.
Introduction to yt
==================
Herein, we present a brief introduction to yt's capabilities and
infrastructure with numerous links to relevant portions of the documentation
on each topic. It is our hope that readers will not only gain insight into
what can be done with yt, but also learn how to *think in yt* to
solve science questions, learn some of the yt jargon, and figure out
where to go in the docs for help.
.. contents::
:depth: 2
:local:
:backlinks: none
Fields
^^^^^^
yt is an analysis toolkit operating on multidimensional datasets for
:ref:`a variety of data formats <code-support>`. It represents quantities
varying over a multidimensional space as :ref:`fields <fields>` such as gas density,
gas temperature, etc. Many fields are defined when yt :ref:`loads the external
dataset <examining-data>` into "native fields" as defined by individual
frontends for each code format. However, yt additionally
creates many "derived fields" by manipulating and combining
native fields. yt comes with a large existing :ref:`set of derived fields
<field-list>`, but you can also :ref:`create your own
<creating-derived-fields>`.
Objects
^^^^^^^
Central to yt's infrastructure are :ref:`data objects <data-objects>`,
which act as a means of :ref:`filtering data <filtering-data>` based on
:ref:`spatial location <geometric-objects>` (e.g. lines, spheres, boxes,
cylinders), based on :ref:`field values <collection-objects>` (e.g. all gas >
10^6 K), or for :ref:`constructing new data products <construction-objects>`
(e.g. projections, slices, isosurfaces). Furthermore, yt can calculate
the :ref:`bulk quantities <derived-quantities>` associated with these data
objects (e.g. total mass, bulk velocity, angular momentum).
General Analysis
^^^^^^^^^^^^^^^^
The documentation section on :ref:`analyzing data <analyzing>` has a full
description of :ref:`fields <fields>`, :ref:`data objects <data-objects>`,
and :ref:`filters <filtering-data>`. It also includes an explanation of how
the :ref:`units system <units>` works to tag every individual field and
quantity with a physical unit (e.g. cm, AU, kpc, Mpc, etc.), and it describes
ways of analyzing multiple chronological data outputs from the same underlying
dataset known as :ref:`time series <time-series-analysis>`. Lastly, it includes
information on how to enable yt to operate :ref:`in parallel over multiple
processors simultaneously <parallel-computation>`.
Datasets can be analyzed by simply :ref:`examining raw source data
<low-level-data-inspection>`, or they can be processed in a number of ways
to extract relevant information and to explore the data, including
:ref:`visualizing data <visualizing>`.
Visualization
^^^^^^^^^^^^^
yt provides many tools for :ref:`visualizing data <visualizing>`, and herein
we highlight a few of them. yt can create :ref:`slice plots <slice-plots>`,
wherein a three-dimensional volume (or any of the :ref:`data objects
<data-objects>`) is *sliced* by a plane to return the two-dimensional field
data intersected by that plane. Similarly, yt can generate
:ref:`line queries (i.e. rays) <generating-line-queries>` of a single
line intersecting a three-dimensional dataset. :ref:`Projection plots
<projection-plots>` are generated by projecting a three-dimensional volume
into two dimensions either :ref:`by summing or integrating <projection-types>`
the field along each pixel's line of sight with or without a weighting field.
Slices, projections, and rays can be made to align with the primary axes of
the simulation (e.g. x,y,z) or at any arbitrary angle throughout the volume.
For these operations, a number of :ref:`"callbacks" <callbacks>` exist that
will annotate your figures with field contours, velocity vectors, particle and
halo positions, streamlines, simple shapes, and text.
yt can examine correlations between two or three fields simultaneously with
:ref:`profile plots <how-to-make-1d-profiles>` and :ref:`phase plots
<how-to-make-2d-profiles>`. By querying field data for two separate fields
at each position in your dataset or :ref:`data object <data-objects>`, yt
can show the relationship between those two fields in a :ref:`profile plot
<how-to-make-1d-profiles>` (e.g. average gas density as a function of radius).
Similarly, a :ref:`phase plot <how-to-make-2d-profiles>` correlates two fields
as described above, but it weights those fields by a third field. Phase plots
commonly use mass as the weighting field and are oftentimes used to relate
gas density and temperature.
More advanced visualization functionality in yt includes generating
:ref:`streamlines <streamlines>` to track the velocity flow in your datasets,
creating photorealistic isocontour images of your data called :ref:`volume
renderings <volume_rendering>`, and :ref:`visualizing isosurfaces in an external
interactive tool <surfaces>`. yt even has a special web-based tool for
exploring your data with a :ref:`google-maps-like interface <mapserver>`.
Executing and Scripting yt
^^^^^^^^^^^^^^^^^^^^^^^^^^
yt is written almost entirely in Python and functions as a library
that you can import into your Python scripts. There is full docstring
documentation for all of the major classes and functions in the :ref:`API docs
<api-reference>`. yt has support for running in IPython and for running
IPython notebooks for fully interactive sessions both
locally and on remote supercomputers. yt also has a number of ways it can
be :ref:`executed at the command line <command-line>` for simple tasks like
automatically loading a dataset, updating the yt sourcecode, starting an
IPython notebook, or uploading scripts and images to public locations. There
is an optional :ref:`yt configuration file <configuration-file>` you can
modify for controlling local settings like color, logging, and output settings.
There is also an optional :ref:`yt plugin file <plugin-file>` you can create
to automatically load certain datasets, custom derived fields, derived
quantities, and more.
Cookbook and Quickstart
^^^^^^^^^^^^^^^^^^^^^^^
yt contains a number of example recipes for demonstrating simple and complex
tasks in :ref:`the cookbook <cookbook>` including many of the topics discussed
above. The cookbook also contains :ref:`more lengthy notebooks
<example-notebooks>` to demonstrate more sophisticated machinery on a variety
of topics. If you're new to yt and you just want to see a broad demonstration
of some of the things yt can do, check out the
:ref:`yt quickstart <quickstart>`.
Developing in yt
^^^^^^^^^^^^^^^^
yt is an open source development project, with only scientist-developers
like you to support it, add code, add documentation, etc. As such, we welcome
members of the public to join :ref:`our community <who-is-yt>` by contributing
code, bug reports, documentation, and helping to :ref:`support the code in a
number of ways <getting-involved>`. Sooner or later, you'll want to
:ref:`add your own derived field <creating-derived-fields>`, :ref:`data object
<creating-objects>`, :ref:`code frontend <creating_frontend>` or :ref:`make
yt compatible with an external code <external-analysis-tools>`. We have
detailed instructions on how to :ref:`contribute code <contributing-code>`,
:ref:`documentation <documentation>`, and :ref:`tests <testing>`, and how
to :ref:`debug this code <debug-drive>`.
Getting Help
^^^^^^^^^^^^
We have all been there, where something is going wrong and we cannot
understand why. Check out our :ref:`frequently asked questions <faq>` and
the documentation section :ref:`asking-for-help` to get solutions for your
problems.
Getting Started
^^^^^^^^^^^^^^^
We have detailed :ref:`installation instructions <installing-yt>`
and support for a number of platforms including Unix, Linux, MacOS, and
Windows. If you are new to yt, check out the :ref:`yt Quickstart
<quickstart>` and the :ref:`cookbook <cookbook>` for a demonstration of yt's
capabilities. If you previously used yt version 2, check out our guide
on :ref:`how to make your scripts work in yt 3 <yt3differences>`. So what
are you waiting for? Good luck and welcome to the yt community.
.. _how-to-make-plots:
How to Make Plots
=================
.. note::
In this document, and the rest of the yt documentation, we use field tuples;
for instance, we specify density as ``("gas", "density")`` whereas in
previous versions of this document we typically just used ``"density"``.
While the latter will still work in many or most cases, and may suffice for
your purposes, we use field tuples here to explicitly avoid ambiguity.
In this section we explain how to use yt to create visualizations
of simulation data, derived fields, and the data produced by yt
analysis objects. For details about the data extraction and
algorithms used to produce the image and analysis data, please see the
yt `method paper
<https://ui.adsabs.harvard.edu/abs/2011ApJS..192....9T>`_. There are also
many example scripts in :ref:`cookbook`.
The :class:`~yt.visualization.plot_window.PlotWindow` interface is useful for
taking a quick look at simulation outputs. Simple mechanisms exist for making
plots of slices, projections, 1D spatial line plots, 1D profiles, and 2D
profiles (phase plots), all of which are described below.
.. _viewing-plots:
Viewing Plots
-------------
yt uses an environment-neutral plotting mechanism that detects the appropriate
matplotlib configuration for a given environment; however, it defaults to a basic
renderer. To utilize interactive plots in matplotlib-supported
environments (Qt, GTK, WX, etc.), simply call the ``toggle_interactivity()`` function. Below is an
example in a jupyter notebook environment, but the same command should work
in other environments as well:
.. code-block:: IPython
%matplotlib notebook
import yt
yt.toggle_interactivity()
.. _simple-inspection:
Slices & Projections
--------------------
If you need to take a quick look at a single simulation output, yt
provides the :class:`~yt.visualization.plot_window.PlotWindow` interface for
generating annotated 2D visualizations of simulation data. You can create a
:class:`~yt.visualization.plot_window.PlotWindow` plot by
supplying a dataset, a list of fields to plot, and a plot center to
create a :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`,
:class:`~yt.visualization.plot_window.OffAxisSlicePlot`,
:class:`~yt.visualization.plot_window.ProjectionPlot`, or
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`.
Plot objects use yt data objects to extract the maximum resolution
data available to render a 2D image of a field. Whenever a
two-dimensional image is created, the plotting object first obtains
the necessary data at the *highest resolution*. Every time an image
is requested of it -- for instance, when the width or field is changed
-- this high-resolution data is then pixelized and placed in a buffer
of fixed size. This is accomplished behind the scenes using
:class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer`.
The :class:`~yt.visualization.plot_window.PlotWindow` class exposes the
underlying matplotlib
`figure <https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure>`_
and `axes <https://matplotlib.org/stable/api/axes_api.html#matplotlib.axes.Axes>`_
objects, making it easy to customize your plots and
add new annotations. See :ref:`matplotlib-customization` for more information.
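For example, one way to reach those objects (a sketch, assuming a ``SlicePlot``
of ``("gas", "density")``) is through the plot's ``plots`` dictionary, which maps
field names to objects carrying ``axes`` and ``figure`` attributes:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"))
    slc.render()  # make sure the matplotlib figure has been built

    ax = slc.plots[("gas", "density")].axes  # matplotlib Axes
    fig = slc.plots[("gas", "density")].figure  # matplotlib Figure
    ax.set_title("Density slice")
    slc.save()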
.. _slice-plots:
Slice Plots
~~~~~~~~~~~
The quickest way to plot a slice of a field through your data is via
:class:`~yt.visualization.plot_window.SlicePlot`. These plots are generally
quicker than projections because they only need to read and process a slice
through the dataset.
The following script plots a slice through the density field along the z-axis
centered on the center of the simulation box in a simulation dataset we've
opened and stored in ``ds``:
.. code-block:: python
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.save()
These two commands will create a slice object and store it in a variable we've
called ``slc``. Since this plot is aligned with the simulation coordinate
system, ``slc`` is an instance of
:class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`. We then call the
``save()`` function, which automatically saves the plot in png image format with
an automatically generated filename. If you don't want the slice object to
stick around, you can accomplish the same thing in one line:
.. code-block:: python
yt.SlicePlot(ds, "z", ("gas", "density")).save()
It's nice to keep the slice object around if you want to modify the plot. By
default, the plot width will be set to the size of the simulation box. To zoom
in by a factor of ten, you can call the zoom function attached to the slice
object:
.. code-block:: python
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.zoom(10)
slc.save("zoom")
This will save a new plot to disk with a different filename - prepended with
'zoom' instead of the name of the dataset. If you want to set the width
manually, you can do that as well. For example, the following sequence of
commands will create a slice, set the width of the plot to 10 kiloparsecs, and
save it to disk, with the filename prefix being ``10kpc`` and the rest determined
by the field, visualization method, etc.
.. code-block:: python
from yt.units import kpc
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.set_width(10 * kpc)
slc.save("10kpc")
The plot width can be specified independently along the x and y direction by
passing a tuple of widths. An individual width can also be represented using a
``(value, unit)`` tuple. The following sequence of commands all equivalently
set the width of the plot to 200 kiloparsecs in the ``x`` and ``y`` direction.
.. code-block:: python
from yt.units import kpc
slc.set_width(200 * kpc)
slc.set_width((200, "kpc"))
slc.set_width((200 * kpc, 200 * kpc))
The ``SlicePlot`` also optionally accepts the coordinate to center the plot on
and the width of the plot:
.. code-block:: python
yt.SlicePlot(
ds, "z", ("gas", "density"), center=[0.2, 0.3, 0.8], width=(10, "kpc")
).save()
Note that, by default,
:class:`~yt.visualization.plot_window.SlicePlot` shifts the
coordinates on the axes such that the origin is at the center of the
slice. To instead use the coordinates as defined in the dataset, use
the optional argument: ``origin="native"``
If supplied without units, the center is assumed to be in code units. There are also
the following alternative options for the ``center`` keyword:
* ``"center"``, ``"c"``: the domain center
* ``"left"``, ``"l"``, ``"right"`` ``"r"``: the domain's left/right edge along the normal direction
(``SlicePlot``'s second argument). Remaining axes use their respective domain center values.
* ``"min"``: the position of the minimum density
* ``"max"``, ``"m"``: the position of the maximum density
* ``"min/max_<field name>"``: the position of the minimum/maximum in the first field matching field name
* ``("min", field)``: the position of the minimum of ``field``
* ``("max", field)``: the position of the maximum of ``field``
where for the last two objects any spatial field, such as ``"density"``,
``"velocity_z"``,
etc., may be used, e.g. ``center=("min", ("gas", "temperature"))``.
``"left"`` and ``"right"`` are not allowed for off-axis slices.
The effective resolution of the plot (i.e. the number of resolution elements
in the image itself) can be controlled with the ``buff_size`` argument:
.. code-block:: python
yt.SlicePlot(ds, "z", ("gas", "density"), buff_size=(1000, 1000))
Here is an example that combines all of the options we just discussed.
.. python-script::
import yt
from yt.units import kpc
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(
ds,
"z",
("gas", "density"),
center=[0.5, 0.5, 0.5],
width=(20, "kpc"),
buff_size=(1000, 1000),
)
slc.save()
The above example will display an annotated plot of a slice of the
Density field in a 20 kpc square window centered on the coordinate
(0.5, 0.5, 0.5) in the x-y plane. The axis to slice along is keyed to the
letter 'z', corresponding to the z-axis. Finally, the image is saved to
a png file.
Conceptually, you can think of the plot object as an adjustable window
into the data. For example:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "pressure"), center="c")
slc.save()
slc.zoom(30)
slc.save("zoom")
will save a plot of the pressure field in a slice along the z
axis across the entire simulation domain followed by another plot that
is zoomed in by a factor of 30 with respect to the original
image. Both plots will be centered on the center of the simulation box.
With these sorts of manipulations, one can easily pan and zoom onto an
interesting region in the simulation and adjust the boundaries of the
region to visualize on the fly.
If you want to slice through a subset of the full dataset volume,
you can use the ``data_source`` keyword with a :ref:`data object <data-objects>`
or a :ref:`cut region <cut-regions>`.
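For instance, a sketch restricting a slice to the material inside a sphere:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sp = ds.sphere("max", (10, "kpc"))  # data object used to filter the slice
    slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(20, "kpc"), data_source=sp)
    slc.save()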
See :class:`~yt.visualization.plot_window.AxisAlignedSlicePlot` for the
full class description.
.. _plot-2d:
Plots of 2D Datasets
~~~~~~~~~~~~~~~~~~~~
If you have a two-dimensional cartesian, cylindrical, or polar dataset,
:func:`~yt.visualization.plot_window.plot_2d` is a way to make a plot
within the dataset's plane without having to specify the axis, which
in this case is redundant. Otherwise, ``plot_2d`` accepts the same
arguments as ``SlicePlot``. The one other difference is that the
``center`` keyword argument can be a two-dimensional coordinate instead
of a three-dimensional one:
.. python-script::
import yt
ds = yt.load("WindTunnel/windtunnel_4lev_hdf5_plt_cnt_0030")
p = yt.plot_2d(ds, ("gas", "density"), center=[1.0, 0.4])
p.set_log(("gas", "density"), False)
p.save()
See :func:`~yt.visualization.plot_window.plot_2d` for the full description
of the function and its keywords.
.. _off-axis-slices:
Off Axis Slices
~~~~~~~~~~~~~~~
Off axis slice plots can be generated in much the same way as
grid-aligned slices. Off axis slices use
:class:`~yt.data_objects.selection_data_containers.YTCuttingPlane` to slice
through simulation domains at an arbitrary oblique angle. A
:class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be
instantiated by specifying a dataset, the normal to the cutting
plane, and the name of the fields to plot. Just like an
:class:`~yt.visualization.plot_window.AxisAlignedSlicePlot`, an
:class:`~yt.visualization.plot_window.OffAxisSlicePlot` can be created via the
:class:`~yt.visualization.plot_window.SlicePlot` class. For example:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1, 1, 0] # vector normal to cutting plane
north_vector = [-1, 1, 0]
cut = yt.SlicePlot(ds, L, ("gas", "density"), width=(25, "kpc"), north_vector=north_vector)
cut.save()
In this case, a normal vector for the cutting plane is supplied in the second
argument. Optionally, a ``north_vector`` can be specified to fix the orientation
of the image plane.
.. note:: Not all data types have support for off-axis slices yet.
Currently, this operation is supported for grid-based data with cartesian geometry.
In some cases (like SPH data) an off-axis projection over a thin region might be used instead.
.. _projection-plots:
Projection Plots
~~~~~~~~~~~~~~~~
Using a fast adaptive projection, yt is able to quickly project
simulation data along the coordinate axes.
Projection plots are created by instantiating a
:class:`~yt.visualization.plot_window.ProjectionPlot` object. For
example:
.. python-script::
import yt
from yt.units import kpc
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
prj = yt.ProjectionPlot(
ds,
"z",
("gas", "temperature"),
width=25 * kpc,
weight_field=("gas", "density"),
buff_size=(1000, 1000),
)
prj.save()
will create a density-weighted projection of the temperature field along
the z axis with 1000 resolution elements per side, plot it, and then save
the plot to a png image file.
Like :ref:`slice-plots`, annotations and modifications can be applied
after creating the ``ProjectionPlot`` object. Annotations are
described in :ref:`callbacks`. See
:class:`~yt.visualization.plot_window.ProjectionPlot` for the full
class description.
If you want to project through a subset of the full dataset volume,
you can use the ``data_source`` keyword with a :ref:`data object <data-objects>`.
The :ref:`thin-slice-projections` recipes demonstrates this functionality.
.. note:: Not all data types have support for off-axis projections yet.
Currently, this operation is supported for grid-based data with cartesian geometry,
as well as SPH particle data.
.. _projection-types:
Types of Projections
""""""""""""""""""""
There are several different methods of projections that can be made either
when creating a projection with :meth:`~yt.static_output.Dataset.proj` or
when making a :class:`~yt.visualization.plot_window.ProjectionPlot`.
In either construction method, set the ``method`` keyword to be one of the
following:
``integrate`` (unweighted):
+++++++++++++++++++++++++++
This is the default projection method. It simply integrates the
requested field :math:`f({\bf x})` along a line of sight :math:`\hat{\bf n}` ,
given by the axis parameter (e.g. :math:`\hat{\bf i},\hat{\bf j},` or
:math:`\hat{\bf k}`). The units of the projected field
:math:`g({\bf X})` will be the units of the unprojected field :math:`f({\bf x})`
multiplied by the appropriate length unit, e.g., density in
:math:`\mathrm{g\ cm^{-3}}` will be projected to :math:`\mathrm{g\ cm^{-2}}`.
.. math::
g({\bf X}) = {\int\ {f({\bf x})\hat{\bf n}\cdot{d{\bf x}}}}
``integrate`` (weighted):
+++++++++++++++++++++++++
When using the ``integrate`` method, a ``weight_field`` argument may also
be specified, which will produce a weighted projection. :math:`w({\bf x})`
is the field used as a weight. One common example would
be to weight the "temperature" field by the "density" field. In this case,
the units of the projected field are the same as the unprojected field.
.. math::
g({\bf X}) = \int\ {f({\bf x})\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}}
where the "~" over :math:`w({\bf x})` reflects the fact that it has been normalized
like so:
.. math::
\tilde{w}({\bf x}) = \frac{w({\bf x})}{\int\ {w({\bf x})\hat{\bf n}\cdot{d{\bf x}}}}
For weighted projections using the ``integrate`` method, it is also possible
to project the standard deviation of a field. In this case, the projected
field is mathematically given by:
.. math::
g({\bf X}) = \left[\int\ {f({\bf x})^2\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}} - \left(\int\ {f({\bf x})\tilde{w}({\bf x})\hat{\bf n}\cdot{d{\bf x}}}\right)^2\right]^{1/2}
In order to make a weighted projection of the standard deviation of a field
along a line of sight, the ``moment`` keyword argument should be set to 2.
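A minimal sketch, projecting the density-weighted standard deviation of the
temperature along the z axis:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    prj = yt.ProjectionPlot(
        ds,
        "z",
        ("gas", "temperature"),
        weight_field=("gas", "density"),
        moment=2,  # project the standard deviation instead of the mean
    )
    prj.save()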
``max``:
++++++++
This method picks out the maximum value of a field along the line of
sight given by the axis parameter.
``min``:
++++++++
This method picks out the minimum value of a field along the line of
sight given by the axis parameter.
``sum``:
++++++++
This method is the same as ``integrate``, except that it does not
multiply by a path length when performing the integration, and is just a
straight summation of the field along the given axis. The units of the
projected field will be the same as those of the unprojected field. This
method is typically only useful for datasets such as 3D FITS cubes where
the third axis of the dataset is something like velocity or frequency, and
should *only* be used with fixed-resolution grid-based datasets.
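A minimal sketch of the ``sum`` method; the file name and field tuple below are
placeholders for a fixed-resolution cube of your own, not actual sample data:

.. code-block:: python

    import yt

    # hypothetical fixed-resolution FITS cube and field name
    ds = yt.load("my_spectral_cube.fits")
    prj = yt.ProjectionPlot(ds, "z", ("fits", "intensity"), method="sum")
    prj.save()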
.. _off-axis-projections:
Off Axis Projection Plots
~~~~~~~~~~~~~~~~~~~~~~~~~
Internally, off axis projections are created using :ref:`camera`
by applying the
:class:`~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction`.
In this use case, the volume renderer casts a set of plane parallel rays, one
for each pixel in the image. The data values along each ray are summed,
creating the final image buffer.
.. _off-axis-projection-function:
To avoid manually creating a camera and setting the transfer
function, yt provides the
:func:`~yt.visualization.volume_rendering.off_axis_projection.off_axis_projection`
function, which wraps the camera interface to create an off axis
projection image buffer. These images can be saved to disk or
used in custom plots. This snippet creates an off axis
projection through a simulation.
.. python-script::
import numpy as np
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1, 1, 0] # vector normal to cutting plane
north_vector = [-1, 1, 0]
W = [0.02, 0.02, 0.02]
c = [0.5, 0.5, 0.5]
N = 512
image = yt.off_axis_projection(ds, c, L, W, N, ("gas", "density"))
yt.write_image(np.log10(image), "%s_offaxis_projection.png" % ds)
Here, ``W`` is the width of the projection in the x, y, *and* z
directions.
One can also generate annotated off axis projections using
:class:`~yt.visualization.plot_window.ProjectionPlot`. These
plots can be created in much the same way as an
``OffAxisSlicePlot``, requiring only an open dataset, a direction
to project along, and a field to project. For example:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1, 1, 0] # vector normal to cutting plane
north_vector = [-1, 1, 0]
prj = yt.ProjectionPlot(
ds, L, ("gas", "density"), width=(25, "kpc"), north_vector=north_vector
)
prj.save()
``OffAxisProjectionPlot`` objects can also be created with a number of
keyword arguments, as described in
:class:`~yt.visualization.plot_window.OffAxisProjectionPlot`.
Like on-axis projections, the projection of the standard deviation
of a weighted field can be created by setting ``moment=2`` in the call
to :class:`~yt.visualization.plot_window.ProjectionPlot`.
.. _slices-and-projections-in-spherical-geometry:
Slice Plots and Projection Plots in Spherical Geometry
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What should you expect when plotting data in spherical geometry? Here we explain
the notation and projection system yt uses to render 2D images of
spherical data.
The native spherical coordinates are
- the spherical radius :math:`r`
- the colatitude :math:`\theta`, defined between :math:`0` and :math:`\pi`
- the azimuth :math:`\varphi`, defined between :math:`0` and :math:`2\pi`
:math:`\varphi`-normal slices are represented in the poloidal plane, with axes :math:`R, z`, where
- :math:`R = r \sin \theta` is the cylindrical radius
- :math:`z = r \cos \theta` is the elevation
.. python-script::
import yt
ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
slc = yt.SlicePlot(ds, "phi", ("gas", "density"))
slc.save()
:math:`\theta`-normal slices are represented in an
:math:`x/\sin(\theta)` vs. :math:`y/\sin(\theta)` plane, where
- :math:`x = R \cos \varphi`
- :math:`y = R \sin \varphi`
are the cartesian plane coordinates.
.. python-script::
import yt
ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
slc = yt.SlicePlot(ds, "theta", ("gas", "density"))
slc.save()
Finally, :math:`r`-normal slices are represented following an
`Aitoff-Hammer projection <http://paulbourke.net/geometry/transformationprojection/>`_.
We denote
- the latitude :math:`\bar\theta = \frac{\pi}{2} - \theta`
- the longitude :math:`\lambda = \varphi - \pi`
.. python-script::
import yt
ds = yt.load_sample("KeplerianDisk", unit_system="cgs")
slc = yt.SlicePlot(ds, "r", ("gas", "density"))
slc.save()
.. _unstructured-mesh-slices:
Unstructured Mesh Slices
------------------------
Unstructured Mesh datasets can be sliced using the same syntax as above.
Here is an example script using a publicly available MOOSE dataset:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
sl = yt.SlicePlot(ds, "x", ("connect1", "diffused"))
sl.zoom(0.75)
sl.save()
Here, we plot the ``'diffused'`` variable, using a slice normal to the ``'x'`` direction,
through the mesh labelled by ``'connect1'``. By default, the slice goes through the
center of the domain. We have also zoomed out a bit to get a better view of the
resulting structure. To instead plot the ``'convected'`` variable, using a slice normal
to the ``'z'`` direction through the mesh labelled by ``'connect2'``, we do:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
sl = yt.SlicePlot(ds, "z", ("connect2", "convected"))
sl.zoom(0.75)
sl.save()
These slices are made by sampling the finite element solution at the points corresponding
to each pixel of the image. The ``'convected'`` and ``'diffused'`` variables are node-centered,
so this interpolation is performed by converting the sample point to the reference coordinate
system of the element and evaluating the appropriate shape functions. You can also
plot element-centered fields:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
sl = yt.SlicePlot(ds, "y", ("connect1", "conv_indicator"))
sl.zoom(0.75)
sl.save()
We can also annotate the mesh lines, as follows:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
sl = yt.SlicePlot(ds, "z", ("connect1", "diffused"))
sl.annotate_mesh_lines(color="black")
sl.zoom(0.75)
sl.save()
Keyword arguments passed to ``annotate_mesh_lines`` (such as ``color`` in the example
above) are forwarded to matplotlib and can be used to control the mesh line color, thickness, etc.
The above examples all involve 8-node hexahedral mesh elements. Here is another example from
a dataset that uses 6-node wedge elements:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/wedge_out.e")
sl = yt.SlicePlot(ds, "z", ("connect2", "diffused"))
sl.save()
Slices can also be used to examine 2D unstructured mesh datasets, but the
slices must be taken to be normal to the ``'z'`` axis, or you'll get an error. Here is
an example using another MOOSE dataset that uses triangular mesh elements:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e")
sl = yt.SlicePlot(ds, "z", ("connect1", "nodal_aux"))
sl.save()
You may run into situations where you have a variable you want to visualize that
exists on multiple mesh blocks. To view the variable on ``all`` mesh blocks,
simply pass ``all`` as the first argument of the field tuple:
.. python-script::
import yt
ds = yt.load("MultiRegion/two_region_example_out.e", step=-1)
sl = yt.SlicePlot(ds, "z", ("all", "diffused"))
sl.save()
.. _particle-plotting-workarounds:
Additional Notes for Plotting Particle Data
-------------------------------------------
An important caveat when visualizing particle data is that off-axis slice plotting is
not available for any particle data. However, axis-aligned slice plots (as described in
:ref:`slice-plots`) will work.
Since version 4.2.0, off-axis projections are supported for non-SPH particle data.
Prior to that, this operation was only supported for SPH particles. Two historical
workaround methods were available for plotting non-SPH particles with off-axis
projections.
1. :ref:`smooth-non-sph` - this method involves extracting particle data to be
reloaded with :class:`~yt.loaders.load_particles` and using the
:class:`~yt.frontends.stream.data_structures.StreamParticlesDataset.add_sph_fields`
function to create smoothing lengths. This works well for relatively small datasets,
but is not parallelized and may take too long for larger data.
2. Plot from a saved
:class:`~yt.data_objects.construction_data_containers.YTCoveringGrid`,
:class:`~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid`,
or :class:`~yt.data_objects.construction_data_containers.YTArbitraryGrid`
dataset.
This second method is illustrated below. First, construct one of the grid data
objects listed above. Then, use the
:class:`~yt.data_objects.data_containers.YTDataContainer.save_as_dataset`
function (see :ref:`saving_data`) to save a deposited particle field
(see :ref:`deposited-particle-fields`) as a reloadable dataset. This dataset
can then be loaded and visualized using both off-axis projections and slices.
Note the change in the field name from ``("deposit", "nbody_mass")`` to
``("grid", "nbody_mass")`` after reloading.
.. python-script::
import yt
ds = yt.load("gizmo_cosmology_plus/snap_N128L16_132.hdf5")
# create a 128^3 covering grid over the entire domain
L = 7
cg = ds.covering_grid(level=L, left_edge=ds.domain_left_edge, dims=[2**L]*3)
fn = cg.save_as_dataset(fields=[("deposit", "nbody_mass")])
ds_grid = yt.load(fn)
p = yt.ProjectionPlot(ds_grid, [1, 1, 1], ("grid", "nbody_mass"))
p.save()
Plot Customization: Recentering, Resizing, Colormaps, and More
--------------------------------------------------------------
You can customize each of the four plot types above in identical ways. We'll go
over each of the customization methods below. For each of the examples below we
will modify the following plot.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.save()
Panning and zooming
~~~~~~~~~~~~~~~~~~~
There are three methods to dynamically pan around the data.
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan` accepts x and y
deltas.
.. python-script::
import yt
from yt.units import kpc
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.pan((2 * kpc, 2 * kpc))
slc.save()
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan_rel` accepts deltas
in units relative to the field of view of the plot.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.pan_rel((0.1, -0.1))
slc.save()
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.zoom` accepts a factor to zoom in by.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.zoom(2)
slc.save()
Set axes units
~~~~~~~~~~~~~~
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_axes_unit` allows the customization of
the axes unit labels.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_axes_unit("Mpc")
slc.save()
The same result could have been accomplished by explicitly setting the ``width``
to ``(.01, 'Mpc')``.
.. _set-image-units:
Set image units
~~~~~~~~~~~~~~~
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_unit` allows
the customization of the units used for the image and colorbar.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_unit(("gas", "density"), "Msun/pc**3")
slc.save()
If the unit you would like to convert to needs an equivalency, this can be
specified via the ``equivalency`` keyword argument of ``set_unit``. For
example, let's make a plot of the temperature field, but present it using
an energy unit instead of a temperature unit:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "temperature"), width=(10, "kpc"))
slc.set_unit(("gas", "temperature"), "keV", equivalency="thermal")
slc.save()
Set the plot center
~~~~~~~~~~~~~~~~~~~
The :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_center`
function accepts a new center for the plot, in code units. New centers must be
two element tuples.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_center((0.5, 0.503))
slc.save()
Adjusting the plot view axes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are a number of ways in which the initial orientation of a :class:`~yt.visualization.plot_window.PlotWindow`
object can be adjusted.
The first two axis orientation modifications,
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_horizontal`
and :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_vertical`, are
equivalent to the ``invert_xaxis`` and ``invert_yaxis`` of matplotlib ``Axes``
objects. ``flip_horizontal`` will invert the plot's x-axis while the :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.flip_vertical` method
will invert the plot's y-axis.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# slicing with standard view (right-handed)
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.annotate_title("Standard Horizontal (Right Handed)")
slc.save("Standard.png")
# flip the horizontal axis (not right handed)
slc.flip_horizontal()
slc.annotate_title("Horizontal Flipped (Not Right Handed)")
slc.save("NotRightHanded.png")
# flip the vertical axis
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.flip_vertical()
slc.annotate_title("Flipped vertical")
slc.save("FlippedVertical.png")
In addition to inverting the direction of each axis,
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.swap_axes` will exchange
the plot's vertical and horizontal axes:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# slicing with non right-handed coordinates
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.swap_axes()
slc.annotate_title("Swapped axes")
slc.save("SwappedAxes.png")
# toggle swap_axes (return to standard view)
slc.swap_axes()
slc.annotate_title("Standard Axes")
slc.save("StandardAxes.png")
When using the ``flip_horizontal`` and ``flip_vertical`` with ``swap_axes``, it
is important to remember that any ``flip_horizontal`` and ``flip_vertical``
operations are applied to the image axes (not underlying dataset coordinates)
after any ``swap_axes`` calls, regardless of the order in which the callbacks
are added. Also note that when using ``swap_axes``, any plot modifications
relating to limits, image width or resolution should still be supplied in reference
to the standard (unswapped) orientation rather than the swapped view.
Finally, it's worth mentioning that these three methods can be used in combination
to rotate the view:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# initial view
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.save("InitialOrientation.png")
slc.annotate_title("Initial View")
# swap + vertical flip = rotate 90 degree rotation (clockwise)
slc.swap_axes()
slc.flip_vertical()
slc.annotate_title("90 Degree Clockwise Rotation")
slc.save("SwappedAxes90CW.png")
# vertical flip + horizontal flip = rotate 180 degree rotation
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.flip_horizontal()
slc.flip_vertical()
slc.annotate_title("180 Degree Rotation")
slc.save("FlipAxes180.png")
# swap + horizontal flip = rotate 90 degree rotation (counter clockwise)
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(20, 'kpc'))
slc.swap_axes()
slc.flip_horizontal()
slc.annotate_title("90 Degree Counter Clockwise Rotation")
slc.save("SwappedAxes90CCW.png")
.. _hiding-colorbar-and-axes:
Hiding the Colorbar and Axis Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :class:`~yt.visualization.plot_window.PlotWindow` class has functions
attached for hiding/showing the colorbar and axes. This allows for making
minimal plots that focus on the data:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.hide_colorbar()
slc.hide_axes()
slc.save()
See the cookbook recipe :ref:`show-hide-axes-colorbar` and the full function
description :class:`~yt.visualization.plot_window.PlotWindow` for more
information.
Fonts
~~~~~
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_font` allows font
customization.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_font({"family": "sans-serif", "style": "italic", "weight": "bold", "size": 24})
slc.save()
Colormaps
~~~~~~~~~
Each of the colormap-related functions described in this section accepts at least two arguments. In all cases the first argument
is a field name. This makes it possible to use different custom colormaps for
different fields tracked by the plot object.
To change the colormap for the plot, call the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_cmap` function.
Use any of the colormaps listed in the :ref:`colormaps` section.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_cmap(("gas", "density"), "RdBu_r")
slc.save()
Colorbar Normalization / Scaling
""""""""""""""""""""""""""""""""
For a general introduction to the topic of colorbar scaling, see
`<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html>`_. Here we
will focus on the defaults, and the ways to customize them, of yt plot classes.
In this section, "norm" is used as short for "normalization", and is
interchangeable with "scaling".
Map-like plots e.g., ``SlicePlot``, ``ProjectionPlot`` and ``PhasePlot``,
default to `logarithmic (log)
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#logarithmic>`_
normalization when all values are strictly positive, and `symmetric log (symlog)
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html#symmetric-logarithmic>`_
otherwise. yt supports two different interfaces to move away from the defaults.
See **constrained norms** and **arbitrary norms** below.
.. note:: defaults can be configured on a per-field basis, see :ref:`per-field-plotconfig`
**Constrained norms**
The standard way to change colorbar scalings between linear, log, and symmetric
log (symlog). Colorbar properties can be constrained via two methods:
- :meth:`~yt.visualization.plot_container.PlotContainer.set_zlim` controls the limits
of the colorbar range: ``zmin`` and ``zmax``.
- :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_log` allows switching to
linear or symlog normalization. With symlog, the linear threshold can be set
explicitly. Otherwise, yt will dynamically determine a reasonable value.
Use the :meth:`~yt.visualization.plot_container.PlotContainer.set_zlim`
method to set a custom colormap range.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_zlim(("gas", "density"), zmin=(1e-30, "g/cm**3"), zmax=(1e-25, "g/cm**3"))
slc.save()
Units can be left out, in which case they implicitly match the current display
units of the colorbar (controlled with the ``set_unit`` method, see
:ref:`set-image-units`).
It is not required to specify both ``zmin`` and ``zmax``. Left unset, they will
default to the extreme values in the current view. This default behavior can be
enforced or restored by passing ``zmin="min"`` (resp. ``zmax="max"``)
explicitly.
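For instance, a sketch pinning only the upper bound of the colorbar while letting
the lower bound follow the data:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
    slc.set_zlim(("gas", "density"), zmin="min", zmax=(1e-25, "g/cm**3"))
    slc.save()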
:meth:`~yt.visualization.plot_container.ImagePlotContainer.set_log` takes a boolean argument
to select log (``True``) or linear (``False``) scalings.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_log(("gas", "density"), False) # switch to linear scaling
slc.save()
One can switch to `symlog
<https://matplotlib.org/stable/api/_as_gen/matplotlib.colors.SymLogNorm.html?highlight=symlog#matplotlib.colors.SymLogNorm>`_
by providing a "linear threshold" (``linthresh``) value.
With ``linthresh="auto"`` yt will switch to symlog norm and guess an appropriate value
automatically, with different behavior depending on the dynamic range of the data.
When the dynamic range of the symlog scale is less than 15 orders of magnitude, the
linthresh value will be the minimum absolute nonzero value, as in
.. python-script::
import yt
ds = yt.load_sample("IsolatedGalaxy")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_log(("gas", "density"), linthresh="auto")
slc.save()
When the dynamic range of the symlog scale exceeds 15 orders of magnitude, the
linthresh value is calculated as 1/10\ :sup:`15` of the maximum nonzero value in
order to avoid possible floating point precision issues. The following plot
triggers the dynamic range cutoff
.. python-script::
import yt
ds = yt.load_sample("FIRE_M12i_ref11")
p = yt.ProjectionPlot(ds, "x", ("gas", "density"), width=(30, "Mpc"))
p.set_log(("gas", "density"), linthresh="auto")
p.save()
In the previous example, it is actually safe to expand the dynamic range, and in
other cases you may find that the selected linear threshold is not well suited to
your dataset. To pass an explicit value instead:
.. python-script::
import yt
ds = yt.load_sample("FIRE_M12i_ref11")
p = yt.ProjectionPlot(ds, "x", ("gas", "density"), width=(30, "Mpc"))
p.set_log(("gas", "density"), linthresh=(1e-22, "g/cm**2"))
p.save()
As with the ``zmin`` and ``zmax`` arguments of the ``set_zlim`` method, units
can be left out of ``linthresh``.
**Arbitrary norms**
Alternatively, arbitrary `matplotlib norms
<https://matplotlib.org/stable/tutorials/colors/colormapnorms.html>`_ can be
passed via the :meth:`~yt.visualization.plot_container.PlotContainer.set_norm`
method. In that case, any numeric value is treated as having implicit units,
matching the current display units. This alternative interface is more flexible,
but considered experimental as of yt 4.1. Don't forget that with great power
comes great responsibility.
.. python-script::
import yt
from matplotlib.colors import TwoSlopeNorm
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "velocity_x"), width=(30, "kpc"))
slc.set_norm(("gas", "velocity_x"), TwoSlopeNorm(vcenter=0))
# using a diverging colormap to emphasize that vcenter corresponds to the
# middle value in the color range
slc.set_cmap(("gas", "velocity_x"), "RdBu")
slc.save()
.. note:: When calling
:meth:`~yt.visualization.plot_container.PlotContainer.set_norm`, any constraints
previously set with
:meth:`~yt.visualization.plot_container.PlotContainer.set_log` or
:meth:`~yt.visualization.plot_container.PlotContainer.set_zlim` will be dropped.
Conversely, calling ``set_log`` or ``set_zlim`` will have the
effect of dropping any norm previously set via ``set_norm``.
The :meth:`~yt.visualization.plot_container.ImagePlotContainer.set_background_color`
function accepts a field name and an optional color. If a color is given, the plot's
background is set to that color; if not, it is set to the bottom value of the
colormap.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(1.5, "Mpc"))
slc.set_background_color(("gas", "density"))
slc.save("bottom_colormap_background")
slc.set_background_color(("gas", "density"), color="black")
slc.save("black_background")
Annotations
~~~~~~~~~~~
A slice plot can also be annotated with a title, an overlaid
quiver plot, the location of grid boundaries, halo-finder annotations,
and many other built-in or user-customizable annotations.
For example:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.annotate_grids()
slc.save()
will plot the density field in a 10 kiloparsec slice through the
z-axis centered on the highest density point in the simulation domain.
Before saving the plot, the script annotates it with the grid
boundaries, which are drawn as lines in the plot, with colors going
from black to white depending on the AMR level of the grid.
Annotations are described in :ref:`callbacks`.
Set the size and resolution of the plot
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To set the size of the plot, use the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_figure_size` function. The argument
is the size of the longest edge of the plot in inches. View the full resolution
image to see the difference more clearly.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_figure_size(10)
slc.save()
To change the resolution of the image, call the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_buff_size` function.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_buff_size(1600)
slc.save()
Also see cookbook recipe :ref:`image-resolution-primer` for more information
about the parameters that determine the resolution of your images.
Turning off minorticks
~~~~~~~~~~~~~~~~~~~~~~
By default minorticks for the x and y axes are turned on.
The minorticks may be removed using the
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_minorticks`
function, which accepts a field name (including the ``"all"`` alias) and the
desired state for the plot as ``True`` or ``False``. There is also an analogous
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.set_colorbar_minorticks`
function for the colorbar axis.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = yt.SlicePlot(ds, "z", ("gas", "density"), width=(10, "kpc"))
slc.set_minorticks("all", False)
slc.set_colorbar_minorticks("all", False)
slc.save()
.. _matplotlib-customization:
Further customization via matplotlib
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Each :class:`~yt.visualization.plot_window.PlotWindow` object is really a
container for plots - one plot for each field specified in the list of fields
supplied when the plot object is created. The individual plots can be
accessed via the ``plots`` dictionary attached to each
:class:`~yt.visualization.plot_window.PlotWindow` object:
.. code-block:: python
slc = SlicePlot(ds, 2, [("gas", "density"), ("gas", "temperature")])
dens_plot = slc.plots["gas", "density"]
In this example ``dens_plot`` is an instance of
:class:`~yt.visualization.plot_window.WindowPlotMPL`, an object that wraps the
matplotlib
`figure <https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure>`_
and `axes <https://matplotlib.org/stable/api/axes_api.html#matplotlib.axes.Axes>`_
objects. We can access these matplotlib primitives via attributes of
``dens_plot``.
.. code-block:: python
figure = dens_plot.figure
axes = dens_plot.axes
colorbar_axes = dens_plot.cax
These are the
`figure <https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.html#matplotlib.figure.Figure>`_
and `axes <https://matplotlib.org/stable/api/axes_api.html#matplotlib.axes.Axes>`_
objects that control the actual drawing of the plot. Arbitrary plot
customizations are possible by manipulating these objects. See
:ref:`matplotlib-primitives` for an example.
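As a brief illustrative sketch, here are a few plain matplotlib calls on the
objects retrieved above (the title text and font size are arbitrary choices,
not yt defaults):

.. code-block:: python

    # customize the underlying matplotlib objects directly
    axes.set_title("Density slice")
    axes.tick_params(which="both", labelsize=14)
    figure.savefig("customized_density_slice.png")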
.. _how-to-make-1d-profiles:
1D Profile Plots
----------------
1D profiles are used to calculate the average or the sum of a given quantity
with respect to a second quantity. Two common examples are the "average density
as a function of radius" or "the total mass within a given set of density bins."
When created, they default to the average: in fact, they default to the average
as weighted by the total cell mass. However, this can be modified to take
either the total value or the average with respect to a different quantity.
Profiles operate on :ref:`data objects <data-objects>`; they will take all of the
data contained in a sphere, a prism, an extracted region, and so on, and use it as
input to their calculation. To make a 1D
profile plot, create a (:class:`~yt.visualization.profile_plotter.ProfilePlot`)
object, supplying the data object, the field for binning, and a list of fields
to be profiled.
.. python-script::
import yt
from yt.units import kpc
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_galaxy = ds.disk(ds.domain_center, [0.0, 0.0, 1.0], 10 * kpc, 3 * kpc)
plot = yt.ProfilePlot(my_galaxy, ("gas", "density"), [("gas", "temperature")])
plot.save()
This will create a :class:`~yt.data_objects.selection_data_containers.YTDisk`
centered at [0.5, 0.5, 0.5], with a normal vector of [0.0, 0.0, 1.0], radius of
10 kiloparsecs and height of 3 kiloparsecs and will then make a plot of the
mass-weighted average temperature as a function of density for all of the gas
contained in the cylinder.
We could also have made a profile considering only the gas in a sphere.
For instance:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_sphere = ds.sphere([0.5, 0.5, 0.5], (100, "kpc"))
plot = yt.ProfilePlot(my_sphere, ("gas", "temperature"), [("gas", "mass")], weight_field=None)
plot.save()
Note that because we have specified the weighting field to be ``None``, the
profile plot will display the accumulated cell mass as a function of temperature
rather than the average. Also note the use of a ``(value, unit)`` tuple. These
can be used interchangeably with units explicitly imported from ``yt.units`` when
creating yt plots.
We can also accumulate along the bin field of a ``ProfilePlot`` (the bin field
is the x-axis in a ``ProfilePlot``; in the last example the bin field is
``("gas", "temperature")``) by setting the ``accumulation`` keyword argument to ``True``.
The following example uses ``weight_field = None`` and ``accumulation = True`` to
generate a plot of the enclosed mass in a sphere:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_sphere = ds.sphere([0.5, 0.5, 0.5], (100, "kpc"))
plot = yt.ProfilePlot(
my_sphere, "radius", [("gas", "mass")], weight_field=None, accumulation=True
)
plot.save()
Notably, above we have specified the field tuple for the mass, but not for the
``radius`` field. The ``radius`` field will not be ambiguous, but if you want
to ensure that it refers to the radius of the cells on which the "gas" field
type is defined, you can specify it using the field tuple ``("index",
"radius")``.
You can also access the data generated by profiles directly, which can be
useful for overplotting average quantities on top of phase plots, or for
exporting and plotting multiple profiles simultaneously from a time series.
The ``profiles`` attribute contains a list of all profiles that have been
made. For each item in the list, the x field data can be accessed with ``x``.
The profiled fields can be accessed from the dictionary ``field_data``.
.. code-block:: python
plot = ProfilePlot(
my_sphere, ("gas", "temperature"), [("gas", "mass")], weight_field=None
)
profile = plot.profiles[0]
# print the bin field, in this case temperature
print(profile.x)
# print the profiled mass field
print(profile["gas", "mass"])
Other options, such as the number of bins, are also configurable. See the
documentation for :class:`~yt.visualization.profile_plotter.ProfilePlot` for
more information.
Overplotting Multiple 1D Profiles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is often desirable to overplot multiple 1D profiles to show evolution
with time. This is supported with the ``from_profiles`` class method.
1D profiles are created with the :func:`~yt.data_objects.profiles.create_profile`
method and then given to the ProfilePlot object.
.. python-script::
import yt
# Create a time-series object.
es = yt.load_simulation("enzo_tiny_cosmology/32Mpc_32.enzo", "Enzo")
es.get_time_series(redshifts=[5, 4, 3, 2, 1, 0])
# Lists to hold profiles, labels, and plot specifications.
profiles = []
labels = []
# Loop over each dataset in the time-series.
for ds in es:
# Create a data container to hold the whole dataset.
ad = ds.all_data()
# Create a 1d profile of density vs. temperature.
profiles.append(
yt.create_profile(
ad,
[("gas", "temperature")],
fields=[("gas", "mass")],
weight_field=None,
accumulation=True,
)
)
# Add labels
labels.append("z = %.2f" % ds.current_redshift)
# Create the profile plot from the list of profiles.
plot = yt.ProfilePlot.from_profiles(profiles, labels=labels)
# Save the image.
plot.save()
Customizing axis limits
~~~~~~~~~~~~~~~~~~~~~~~
By default the x and y limits for ``ProfilePlot`` are determined using the
:class:`~yt.data_objects.derived_quantities.Extrema` derived quantity. If you
want to create a plot with custom axis limits, you have two options.
First, you can create a custom profile object using
:func:`~yt.data_objects.profiles.create_profile`.
This function accepts a dictionary of ``(min, max)`` tuples keyed to field names.
.. python-script::
import yt
import yt.units as u
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("m", 10 * u.kpc)
profiles = yt.create_profile(
sp,
("gas", "temperature"),
("gas", "density"),
weight_field=None,
extrema={("gas", "temperature"): (1e3, 1e7), ("gas", "density"): (1e-26, 1e-22)},
)
plot = yt.ProfilePlot.from_profiles(profiles)
plot.save()
You can also make use of the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlim` and
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylim` functions to
customize the axes limits of a plot that has already been created. Note that
calling ``set_xlim`` is much slower than calling ``set_ylim``. This is because
``set_xlim`` must recreate the profile object using the specified extrema.
Creating a profile directly via :func:`~yt.data_objects.profiles.create_profile`
might be significantly faster.
Note that since there is only one bin field, ``set_xlim``
does not accept a field name as the first argument.
.. python-script::
import yt
import yt.units as u
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("m", 10 * u.kpc)
plot = yt.ProfilePlot(sp, ("gas", "temperature"), ("gas", "density"), weight_field=None)
plot.set_xlim(1e3, 1e7)
plot.set_ylim(("gas", "density"), 1e-26, 1e-22)
plot.save()
Customizing Units
~~~~~~~~~~~~~~~~~
Units for both the x and y axis can be controlled via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_unit` method.
Adjusting the plot units does not require recreating the histogram, so adjusting
units will always be inexpensive, requiring only an in-place unit conversion.
In the following example we create a plot of the average density in solar
masses per cubic parsec as a function of radius in kiloparsecs.
.. python-script::
import yt
import yt.units as u
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("m", 10 * u.kpc)
plot = yt.ProfilePlot(sp, "radius", ("gas", "density"), weight_field=None)
plot.set_unit(("gas", "density"), "msun/pc**3")
plot.set_unit("radius", "kpc")
plot.save()
Linear and Logarithmic Scaling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The axis scaling can be manipulated via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_log` function. This
function accepts a field name and a boolean. If the boolean is ``True``, the
field is plotted in log scale. If ``False``, the field is plotted in linear
scale.
In the following example we create a plot of the average x velocity as a
function of radius. Since the x component of the velocity vector can be
negative, we set the scaling to be linear for this field.
.. python-script::
import yt
import yt.units as u
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sp = ds.sphere("m", 10 * u.kpc)
plot = yt.ProfilePlot(sp, "radius", ("gas", "velocity_x"), weight_field=None)
plot.set_log(("gas", "velocity_x"), False)
plot.save()
Setting axis labels
~~~~~~~~~~~~~~~~~~~
The axis labels can be manipulated via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylabel` and
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlabel` functions. The
:meth:`~yt.visualization.profile_plotter.ProfilePlot.set_ylabel` function accepts a field name
and a string with the desired label. The :meth:`~yt.visualization.profile_plotter.ProfilePlot.set_xlabel`
function just accepts the desired label and applies this to all of the plots.
In the following example we create a plot of the average x-velocity and density as a
function of radius. The xlabel is set to "Radius" for all plots, and the ylabel is set to
"velocity in x direction" for the x-velocity plot.
.. python-script::
import yt
ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
plot = yt.ProfilePlot(ad, "radius", [("gas", "temperature"), ("gas", "velocity_x")], weight_field=None)
plot.set_xlabel("Radius")
plot.set_ylabel(("gas", "velocity_x"), "velocity in x direction")
plot.save()
Adding plot title
~~~~~~~~~~~~~~~~~
The plot title can be set via the
:meth:`~yt.visualization.profile_plotter.ProfilePlot.annotate_title` function.
It accepts a string argument for the plot title and an optional ``field`` parameter, which specifies
the field(s) for which the title should be set. ``field`` can be a single field or a list of fields.
If ``field`` is not passed, the title is added to all of the plots.
In the following example we create a plot and set the plot title.
.. python-script::
import yt
ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
plot = yt.ProfilePlot(ad, ("gas", "density"), [("gas", "temperature")], weight_field=None)
plot.annotate_title("Temperature vs Density Plot")
plot.save()
Here is another example, where we create plots from profiles. By specifying the fields, we can add a plot title to a
specific plot.
.. python-script::
import yt
ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
sphere = ds.sphere("max", (1.0, "Mpc"))
profiles = []
profiles.append(yt.create_profile(sphere, ["radius"], fields=[("gas", "density")], n_bins=64))
profiles.append(
yt.create_profile(sphere, ["radius"], fields=["dark_matter_density"], n_bins=64)
)
plot = yt.ProfilePlot.from_profiles(profiles)
plot.annotate_title("Plot Title: Density", ("gas", "density"))
plot.annotate_title("Plot Title: Dark Matter Density", "dark_matter_density")
plot.save()
Here, ``plot.annotate_title("Plot Title: Density", ("gas", "density"))`` will only set the plot title for the ``("gas", "density")``
field, allowing us to have different plot titles for different fields.
Annotating plot with text
~~~~~~~~~~~~~~~~~~~~~~~~~
Plots can be annotated at a desired (x, y) coordinate using the :meth:`~yt.visualization.profile_plotter.ProfilePlot.annotate_text` function.
This function accepts the x-position, the y-position, a text string to
be annotated in the plot area, and an optional list of fields that restricts the annotation to the plots of those fields.
Furthermore, any keyword argument accepted by the matplotlib ``axes.text`` function can also be passed, which is useful for changing the fontsize, text alignment, text color, or other properties of the annotated text.
In the following example we create a plot and add a simple annotation.
.. python-script::
import yt
ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046")
ad = ds.all_data()
plot = yt.ProfilePlot(ad, ("gas", "density"), [("gas", "temperature")], weight_field=None)
plot.annotate_text(1e-30, 1e7, "Annotated Text")
plot.save()
To add annotations to a particular set of fields we need to pass in the list of fields as follows,
where ``"ftype1"`` and ``"ftype2"`` are the field types (and may be the same):
.. code-block:: python
plot.annotate_text(
1e-30, 1e7, "Annotation", [("ftype1", "field1"), ("ftype2", "field2")]
)
To change the properties of the annotated text, we pass the matplotlib ``axes.text`` arguments as follows:
.. code-block:: python
plot.annotate_text(
1e-30,
1e7,
"Annotation",
fontsize=20,
bbox=dict(facecolor="red", alpha=0.5),
horizontalalignment="center",
verticalalignment="center",
)
The above example will set the fontsize of the annotation to 20, add a red bounding box, and center align the text
horizontally and vertically. This is just one example of modifying the text properties; for further options please check
`matplotlib.axes.Axes.text <https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.text.html>`_.
Altering Line Properties
~~~~~~~~~~~~~~~~~~~~~~~~
Line properties for any and all of the profiles can be changed with the
:func:`~yt.visualization.profile_plotter.set_line_property` function.
The two arguments given are the line property and desired value.
.. code-block:: python
plot.set_line_property("linestyle", "--")
With no additional arguments, all of the lines plotted will be altered. To
change the property of a single line, also give the index of the profile.
.. code-block:: python
# change only the first line
plot.set_line_property("linestyle", "--", 0)
.. _how-to-1d-unstructured-mesh:
1D Line Sampling
----------------
yt has the ability to sample datasets along arbitrary lines
and plot the result. You must supply five arguments to the ``LinePlot``
class. They are enumerated below:
1. Dataset
2. A list of fields or a single field you wish to plot
3. The starting point of the sampling line. This should be an n-element list, tuple,
ndarray, or YTArray with the elements corresponding to the coordinates of the
starting point. (n should equal the dimension of the dataset)
4. The ending point of the sampling line. This should also be an n-element list, tuple,
ndarray, or YTArray with the elements corresponding to the coordinates of the
ending point.
5. The number of sampling points along the line, e.g. if 1000 is specified, then
data will be sampled at 1000 points evenly spaced between the starting and
ending points.
The below code snippet illustrates how this is done:
.. code-block:: python
ds = yt.load("SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1)
plot = yt.LinePlot(ds, [("all", "v"), ("all", "u")], (0, 0, 0), (0, 1, 0), 1000)
plot.save()
If working in a Jupyter Notebook, ``LinePlot`` also has the ``show()`` method.
You can add a legend to a 1D sampling plot. The legend process takes two steps:
1. When instantiating the ``LinePlot``, pass a dictionary of
labels with keys corresponding to the field names
2. Call the ``LinePlot`` ``annotate_legend`` method
X- and Y- axis units can be set with ``set_x_unit`` and ``set_unit`` methods
respectively. The below code snippet combines all the features we've discussed:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
plot = yt.LinePlot(ds, ("gas", "density"), [0, 0, 0], [1, 1, 1], 512)
plot.annotate_legend(("gas", "density"))
plot.set_x_unit("cm")
plot.set_unit(("gas", "density"), "kg/cm**3")
plot.save()
If a list of fields is passed to ``LinePlot``, yt will create a number of
individual figures equal to the number of different dimensional
quantities. E.g. if ``LinePlot`` receives two fields with units of "length/time"
and a field with units of "temperature", two different figures will be created,
one with plots of the "length/time" fields and another with the plot of the
"temperature" field. It is only necessary to call ``annotate_legend``
for one field of a multi-field plot to produce a legend containing all the
labels passed in the initial construction of the ``LinePlot`` instance. Example:
.. python-script::
import yt
ds = yt.load("SecondOrderTris/RZ_p_no_parts_do_nothing_bcs_cone_out.e", step=-1)
plot = yt.LinePlot(
ds,
[("all", "v"), ("all", "u")],
[0, 0, 0],
[0, 1, 0],
100,
field_labels={("all", "u"): r"v$_x$", ("all", "v"): r"v$_y$"},
)
plot.annotate_legend(("all", "u"))
plot.save()
``LinePlot`` is a bit different from yt ray objects which are data
containers. ``LinePlot`` is a plotting class that may use yt ray objects to
supply field plotting information. However, perhaps the most important
difference to highlight between rays and ``LinePlot`` is that rays return data
elements that intersect with the ray and make no guarantee about the spacing
between data elements. ``LinePlot`` sampling points are guaranteed to be evenly
spaced. In the case of cell data where multiple points fall within the same
cell, the ``LinePlot`` object will show the same field value for each sampling
point that falls within the same cell.
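The sketch below illustrates that difference by sampling the same segment with
a yt ray data container and with a ``LinePlot`` (the number of intersected
elements reported for the ray depends on the dataset):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # A ray is a data container: it returns one value per intersected element,
    # with no guarantee about the spacing between samples.
    ray = ds.ray([0, 0, 0], [1, 1, 1])
    print(ray["gas", "density"].size)

    # A LinePlot samples the same segment at a fixed number of evenly spaced points.
    plot = yt.LinePlot(ds, ("gas", "density"), [0, 0, 0], [1, 1, 1], 512)
    plot.save()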
.. _how-to-make-2d-profiles:
2D Phase Plots
--------------
2D phase plots function in much the same way as 1D profile plots, but with a
:class:`~yt.visualization.profile_plotter.PhasePlot` object. Much like 1D
profiles, 2D profiles (phase plots) are best thought of as plotting a
distribution of points, either taking the average or the accumulation in a bin.
The default behavior is to average, using the cell mass as the weighting,
but this behavior can be controlled through the ``weight_field`` parameter.
For example, to generate a 2D distribution of mass enclosed in density and
temperature bins, you can do:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_sphere = ds.sphere("c", (50, "kpc"))
plot = yt.PhasePlot(
my_sphere, ("gas", "density"), ("gas", "temperature"), [("gas", "mass")], weight_field=None
)
plot.save()
If you would rather see the average value of a field as a function of two other
fields, leave off the ``weight_field`` argument, and it will average by
the cell mass. This would look
something like:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_sphere = ds.sphere("c", (50, "kpc"))
plot = yt.PhasePlot(my_sphere, ("gas", "density"), ("gas", "temperature"), [("gas", "H_p0_fraction")])
plot.save()
Customizing Phase Plots
~~~~~~~~~~~~~~~~~~~~~~~
Similarly to 1D profile plots, :class:`~yt.visualization.profile_plotter.PhasePlot`
can be customized via ``set_unit``,
``set_xlim``, ``set_ylim``, and ``set_zlim``. The following example illustrates
how to manipulate these functions. :class:`~yt.visualization.profile_plotter.PhasePlot`
can also be customized in a similar manner as
:class:`~yt.visualization.plot_window.SlicePlot`, such as with ``hide_colorbar``
and ``show_colorbar``.
.. python-script::
import yt
ds = yt.load("sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art")
center = ds.arr([64.0, 64.0, 64.0], "code_length")
rvir = ds.quan(1e-1, "Mpccm/h")
sph = ds.sphere(center, rvir)
plot = yt.PhasePlot(sph, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None)
plot.set_unit(("gas", "density"), "Msun/pc**3")
plot.set_unit(("gas", "mass"), "Msun")
plot.set_xlim(1e-5, 1e1)
plot.set_ylim(1, 1e7)
plot.save()
It is also possible to construct a custom 2D profile object and then use the
:meth:`~yt.visualization.profile_plotter.PhasePlot.from_profile` function to
create a ``PhasePlot`` using the profile object.
This will sometimes be faster, especially if you need custom x and y axes
limits. The following example illustrates this workflow:
.. python-script::
import yt
ds = yt.load("sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art")
center = ds.arr([64.0, 64.0, 64.0], "code_length")
rvir = ds.quan(1e-1, "Mpccm/h")
sph = ds.sphere(center, rvir)
units = {("gas", "density"): "Msun/pc**3", ("gas", "mass"): "Msun"}
extrema = {("gas", "density"): (1e-5, 1e1), ("gas", "temperature"): (1, 1e7)}
profile = yt.create_profile(
sph,
[("gas", "density"), ("gas", "temperature")],
n_bins=[128, 128],
fields=[("gas", "mass")],
weight_field=None,
units=units,
extrema=extrema,
)
plot = yt.PhasePlot.from_profile(profile)
plot.save()
Probability Distribution Functions and Accumulation
---------------------------------------------------
Both 1D and 2D profiles which show the total amount of some field, such as
mass, in a bin (done by setting the ``weight_field`` keyword to ``None``) can be
turned into probability distribution functions (PDFs) by setting the
``fractional`` keyword to ``True``. When set to ``True``, the value in each bin
is divided by the sum total from all bins. These can be turned into cumulative
distribution functions (CDFs) by setting the ``accumulation`` keyword to
``True``. This will make it so that the value in any bin N is the cumulative
sum of all bins from 0 to N. The direction of the summation can be reversed by
setting ``accumulation`` to ``-True``. For ``PhasePlot``, the accumulation can
be set independently for each axis by setting ``accumulation`` to a list of
``True``/ ``-True`` /``False`` values.
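As a brief sketch of these keywords in action (the dataset and field choices
here are only illustrative):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ad = ds.all_data()
    # a cumulative distribution of gas mass as a function of density
    plot = yt.ProfilePlot(
        ad,
        ("gas", "density"),
        [("gas", "mass")],
        weight_field=None,
        fractional=True,  # normalize the bins so they sum to 1 (a PDF)
        accumulation=True,  # accumulate from the low-density end (a CDF)
    )
    plot.save()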
.. _particle-plots:
Particle Plots
--------------
Slice and projection plots both provide a callback for over-plotting particle
positions onto gas fields. However, sometimes you want to plot the particle
quantities by themselves, perhaps because the gas fields are not relevant to
your use case, or perhaps because your dataset doesn't contain any gas fields
in the first place. Additionally, you may want to plot your particles with a
third field, such as particle mass or age, mapped to a colorbar.
:class:`~yt.visualization.particle_plots.ParticlePlot` provides a convenient
way to do this in yt.
The easiest way to make a :class:`~yt.visualization.particle_plots.ParticlePlot`
is to use the convenience routine. This has the syntax:
.. code-block:: python
p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
p.save()
Here, ``ds`` is a dataset we've previously opened. The commands create a particle
plot that shows the x and y positions of all the particles in ``ds`` and save the
result to a file on the disk. The type of plot returned depends on the fields you
pass in; in this case, ``p`` will be an :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`,
because the fields are aligned to the coordinate system of the simulation.
The above example is equivalent to the following:
.. code-block:: python
p = yt.ParticleProjectionPlot(ds, "z")
p.save()
Most of the callbacks that work for slice and projection plots also work for
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot`.
For instance, we can zoom in:
.. code-block:: python
p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
p.zoom(10)
p.save("zoom")
change the width:
.. code-block:: python
p.set_width((500, "kpc"))
or change the axis units:
.. code-block:: python
p.set_unit(("all", "particle_position_x"), "Mpc")
Here is a full example that shows the simplest way to use
:class:`~yt.visualization.particle_plots.ParticlePlot`:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"))
p.save()
In the above examples, we are simply splatting particle x and y positions onto
a plot using some color. Colors can be applied to the plotted particles by
providing a ``z_field``, which will be summed along the line of sight in a manner
similar to a projection.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_position_y"), ("all", "particle_mass"))
p.set_unit(("all", "particle_mass"), "Msun")
p.zoom(32)
p.save()
Additionally, a ``weight_field`` can be given such that the value in each
pixel is the weighted average along the line of sight.
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(
ds,
("all", "particle_position_x"),
("all", "particle_position_y"),
("all", "particle_mass"),
weight_field=("all", "particle_ones"),
)
p.set_unit(("all", "particle_mass"), "Msun")
p.zoom(32)
p.save()
Note the difference in the above two plots. The first shows the
total mass along the line of sight. The density is higher in the
inner regions, and hence there are more particles and more mass along
the line of sight. The second plot shows the average mass per particle
along the line of sight. The inner region is dominated by low mass
star particles, whereas the outer region is comprised of higher mass
dark matter particles.
Both :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` and
:class:`~yt.visualization.particle_plots.ParticlePhasePlot` objects
accept a ``deposition`` argument which controls the order of the "splatting"
of the particles onto the pixels in the plot. The default option, ``"ngp"``,
corresponds to the "Nearest-Grid-Point" (0th-order) method, which simply
finds the pixel the particle is located in and deposits 100% of the particle
or its plotted quantity into that pixel. The other option, ``"cic"``,
corresponds to the "Cloud-In-Cell" (1st-order) method, which linearly
interpolates the particle or its plotted quantity into the four nearest
pixels in the plot.
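A brief sketch of switching the deposition method, mirroring the projection
example further below (the field and width choices are illustrative):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    p = yt.ParticleProjectionPlot(
        ds, 2, [("all", "particle_mass")], width=(0.5, 0.5), deposition="cic"
    )
    p.set_unit(("all", "particle_mass"), "Msun")
    p.save("cic_deposition")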
Here is a complete example that uses the ``particle_mass`` field
to set the colorbar and shows off some of the modification functions for
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot`:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(
ds,
("all", "particle_position_x"),
("all", "particle_position_y"),
("all", "particle_mass"),
width=(0.5, 0.5),
)
p.set_unit(("all", "particle_mass"), "Msun")
p.zoom(32)
p.annotate_title("Zoomed-in Particle Plot")
p.save()
If the fields passed in to :class:`~yt.visualization.particle_plots.ParticlePlot`
do not correspond to a valid :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`,
a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` will be returned instead.
:class:`~yt.visualization.particle_plots.ParticlePhasePlot` is used to plot arbitrary particle
fields against each other, and does not support some of the callbacks available in
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot` -
for instance, :meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.pan` and
:meth:`~yt.visualization.plot_window.AxisAlignedSlicePlot.zoom` don't make much sense when one of your axes is a position
and the other is a velocity. The modification functions defined for :class:`~yt.visualization.profile_plotter.PhasePlot`
should all work, however.
Here is an example of making a :class:`~yt.visualization.particle_plots.ParticlePhasePlot`
of ``particle_position_x`` versus ``particle_velocity_z``, with the ``particle_mass`` on the colorbar:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(ds, ("all", "particle_position_x"), ("all", "particle_velocity_z"), ("all", "particle_mass"))
p.set_unit(("all", "particle_position_x"), "Mpc")
p.set_unit(("all", "particle_velocity_z"), "km/s")
p.set_unit(("all", "particle_mass"), "Msun")
p.save()
and here is one with the particle x and y velocities on the plot axes:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticlePlot(ds, ("all", "particle_velocity_x"), ("all", "particle_velocity_y"), ("all", "particle_mass"))
p.set_unit(("all", "particle_velocity_x"), "km/s")
p.set_unit(("all", "particle_velocity_y"), "km/s")
p.set_unit(("all", "particle_mass"), "Msun")
p.set_ylim(-400, 400)
p.set_xlim(-400, 400)
p.save()
If you want more control over the details of the :class:`~yt.visualization.particle_plots.ParticleProjectionPlot` or
:class:`~yt.visualization.particle_plots.ParticlePhasePlot`, you can always use these classes directly. For instance,
here is an example of using the ``depth`` argument to :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`
to only plot the particles that live in a thin slice around the center of the
domain:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticleProjectionPlot(ds, 2, [("all", "particle_mass")], width=(0.5, 0.5), depth=0.01)
p.set_unit(("all", "particle_mass"), "Msun")
p.save()
Using :class:`~yt.visualization.particle_plots.ParticleProjectionPlot`, you can also plot particles
along an off-axis direction:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
L = [1, 1, 1] # normal or "line of sight" vector
N = [0, 1, 0] # north or "up" vector
p = yt.ParticleProjectionPlot(
ds, L, [("all", "particle_mass")], width=(0.05, 0.05), depth=0.3, north_vector=N
)
p.set_unit(("all", "particle_mass"), "Msun")
p.save()
Here is an example of using the ``data_source`` argument to :class:`~yt.visualization.particle_plots.ParticlePhasePlot`
to only consider the particles that lie within a 50 kpc sphere around the domain center:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
my_sphere = ds.sphere("c", (50.0, "kpc"))
p = yt.ParticlePhasePlot(
my_sphere,
("all", "particle_velocity_x"),
("all", "particle_velocity_y"),
("all", "particle_mass")
)
p.set_unit(("all", "particle_velocity_x"), "km/s")
p.set_unit(("all", "particle_velocity_y"), "km/s")
p.set_unit(("all", "particle_mass"), "Msun")
p.set_ylim(-400, 400)
p.set_xlim(-400, 400)
p.save()
:class:`~yt.visualization.particle_plots.ParticleProjectionPlot` objects also admit a ``density``
flag, which allows one to plot the surface density of a projected quantity. This simply divides
the quantity in each pixel of the plot by the area of that pixel. It also changes the label on the
colorbar to reflect the new units and the fact that it is a density. This may make most sense in
the case of plotting the projected particle mass, in which case you can plot the projected particle
mass density:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ParticleProjectionPlot(ds, 2, [("all", "particle_mass")], width=(0.5, 0.5), density=True)
p.set_unit(("all", "particle_mass"), "Msun/kpc**2") # Note that the dimensions reflect the density flag
p.save()
Finally, as with 1D and 2D profiles, you can create a :class:`~yt.data_objects.profiles.ParticleProfile`
object separately using the :func:`~yt.data_objects.profiles.create_profile` function, and then use it to
create a :class:`~yt.visualization.particle_plots.ParticlePhasePlot` object using the
:meth:`~yt.visualization.particle_plots.ParticlePhasePlot.from_profile` method. In this example,
we have also used the ``weight_field`` argument to compute the average ``particle_mass`` in each
pixel, instead of the total:
.. python-script::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
ad = ds.all_data()
profile = yt.create_profile(
ad,
[("all", "particle_velocity_x"), ("all", "particle_velocity_y")],
[("all", "particle_mass")],
n_bins=800,
weight_field=("all", "particle_ones"),
)
p = yt.ParticlePhasePlot.from_profile(profile)
p.set_unit(("all", "particle_velocity_x"), "km/s")
p.set_unit(("all", "particle_velocity_y"), "km/s")
p.set_unit(("all", "particle_mass"), "Msun")
p.set_ylim(-400, 400)
p.set_xlim(-400, 400)
p.save()
Under the hood, the :class:`~yt.data_objects.profiles.ParticleProfile` class works a lot like a
:class:`~yt.data_objects.profiles.Profile2D` object, except that instead of just binning the
particle field, you can also use higher-order deposition functions like the cloud-in-cell
interpolant to spread out the particle quantities over a few cells in the profile. The
:func:`~yt.data_objects.profiles.create_profile` function will automatically detect when all the fields
you pass in are particle fields, and return a :class:`~yt.data_objects.profiles.ParticleProfile`
if that is the case. For a complete description of the :class:`~yt.data_objects.profiles.ParticleProfile`
class please consult the reference documentation.
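For instance, here is a brief sketch of passing a cloud-in-cell deposition
through :func:`~yt.data_objects.profiles.create_profile`, mirroring the phase
plot example above (the bin count is arbitrary):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    ad = ds.all_data()
    profile = yt.create_profile(
        ad,
        [("all", "particle_velocity_x"), ("all", "particle_velocity_y")],
        [("all", "particle_mass")],
        n_bins=256,
        weight_field=None,
        deposition="cic",  # spread each particle over the nearest bins
    )
    p = yt.ParticlePhasePlot.from_profile(profile)
    p.save()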
.. _interactive-plotting:
Interactive Plotting
--------------------
The best way to interactively plot data is through the IPython notebook. Many
detailed tutorials on using the IPython notebook can be found at
:ref:`notebook-tutorial`. The simplest way to launch the notebook is to
type:
.. code-block:: bash
yt notebook
at the command line. This will prompt you for a password (so that if you're on
a shared user machine no one else can pretend to be you!) and then spawn an
IPython notebook you can connect to.
If you want to see yt plots inline inside your notebook, you need only create a
plot and then call ``.show()`` and the image will appear inline:
.. notebook-cell::
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ProjectionPlot(ds, "z", ("gas", "density"), center='m', width=(10,'kpc'),
weight_field=("gas", "density"))
p.set_figure_size(5)
p.show()
.. _saving_plots:
Saving Plots
------------
If you want to save your yt plots, you have a couple of options for customizing
the plot filenames. If you don't care what the filenames are, just calling the
``save`` method with no additional arguments usually suffices:
.. code-block:: python
import yt
ds = yt.load("GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0100")
slc = yt.SlicePlot(ds, "z", [("gas", "kT"), ("gas", "density")], width=(500.0, "kpc"))
slc.save()
which will yield PNG plots with the filenames
.. code-block:: bash
    $ ls *.png
sloshing_nomag2_hdf5_plt_cnt_0100_Slice_z_density.png
sloshing_nomag2_hdf5_plt_cnt_0100_Slice_z_kT.png
which has a general form of
.. code-block:: bash
[dataset name]_[plot type]_[axis]_[field name].[suffix]
Calling ``save`` with a single argument or the ``name`` keyword argument
specifies an alternative name for the plot:
.. code-block:: python
slc.save("bananas")
or
.. code-block:: python
slc.save(name="bananas")
yields
.. code-block:: bash
    $ ls *.png
bananas_Slice_z_kT.png
bananas_Slice_z_density.png
If you call ``save`` with a full filename with a file suffix, the plot
will be saved with that filename:
.. code-block:: python
slc.save("sloshing.png")
Since this will take any field and plot it with this filename, it is
typically only useful if you are plotting one field. If you want to
simply change the image format of the plotted file, use the ``suffix``
keyword:
.. code-block:: python
slc.save(name="bananas", suffix="eps")
yielding
.. code-block:: bash
$ ls *.eps
bananas_Slice_z_kT.eps
bananas_Slice_z_density.eps
.. _remaking-plots:
Remaking Figures from Plot Datasets
-----------------------------------
When working with datasets that are too large to be stored locally,
making figures just right can be cumbersome as it requires continuously
moving images somewhere they can be viewed. However, image creation is
actually a two step process of first creating the projection, slice,
or profile object, and then converting that object into an actual image.
Fortunately, the hard part (creating slices, projections, profiles) can
be separated from the easy part (generating images). The intermediate
slice, projection, and profile objects can be saved as reloadable
datasets, then handed back to the plotting machinery discussed here.
For slices and projections, the saveable object is associated with the
plot object as ``data_source``. This can be saved with the
:func:`~yt.data_objects.data_containers.YTDataContainer.save_as_dataset` function. For
more information, see :ref:`saving_data`.
.. code-block:: python
p = yt.ProjectionPlot(ds, "x", ("gas", "density"), weight_field=("gas", "density"))
fn = p.data_source.save_as_dataset()
This function will optionally take a ``filename`` keyword that follows
the same logic as discussed above in :ref:`saving_plots`. The filename
to which the dataset was written will be returned.
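For instance, reusing the ``p`` object from above (the filename here is an
arbitrary choice):

.. code-block:: python

    fn = p.data_source.save_as_dataset(filename="my_projection_data")
    print(fn)  # the file the intermediate dataset was written to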
Once saved, this file can be reloaded completely independently of the
original dataset and given back to the plot function with the same
arguments. One can now continue to tweak the figure to one's liking.
.. code-block:: python
new_ds = yt.load(fn)
new_p = yt.ProjectionPlot(
new_ds, "x", ("gas", "density"), weight_field=("gas", "density")
)
new_p.save()
The same functionality is available for profile and phase plots. In
each case, a special data container, ``data``, is given to the plotting
functions.
For ``ProfilePlot``:
.. code-block:: python
ad = ds.all_data()
p1 = yt.ProfilePlot(
ad, ("gas", "density"), ("gas", "temperature"), weight_field=("gas", "mass")
)
# note that ProfilePlots can hold a list of profiles
fn = p1.profiles[0].save_as_dataset()
new_ds = yt.load(fn)
p2 = yt.ProfilePlot(
new_ds.data,
("gas", "density"),
("gas", "temperature"),
weight_field=("gas", "mass"),
)
p2.save()
For ``PhasePlot``:
.. code-block:: python
ad = ds.all_data()
p1 = yt.PhasePlot(
ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"), weight_field=None
)
fn = p1.profile.save_as_dataset()
new_ds = yt.load(fn)
p2 = yt.PhasePlot(
new_ds.data,
("gas", "density"),
("gas", "temperature"),
("gas", "mass"),
weight_field=None,
)
p2.save()
.. _eps-writer:
Publication-ready Figures
-------------------------
While the routines above give a convenient method to inspect and
visualize your data, publishers often require figures to be in PDF or
EPS format. While matplotlib supports vector graphics and image
compression in PDF formats, it does not support compression in EPS
formats. The :class:`~yt.visualization.eps_writer.DualEPS` module
provides an interface to `PyX <https://pyx-project.org/>`_,
which is a Python abstraction of the PostScript drawing model with a
LaTeX interface. It is optimal for publications to provide figures
with vector graphics to avoid rasterization of the lines and text,
along with compression to produce figures that do not have a large
filesize.
.. note::
   PyX must be installed, which can be accomplished with
   ``python -m pip install pyx``.
This module can take any of the plots mentioned above and create an
EPS or PDF figure. For example,
.. code-block:: python
import yt.visualization.eps_writer as eps
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.set_width(25, "kpc")
eps_fig = eps.single_plot(slc)
eps_fig.save_fig("zoom", format="eps")
eps_fig.save_fig("zoom-pdf", format="pdf")
The ``eps_fig`` object exposes all of the low-level functionality of
``PyX`` for further customization (see the `PyX documentation
<https://pyx-project.org/manual/>`_). There are a few
convenience routines in ``eps_writer``, such as drawing a circle,
.. code-block:: python
eps_fig.circle(radius=0.2, loc=(0.5, 0.5))
    eps_fig.save_fig("zoom-circle", format="eps")
with a radius of 0.2 at a center of (0.5, 0.5), both of which are in
units of the figure's field of view. The
:func:`~yt.visualization.eps_writer.multiplot_yt` routine also
provides a convenient method to produce multi-panel figures
from a PlotWindow. For example,
.. code-block:: python
import yt
import yt.visualization.eps_writer as eps
slc = yt.SlicePlot(
ds,
"z",
[
("gas", "density"),
("gas", "temperature"),
("gas", "pressure"),
("gas", "velocity_magnitude"),
],
)
slc.set_width(25, "kpc")
eps_fig = eps.multiplot_yt(2, 2, slc, bare_axes=True)
eps_fig.scale_line(0.2, "5 kpc")
eps_fig.save_fig("multi", format="eps")
will produce a 2x2 panel figure with a scale bar indicating 5 kpc.
The routine will try its best to place the colorbars in the optimal
margin, but this can be overridden by providing the keyword
``cb_location`` with a dict whose keys are the fields and whose values are one
of ``right``, ``left``, ``top``, or ``bottom``.
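For example, here is a brief sketch building on the ``slc`` object from the
previous snippet. Whether the dictionary keys should be plain field names or
full field tuples may depend on the yt version, so treat the keys below as an
assumption:

.. code-block:: python

    eps_fig = eps.multiplot_yt(
        2,
        2,
        slc,
        bare_axes=True,
        cb_location={
            "density": "left",
            "temperature": "right",
            "pressure": "bottom",
            "velocity_magnitude": "top",
        },
    )
    eps_fig.save_fig("multi_cb", format="eps")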
You can also combine slices, projections, and phase plots. Here is
an example that includes slices and phase plots:
.. code-block:: python
    import yt
    from yt import PhasePlot, SlicePlot
    from yt.visualization.eps_writer import multiplot_yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p1 = SlicePlot(ds, "x", ("gas", "density"))
p1.set_width(10, "kpc")
p2 = SlicePlot(ds, "x", ("gas", "temperature"))
p2.set_width(10, "kpc")
p2.set_cmap(("gas", "temperature"), "hot")
sph = ds.sphere(ds.domain_center, (10, "kpc"))
p3 = PhasePlot(
sph,
"radius",
("gas", "density"),
("gas", "temperature"),
weight_field=("gas", "mass"),
)
p4 = PhasePlot(
sph, "radius", ("gas", "density"), ("gas", "pressure"), weight_field=("gas", "mass")
)
mp = multiplot_yt(
2,
2,
[p1, p2, p3, p4],
savefig="yt",
shrink_cb=0.9,
bare_axes=False,
yt_nocbar=False,
margins=(0.5, 0.5),
)
mp.save_fig("multi_slice_phase")
Using yt's style with matplotlib
--------------------------------
It is possible to use yt's plot style outside of yt itself, with the
:func:`~yt.funcs.matplotlib_style_context` context manager:
.. code-block:: python
import matplotlib.pyplot as plt
import numpy as np
import yt
plt.rcParams["font.size"] = 14
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
with yt.funcs.matplotlib_style_context():
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set(
xlabel=r"$x$",
ylabel=r"$y$",
title="A yt-styled matplotlib figure",
)
Note that :func:`~yt.funcs.matplotlib_style_context` doesn't control the font
size, so we adjust it manually in the preamble.
With matplotlib 3.7 and newer, you can avoid importing yt altogether
.. code-block:: python
# requires matplotlib>=3.7
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["font.size"] = 14
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
with plt.style.context("yt.default"):
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set(
xlabel=r"$x$",
ylabel=r"$y$",
title="A yt-styled matplotlib figure",
)
and you can also enable yt's style without a context manager as
.. code-block:: python
# requires matplotlib>=3.7
import matplotlib.pyplot as plt
import numpy as np
plt.style.use("yt.default")
plt.rcParams["font.size"] = 14
x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set(
xlabel=r"$x$",
ylabel=r"$y$",
title="A yt-styled matplotlib figure",
)
For more details, see `matplotlib's documentation
<https://matplotlib.org/stable/tutorials/introductory/customizing.html#customizing-with-style-sheets>`_
.. _manual-plotting:
Using the Manual Plotting Interface
===================================
Sometimes you need a lot of flexibility in creating plots. While the
:class:`~yt.visualization.plot_window.PlotWindow` provides an easy to
use object that can create nice looking, publication quality plots with a
minimum of effort, there are often times when its ease of use conflicts with
your need to change the font only on the x-axis, or whatever your
need/desire/annoying coauthor requires. To that end, yt provides a number of
ways of getting the raw data that goes into a plot to you in the form of a one
or two dimensional dataset that you can plot using any plotting method you like.
matplotlib or another python library are easiest, but these methods allow you to
take your data and plot it in gnuplot, or any unnamed commercial plotting
packages.
Note that the index object associated with your snapshot file contains a
list of plots you've made in ``ds.plots``.
.. _fixed-resolution-buffers:
Slice, Projections, and other Images: The Fixed Resolution Buffer
-----------------------------------------------------------------
For slices and projections, yt provides a manual plotting interface based on
the :class:`~yt.visualization.fixed_resolution.FixedResolutionBuffer` (hereafter
referred to as FRB) object. Despite its somewhat unwieldy name, at its heart, an
FRB is a very simple object: it's essentially a window into your data: you give
it a center and a width or a left and right edge, and an image resolution, and
the FRB returns a fully pixelized image. The simplest way to
generate an FRB is to use the ``.to_frb(width, resolution, center=None)`` method
of any two-dimensional data object:
.. python-script::
import matplotlib
matplotlib.use("Agg")
import numpy as np
from matplotlib import pyplot as plt
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
_, c = ds.find_max(("gas", "density"))
proj = ds.proj(("gas", "density"), 0)
width = (10, "kpc") # we want a 1.5 mpc view
res = [1000, 1000] # create an image with 1000x1000 pixels
frb = proj.to_frb(width, res, center=c)
plt.imshow(np.array(frb["gas", "density"]))
plt.savefig("my_perfect_figure.png")
Note that in the above example the axes tick marks indicate pixel indices. If you
want to represent physical distances on your plot axes, you will need to use the
``extent`` keyword of the ``imshow`` function.
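As a brief sketch reusing the FRB from the example above (the extent values
are hard-coded to match the 10 kpc window, and for a projection along the x
axis the image plane axes are y and z):

.. code-block:: python

    plt.imshow(
        np.array(frb["gas", "density"]),
        extent=[-5, 5, -5, 5],  # the 10 kpc window in kpc, centered on the maximum
        origin="lower",
    )
    plt.xlabel("y (kpc)")
    plt.ylabel("z (kpc)")
    plt.savefig("figure_with_physical_axes.png")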
The FRB is a very small object that can be deleted and recreated quickly (in
fact, this is how ``PlotWindow`` plots work behind the scenes). Furthermore, you
can add new fields in the same "window", and each of them can be plotted with
their own zlimit. This is quite useful for creating a mosaic of the same region
in space with Density, Temperature, and x-velocity, for example. Each of these
quantities requires a substantially different set of limits.
A more complex example, showing a few yt helper functions that can make
setting up multiple axes with colorbars easier than it would be using only
matplotlib can be found in the :ref:`advanced-multi-panel` cookbook recipe.
.. _frb-filters:
Fixed Resolution Buffer Filters
-------------------------------
The FRB can be modified using a set of predefined filters in order to, e.g.,
create realistic-looking mock observation images out of simulation data.
Applying a filter is an irreversible operation, hence the order in which the
filters are applied matters.
.. python-script::
import matplotlib
matplotlib.use("Agg")
from matplotlib import pyplot as plt
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
slc = ds.slice("z", 0.5)
frb = slc.to_frb((20, "kpc"), 512)
frb.apply_gauss_beam(nbeam=30, sigma=2.0)
frb.apply_white_noise(5e-23)
plt.imshow(frb["gas", "density"].d)
plt.savefig("frb_filters.png")
Currently available filters:
Gaussian Smoothing
~~~~~~~~~~~~~~~~~~
.. function:: apply_gauss_beam(self, nbeam=30, sigma=2.0)
(This is a proxy for
:class:`~yt.visualization.fixed_resolution_filters.FixedResolutionBufferGaussBeamFilter`.)
   This filter convolves the FRB with a 2D Gaussian that is "nbeam" pixels wide
   and has standard deviation "sigma".
White Noise
~~~~~~~~~~~
.. function:: apply_white_noise(self, bg_lvl=None)
(This is a proxy for
:class:`~yt.visualization.fixed_resolution_filters.FixedResolutionBufferWhiteNoiseFilter`.)
This filter adds white noise with the amplitude "bg_lvl" to the FRB.
If "bg_lvl" is not present, 10th percentile of the FRB's values is used
instead.
.. _manual-line-plots:
Line Plots
----------
This is perhaps the simplest thing to do. yt provides a number of one
dimensional objects, and these return a 1-D numpy array of their contents with
direct dictionary access. As a simple example, take a
:class:`~yt.data_objects.selection_data_containers.YTOrthoRay` object, which can be
created from an index by calling ``ds.ortho_ray(axis, center)``.
.. python-script::
import matplotlib
matplotlib.use("Agg")
import numpy as np
from matplotlib import pyplot as plt
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
_, c = ds.find_max(("gas", "density"))
ax = 0 # take a line cut along the x axis
# cutting through the y0,z0 such that we hit the max density
ray = ds.ortho_ray(ax, (c[1], c[2]))
# Sort the ray values by 'x' so there are no discontinuities
# in the line plot
srt = np.argsort(ray["index", "x"])
plt.subplot(211)
plt.semilogy(np.array(ray["index", "x"][srt]), np.array(ray["gas", "density"][srt]))
plt.ylabel("density")
plt.subplot(212)
plt.semilogy(np.array(ray["index", "x"][srt]), np.array(ray["gas", "temperature"][srt]))
plt.xlabel("x")
plt.ylabel("temperature")
plt.savefig("den_temp_xsweep.png")
Of course, you'll likely want to do something more sophisticated than using the
matplotlib defaults, but this gives the general idea.
.. _extracting-isocontour-information:
.. _surfaces:
3D Surfaces and Sketchfab
=========================
.. sectionauthor:: Jill Naiman and Matthew Turk
Surface Objects and Extracting Isocontour Information
-----------------------------------------------------
yt contains an implementation of the `Marching Cubes
<https://en.wikipedia.org/wiki/Marching_cubes>`__ algorithm, which can operate on
3D data objects. This provides two things. The first is to identify
isocontours and return either the geometry of those isocontours or to return
another field value sampled along that isocontour. The second piece of
functionality is to calculate the flux of a field over an isocontour.
Note that these isocontours are not guaranteed to be topologically connected.
In fact, inside a given data object, the marching cubes algorithm will return
all isocontours, not just a single connected one. This means if you encompass
two clumps of a given density in your data object and extract an isocontour at
that density, it will include both of the clumps.
This means that with adaptive mesh refinement
data, you *will* see cracks across refinement boundaries unless a
"crack-fixing" step is applied to match up these boundaries. yt does not
perform such an operation, and so there will be seams visible in 3D views of
your isosurfaces.
Surfaces can be exported in `OBJ format
<https://en.wikipedia.org/wiki/Wavefront_.obj_file>`_, values can be sampled
at the center of each face of the surface, and the flux of a given field can be
calculated over the surface. This means you can, for instance, extract an
isocontour in density and calculate the mass flux over that isocontour. It
also means you can export a surface from yt and view it in something like
`Blender <https://www.blender.org/>`__, `MeshLab
<http://www.meshlab.net>`__, or even on your Android or iOS device in
`MeshPad <http://www.meshpad.org/>`__.
To extract geometry or sample a field, call
:meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.extract_isocontours`. To
calculate a flux, call
:meth:`~yt.data_objects.data_containers.YTSelectionContainer3D.calculate_isocontour_flux`.
Both of these operations will run in parallel. For more information on enabling
parallelism in yt, see :ref:`parallel-computation`.
Alternatively, you can make an object called ``YTSurface`` that makes
this process much easier. You can create one of these objects by specifying a
source data object and a field over which to identify a surface at a given
value. For example:
.. code-block:: python
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sphere = ds.sphere("max", (1.0, "Mpc"))
surface = ds.surface(sphere, ("gas", "density"), 1e-27)
This object, ``surface``, can be queried for values on the surface. For
instance:
.. code-block:: python
print(surface["gas", "temperature"].min(), surface["gas", "temperature"].max())
will return the values 11850.7476943 and 13641.0663899. These values are
interpolated to the face centers of every triangle that constitutes a portion
of the surface. Note that reading a new field requires re-calculating the
entire surface, so it's not the fastest operation. You can get the vertices of
the triangles by looking at the property ``.vertices``.
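As a quick sketch (reusing the ``surface`` object created above), you can
inspect the geometry and compute a mass flux through the isocontour roughly
like this; treat the exact call as illustrative:

.. code-block:: python

    # Vertices of the triangles that tile the surface
    print(surface.vertices.shape)

    # Mass flux of gas through the density isocontour: the three velocity
    # components define the vector field, and density is the fluxing field.
    flux = surface.calculate_flux(
        ("gas", "velocity_x"),
        ("gas", "velocity_y"),
        ("gas", "velocity_z"),
        ("gas", "density"),
    )
    print(flux)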
Exporting to a File
-------------------
If you want to export this to a `PLY file
<https://en.wikipedia.org/wiki/PLY_(file_format)>`_ you can call the routine
``export_ply``, which will write to a file and optionally sample a field at
every face or vertex, outputting a color value to the file as well. This file
can then be viewed in MeshLab, Blender or on the website `Sketchfab.com
<https://sketchfab.com>`__. But if you want to view it on Sketchfab, there's an
even easier way!
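Before moving on, here is a minimal, illustrative ``export_ply`` call (reusing
the ``surface`` object created above and coloring faces by temperature); the
keyword names mirror those of ``export_sketchfab`` below:

.. code-block:: python

    surface.export_ply(
        "galaxy_surface.ply",
        color_field=("gas", "temperature"),
        color_log=True,
    )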
Exporting to Sketchfab
----------------------
`Sketchfab <https://sketchfab.com>`__ is a website that uses WebGL, a relatively
new technology for displaying 3D graphics in any browser. It's very fast and
typically requires no plugins. Plus, it means that you can share data with
anyone and they can view it immersively without having to download the data or
any software packages! Sketchfab provides a free tier for up to 10 models, and
these models can be embedded in websites.
There are lots of reasons to want to export to Sketchfab. For instance, if
you're looking at a galaxy formation simulation and you publish a paper, you
can include a link to the model in that paper (or in the arXiv listing) so that
people can explore and see what the data looks like. You can also embed a
model in a website with other supplemental data, or you can use Sketchfab to
discuss morphological properties of a dataset with collaborators. It's also
just plain cool.
The ``YTSurface`` object includes a method to upload directly to Sketchfab,
but it requires that you get an API key first. You can get this API key by
creating an account and then going to your "dashboard," where it will be listed
on the right hand side. Once you've obtained it, put it into your
``~/.config/yt/yt.toml`` file under the heading ``[yt]`` as the variable
``sketchfab_api_key``. If you don't want to do this, you can also supply it as
an argument to the function ``export_sketchfab``.
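For reference, the relevant section of ``~/.config/yt/yt.toml`` would look
something like this (with your own key substituted):

.. code-block:: toml

    [yt]
    sketchfab_api_key = "your-api-key-here"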
Now you can run a script like this:
.. code-block:: python
import yt
from yt.units import kpc
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
dd = ds.sphere(ds.domain_center, (500, "kpc"))
rho = 1e-28
bounds = [[dd.center[i] - 250 * kpc, dd.center[i] + 250 * kpc] for i in range(3)]
surf = ds.surface(dd, ("gas", "density"), rho)
upload_id = surf.export_sketchfab(
title="galaxy0030 - 1e-28",
description="Extraction of Density (colored by temperature) at 1e-28 g/cc",
color_field=("gas", "temperature"),
color_map="hot",
color_log=True,
bounds=bounds,
)
and yt will extract a surface, convert to a format that Sketchfab.com
understands (PLY, in a zip file) and then upload it using your API key. For
this demo, I've used data kindly provided by Ryan Joung from a simulation of
galaxy formation. Here's what my newly-uploaded model looks like, using the
embed code from Sketchfab:
.. raw:: html
<iframe width="640" height="480" src="https://sketchfab.com/models/ff59dacd55824110ad5bcc292371a514/embed" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" onmousewheel=""></iframe>
As a note, Sketchfab has a maximum model size of 50MB for the free account.
50MB is pretty hefty, though, so it shouldn't be a problem for most
needs. Additionally, if you have an eligible e-mail address associated with a
school or university, you can request a free professional account, which allows
models up to 200MB. See https://sketchfab.com/education for details.
OBJ and MTL Files
-----------------
If the ability to maneuver around an isosurface of your 3D simulation in
`Sketchfab <https://sketchfab.com>`__ cost you half a day of work (let's be
honest, 2 days), prepare to be even less productive. With a new `OBJ file
<https://en.wikipedia.org/wiki/Wavefront_.obj_file>`__ exporter, you can now
upload multiple surfaces of different transparencies in the same file.
The following code snippet produces two files which contain the vertex info
(surfaces.obj) and color/transparency info (surfaces.mtl) for a 3D
galaxy simulation:
.. code-block:: python
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = "./surfaces"
sphere = ds.sphere("max", (1.0, "Mpc"))
for i, r in enumerate(rho):
surf = ds.surface(sphere, ("gas", "density"), r)
surf.export_obj(
filename,
transparency=trans[i],
color_field=("gas", "temperature"),
plot_index=i,
)
The calling sequence is fairly similar to the ``export_ply`` function
`previously used <https://blog.yt-project.org/post/3DSurfacesAndSketchFab/>`__
to export 3D surfaces. However, one can now specify a transparency for each
surface of interest, and each surface is enumerated in the OBJ files with ``plot_index``.
This means one could potentially add surfaces to a previously
created file by setting ``plot_index`` to the number of previously written
surfaces.
One tricky thing: the header of the OBJ file points to the MTL file (with
the header command ``mtllib``). This means if you move one or both of the files
you may have to change the header to reflect their new directory location.
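For instance, the line to update near the top of ``surfaces.obj`` looks like:

.. code-block:: text

    mtllib surfaces.mtl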
A Few More Options
------------------
There are a few extra inputs for formatting the surface files you may want to
use; a short example combining them follows this list.
(1) Setting ``dist_fac`` will divide all the vertex coordinates by this factor.
By default, the vertices are scaled by the physical bounds of your sphere.
(2) Setting ``color_field_max`` and/or ``color_field_min`` will scale the colors
of all surfaces between this min and max. By default, the colors of each
surface are scaled to that surface's own min and max values.
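As a sketch of how these options combine, a single export call might look like
the following; the numerical values are purely illustrative:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sphere = ds.sphere("max", (1.0, "Mpc"))
    surf = ds.surface(sphere, ("gas", "density"), 1e-27)
    surf.export_obj(
        "./surfaces_scaled",
        transparency=0.5,
        color_field=("gas", "temperature"),
        color_field_min=1e4,  # clamp the color scale (in field units)
        color_field_max=1e6,
        dist_fac=3.086e24,  # divide vertex coordinates by ~1 Mpc in cm
        plot_index=0,
    )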
Uploading to SketchFab
----------------------
To upload to `Sketchfab <https://sketchfab.com>`__ one only needs to zip the
OBJ and MTL files together, and then upload via your dashboard prompts in
the usual way. For example, the above script produces:
.. raw:: html
<iframe frameborder="0" height="480" width="854" allowFullScreen
webkitallowfullscreen="true" mozallowfullscreen="true"
src="https://skfb.ly/5k4j2fdcb?autostart=0&transparent=0&autospin=0&controls=1&watermark=1">
</iframe>
Importing to MeshLab and Blender
--------------------------------
The new OBJ formatting will produce multi-colored surfaces in both
`MeshLab <http://www.meshlab.net/>`__ and `Blender <https://www.blender.org/>`__,
a feature not possible with the
`previous PLY exporter <https://blog.yt-project.org/post/3DSurfacesAndSketchFab/>`__.
To see colors in MeshLab go to the "Render" tab and
select "Color -> Per Face". Note in both MeshLab and Blender, unlike Sketchfab, you can't see
transparencies until you render.
...One More Option
------------------
If you've started poking around the actual code instead of skipping off to
lose a few days running around your own simulations, you may have noticed
there are a few more options than those listed above, specifically a few
related to something called "Emissivity." This allows you to output one more
type of variable on your surfaces. For example:
.. code-block:: python
import numpy as np

import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
rho = [2e-27, 1e-27]
trans = [1.0, 0.5]
filename = "./surfaces"
def emissivity(field, data):
    return data["gas", "density"] ** 2 * np.sqrt(data["gas", "temperature"])

ds.add_field(
    ("gas", "emissivity"),
    function=emissivity,
    sampling_type="cell",
    units="g**2*K**0.5/cm**6",
)
sphere = ds.sphere("max", (1.0, "Mpc"))
for i, r in enumerate(rho):
surf = ds.surface(sphere, ("gas", "density"), r)
surf.export_obj(
filename,
transparency=trans[i],
color_field=("gas", "temperature"),
emit_field=("gas", "emissivity"),
plot_index=i,
)
will output the same OBJ and MTL as in our previous example, but it will scale
an emissivity parameter by our new field. Technically, this makes our outputs
not really OBJ files at all, but a new sort of hybrid file; however, we needn't worry
too much about that for now.
This parameter is useful if you want to upload your files in Blender and have the
embedded rendering engine do some approximate ray-tracing on your transparencies
and emissivities. This does take some slight modifications to the OBJ importer
scripts in Blender. For example, on a Mac, you would modify the file
"/Applications/Blender/blender.app/Contents/MacOS/2.65/scripts/addons/io_scene_obj/import_obj.py",
in the function "create_materials" with:
.. code-block:: diff
# ...
elif line_lower.startswith(b'tr'): # translucency
context_material.translucency = float_func(line_split[1])
elif line_lower.startswith(b'tf'):
# rgb, filter color, blender has no support for this.
pass
elif line_lower.startswith(b'em'): # MODIFY: ADD THIS LINE
context_material.emit = float_func(line_split[1]) # MODIFY: THIS LINE TOO
elif line_lower.startswith(b'illum'):
illum = int(line_split[1])
# ...
To use this in Blender, you might create a
`Blender script <https://docs.blender.org/manual/en/latest/advanced/scripting/introduction.html>`__
like the following:
.. code-block:: python
from math import radians
import bpy
bpy.ops.import_scene.obj(filepath="./surfaces.obj") # will use new importer
# set up lighting = indirect
bpy.data.worlds["World"].light_settings.use_indirect_light = True
bpy.data.worlds["World"].horizon_color = [0.0, 0.0, 0.0] # background = black
# have to use approximate, not ray tracing for emitting objects ...
# ... for now...
bpy.data.worlds["World"].light_settings.gather_method = "APPROXIMATE"
bpy.data.worlds["World"].light_settings.indirect_factor = 20.0 # turn up all emiss
# set up camera to be on -x axis, facing toward your object
scene = bpy.data.scenes["Scene"]
scene.camera.location = [-0.12, 0.0, 0.0] # location
scene.camera.rotation_euler = [
radians(90.0),
0.0,
radians(-90.0),
] # face to (0,0,0)
# render
scene.render.filepath = "/Users/jillnaiman/surfaces_blender" # needs full path
bpy.ops.render.render(write_still=True)
The above bit of code would produce an image like so:
.. image:: _images/surfaces_blender.png
Note that the hottest stuff is brightly shining, while the cool stuff is less so
(making the inner isodensity contour barely visible from the outside of the surfaces).
If the Blender image caught your fancy, you'll be happy to know there is a greater
integration of Blender and yt in the works, so stay tuned!
.. _streamlines:
Streamlines: Tracking the Trajectories of Tracers in your Data
==============================================================
Streamlines, as implemented in yt, are defined as being parallel to a
vector field at all points. While commonly used to follow the
velocity flow or magnetic field lines, they can be defined to follow
any three-dimensional vector field. Once an initial condition and
total length of the streamline are specified, the streamline is
uniquely defined. Relatedly, yt also has the ability to follow
:ref:`particle-trajectories`.
Method
------
Streamlining through a volume is useful for a variety of analysis
tasks. By specifying a set of starting positions, the user is
returned a set of 3D positions that can, in turn, be used to visualize
the 3D path of the streamlines. Additionally, individual streamlines
can be converted into
:class:`~yt.data_objects.construction_data_containers.YTStreamline` objects,
and queried for all the available fields along the streamline.
The implementation of streamlining in yt is described below.
#. Decompose the volume into a set of non-overlapping, fully domain
tiling bricks, using the
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` homogenized
volume.
#. For every streamline starting position:
#. While the length of the streamline is less than the requested
length:
#. Find the brick that contains the current position
#. If not already present, generate vertex-centered data for
the vector fields defining the streamline.
#. While inside the brick
#. Integrate the streamline path using a Runge-Kutta 4th
order method and the vertex centered data.
#. During the intermediate steps of each RK4 step, if the
position is updated to outside the current brick,
interrupt the integration and locate a new brick at the
intermediate position.
#. The set of streamline positions is stored in the
:class:`~yt.visualization.streamlines.Streamlines` object.
Example Script
++++++++++++++
.. python-script::
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import yt
from yt.units import Mpc
from yt.visualization.api import Streamlines
# Load the dataset
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# Define c: the center of the box, N: the number of streamlines,
# scale: the spatial scale of the streamlines relative to the boxsize,
# and then pos: the random positions of the streamlines.
c = ds.domain_center
N = 100
scale = ds.domain_width[0]
pos_dx = np.random.random((N, 3)) * scale - scale / 2.0
pos = c + pos_dx
# Create streamlines of the 3D vector velocity and integrate them through
# the box defined above
streamlines = Streamlines(
ds,
pos,
("gas", "velocity_x"),
("gas", "velocity_y"),
("gas", "velocity_z"),
length=1.0 * Mpc,
get_magnitude=True,
)
streamlines.integrate_through_volume()
# Create a 3D plot, trace the streamlines through the 3D volume of the plot
fig = plt.figure()
ax = Axes3D(fig, auto_add_to_figure=False)
fig.add_axes(ax)
for stream in streamlines.streamlines:
stream = stream[np.all(stream != 0.0, axis=1)]
ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1)
# Save the plot to disk.
plt.savefig("streamlines.png")
Data Access Along the Streamline
--------------------------------
.. note::
This functionality has not been implemented yet in the 3.x series of
yt. If you are interested in working on this and have questions, please
let us know on the yt-dev mailing list.
Once the streamlines are found, a
:class:`~yt.data_objects.construction_data_containers.YTStreamline` object can
be created using the
:meth:`~yt.visualization.streamlines.Streamlines.path` function, which
takes as input the index of the streamline requested. This conversion
is done by creating a mask that defines where the streamline is, and
creating 't' and 'dts' fields that define the dimensionless streamline
integration coordinate and integration step size. Once defined, fields
can be accessed in the standard manner.
Example Script
++++++++++++++++
.. code-block:: python
import matplotlib.pyplot as plt
import yt
from yt.visualization.api import Streamlines
ds = yt.load("DD1701") # Load ds
streamlines = Streamlines(ds, ds.domain_center)
streamlines.integrate_through_volume()
stream = streamlines.path(0)
plt.semilogy(stream["t"], stream["gas", "density"], "-x")
Running in Parallel
--------------------
The integration of the streamline paths is "embarrassingly" parallelized by
splitting the streamlines up between the processors. Upon completion,
each processor has access to all of the streamlines through the use of
a reduction operation.
For more information on enabling parallelism in yt, see
:ref:`parallel-computation`.
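A minimal sketch of running the earlier example under MPI is shown below;
``yt.enable_parallelism()`` is the standard switch, and the script would be
launched with something like ``mpirun -np 4 python streamlines_parallel.py``
(the script name is just an example):

.. code-block:: python

    import numpy as np

    import yt
    from yt.units import Mpc
    from yt.visualization.api import Streamlines

    yt.enable_parallelism()

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    c = ds.domain_center
    N = 100
    scale = ds.domain_width[0]
    pos = c + (np.random.random((N, 3)) - 0.5) * scale

    streamlines = Streamlines(
        ds,
        pos,
        ("gas", "velocity_x"),
        ("gas", "velocity_y"),
        ("gas", "velocity_z"),
        length=1.0 * Mpc,
    )
    # The streamlines are divided among the MPI tasks and gathered at the end.
    streamlines.integrate_through_volume()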
.. _volume_rendering:
3D Visualization and Volume Rendering
=====================================
yt has the ability to create 3D visualizations using a process known as *volume
rendering* (oftentimes abbreviated VR). This volume rendering code differs
from the standard yt infrastructure for generating :ref:`simple-inspection`
in that it evaluates the radiative transfer equations through the volume with
user-defined transfer functions for each ray. Thus it can accommodate both
opaque and transparent structures appropriately. Currently all of the
rendering capabilities are implemented in software, requiring no specialized
hardware. Optimized versions implemented with OpenGL and utilizing graphics
processors are being actively developed.
.. note::
There is a Jupyter notebook containing a volume rendering tutorial available
at :ref:`volume-rendering-tutorial`.
Volume Rendering Introduction
-----------------------------
Constructing a 3D visualization is a process of describing the "scene" that
will be rendered. This includes the location of the viewing point (i.e., where
the "camera" is placed), the method by which a system would be viewed (i.e.,
the "lens," which may be orthographic, perspective, fisheye, spherical, and so
on) and the components that will be rendered (render "sources," such as volume
elements, lines, annotations, and opaque surfaces). The 3D plotting
infrastructure then develops a resultant image from this scene, which can be
saved to a file or viewed inline.
By constructing the scene in this programmatic way, full control can be had
over each component in the scene as well as the method by which the scene is
rendered; this can be used to prototype visualizations, inject annotation such
as grid or continent lines, and then to render a production-quality
visualization. By changing the "lens" used, a single camera path can output
images suitable for planetarium domes, immersive and head tracking systems
(such as the Oculus Rift or recent 360-degree/virtual reality movie viewers
such as the mobile YouTube app), as well as standard screens.
.. image:: _images/scene_diagram.svg
:width: 50%
:align: center
:alt: Diagram of a 3D Scene
.. _scene-description:
Volume Rendering Components
---------------------------
The Scene class and its subcomponents are organized as follows. Indented
objects *hang* off of their parent object.
* :ref:`Scene <scene>` - container object describing a volume and its contents
* :ref:`Sources <render-sources>` - objects to be rendered
* :ref:`VolumeSource <volume-sources>` - simulation volume tied to a dataset
* :ref:`TransferFunction <transfer_functions>` - mapping of simulation field values to color, brightness, and transparency
* :ref:`OpaqueSource <opaque-sources>` - Opaque structures like lines, dots, etc.
* :ref:`Annotations <volume_rendering_annotations>` - Annotated structures like grid cells, simulation boundaries, etc.
* :ref:`Camera <camera>` - object for rendering; consists of a location, focus, orientation, and resolution
* :ref:`Lens <lenses>` - object describing method for distributing rays through Sources
.. _scene:
Scene
^^^^^
The :class:`~yt.visualization.volume_rendering.scene.Scene`
is the container class which encompasses the whole of the volume
rendering interface. At its base level, it describes an infinite volume,
with a series of
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
hanging off of it that describe the contents
of that volume. It also contains a
:class:`~yt.visualization.volume_rendering.camera.Camera` for rendering that
volume. All of its classes can be
accessed and modified as properties hanging off of the scene.
The scene's most important functions are
:meth:`~yt.visualization.volume_rendering.scene.Scene.render` for
casting rays through the scene and
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` for saving the
resulting rendered image to disk (see note on :ref:`when_to_render`).
The easiest way to create a scene with sensible defaults is to use the
functions:
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene`
(creates the scene) or
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
(creates the scene and then triggers ray tracing to produce an image).
See the :ref:`annotated-vr-example` for details.
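For instance, a scene with all defaults can be built, inspected, and rendered
in a few lines (using the same sample dataset as the examples below):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, ("gas", "density"))
    print(sc)  # lists the sources, camera, and lens attached to the scene
    sc.save("scene_defaults.png", sigma_clip=4.0)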
.. _render-sources:
RenderSources
^^^^^^^^^^^^^
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
comprise the contents of what is actually *rendered*. One can add several
different RenderSources to a Scene and the ray-tracing step will pass rays
through all of them to produce the final rendered image.
.. _volume-sources:
VolumeSources
+++++++++++++
:class:`~yt.visualization.volume_rendering.render_source.VolumeSource` objects
are 3D :ref:`geometric-objects` of individual datasets placed into the scene
for rendering. Each VolumeSource requires a
:ref:`TransferFunction <transfer_functions>` to describe how the fields in
the VolumeSource dataset produce different colors and brightnesses in the
resulting image.
.. _opaque-sources:
OpaqueSources
+++++++++++++
In addition to semi-transparent objects, fully opaque structures can be added
to a scene as
:class:`~yt.visualization.volume_rendering.render_source.OpaqueSource` objects
including
:class:`~yt.visualization.volume_rendering.render_source.LineSource` objects
and
:class:`~yt.visualization.volume_rendering.render_source.PointSource` objects.
These are useful if you want to annotate locations or particles in an image,
or if you want to draw lines connecting different regions or
vertices. For instance, lines can be used to draw outlines of regions or
continents.
Worked examples of using the ``LineSource`` and ``PointSource`` are available at
:ref:`cookbook-vol-points` and :ref:`cookbook-vol-lines`.
.. _volume_rendering_annotations:
Annotations
+++++++++++
Similar to OpaqueSources, annotations enable the user to highlight
certain information with opaque structures. Examples include
:class:`~yt.visualization.volume_rendering.api.BoxSource`,
:class:`~yt.visualization.volume_rendering.api.GridSource`, and
:class:`~yt.visualization.volume_rendering.api.CoordinateVectorSource`. These
annotations will operate in data space and can draw boxes, grid information,
and also provide a vector orientation within the image.
For example scripts using these features,
see :ref:`cookbook-volume_rendering_annotations`.
.. _transfer_functions:
Transfer Functions
^^^^^^^^^^^^^^^^^^
A transfer function describes how rays that pass through the domain of a
:class:`~yt.visualization.volume_rendering.render_source.VolumeSource` are
mapped from simulation field values to color, brightness, and opacity in the
resulting rendered image. A transfer function consists of an array over
the x and y dimensions. The x dimension typically represents field values in
your underlying dataset to which you want your rendering to be sensitive (e.g.
density from 1e20 to 1e23). The y dimension consists of 4 channels for red,
green, blue, and alpha (opacity). A transfer function starts with all zeros
for its y dimension values, implying that rays traversing the VolumeSource
will not show up at all in the final image. However, you can add features to
the transfer function that will highlight certain field values in your
rendering.
.. _transfer-function-helper:
TransferFunctionHelper
++++++++++++++++++++++
Because good transfer functions can be difficult to generate, the
:class:`~yt.visualization.volume_rendering.transfer_function_helper.TransferFunctionHelper`
exists in order to help create and modify transfer functions with smart
defaults for your datasets.
To ease constructing transfer functions, each ``VolumeSource`` instance has a
``TransferFunctionHelper`` instance associated with it. This is the easiest way
to construct and customize a ``ColorTransferFunction`` for a volume rendering.
In the following example, we make use of the ``TransferFunctionHelper``
associated with a scene's ``VolumeSource`` to create an appealing transfer
function between a physically motivated range of densities in a cosmological
simulation:
.. python-script::
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
# Get a reference to the VolumeSource associated with this scene
# It is the first source associated with the scene, so we can refer to it
# using index 0.
source = sc[0]
# Set the bounds of the transfer function
source.tfh.set_bounds((3e-31, 5e-27))
# set that the transfer function should be evaluated in log space
source.tfh.set_log(True)
# Make underdense regions appear opaque
source.tfh.grey_opacity = True
# Plot the transfer function, along with the CDF of the density field to
# see how the transfer function corresponds to structure in the CDF
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
# save the image, flooring especially bright pixels for better contrast
sc.save("rendering.png", sigma_clip=6.0)
For fun, let's make the same volume rendering, but this time setting
``grey_opacity=False``, which will make overdense regions stand out more:
.. python-script::
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
source = sc[0]
# Set transfer function properties
source.tfh.set_bounds((3e-31, 5e-27))
source.tfh.set_log(True)
source.tfh.grey_opacity = False
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
sc.save("rendering.png", sigma_clip=4.0)
To see a full example on how to use the ``TransferFunctionHelper`` interface,
follow the annotated :ref:`transfer-function-helper-tutorial`.
Color Transfer Functions
++++++++++++++++++++++++
A :class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
is the standard way to map dataset field values to colors, brightnesses,
and opacities in the rendered rays. One can add discrete features to the
transfer function, which will render isocontours in the field data and
works well for visualizing nested structures in a simulation. Alternatively,
one can also add continuous features to the transfer function.
See :ref:`cookbook-custom-transfer-function` for an annotated, runnable tutorial
explaining usage of the ColorTransferFunction.
There are several methods to create a
:class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
for a volume rendering. We will describe the low-level interface for
constructing color transfer functions here, and provide examples for each
option.
add_layers
""""""""""
The easiest way to create a ColorTransferFunction is to use the
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_layers` function,
which will add evenly spaced isocontours along the transfer function, sampling a
colormap to determine the colors of the layers.
.. python-script::
import numpy as np
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
source = sc[0]
source.set_field(("gas", "density"))
source.set_log(True)
bounds = (3e-31, 5e-27)
# Since this rendering is done in log space, the transfer function needs
# to be specified in log space.
tf = yt.ColorTransferFunction(np.log10(bounds))
tf.add_layers(5, colormap="cmyt.arbre")
source.tfh.tf = tf
source.tfh.bounds = bounds
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
sc.save("rendering.png", sigma_clip=6)
sample_colormap
"""""""""""""""
To add a single gaussian layer with a color determined by a colormap value, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.sample_colormap`.
.. python-script::
import numpy as np
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
source = sc[0]
source.set_field(("gas", "density"))
source.set_log(True)
bounds = (3e-31, 5e-27)
# Since this rendering is done in log space, the transfer function needs
# to be specified in log space.
tf = yt.ColorTransferFunction(np.log10(bounds))
tf.sample_colormap(np.log10(1e-30), w=0.01, colormap="cmyt.arbre")
source.tfh.tf = tf
source.tfh.bounds = bounds
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
sc.save("rendering.png", sigma_clip=6)
add_gaussian
""""""""""""
If you would like to add a gaussian with a customized color or no color, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_gaussian`.
.. python-script::
import numpy as np
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
source = sc[0]
source.set_field(("gas", "density"))
source.set_log(True)
bounds = (3e-31, 5e-27)
# Since this rendering is done in log space, the transfer function needs
# to be specified in log space.
tf = yt.ColorTransferFunction(np.log10(bounds))
tf.add_gaussian(np.log10(1e-29), width=0.005, height=[0.753, 1.0, 0.933, 1.0])
source.tfh.tf = tf
source.tfh.bounds = bounds
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
sc.save("rendering.png", sigma_clip=6)
map_to_colormap
"""""""""""""""
Finally, to map a colormap directly to a range in densities use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.map_to_colormap`. This
makes it possible to map a segment of the transfer function space to a colormap
at a single alpha value. Where the above options produced layered volume
renderings, this allows all of the density values in a dataset to contribute to
the volume rendering.
.. python-script::
import numpy as np
import yt
ds = yt.load("Enzo_64/DD0043/data0043")
sc = yt.create_scene(ds, lens_type="perspective")
source = sc[0]
source.set_field(("gas", "density"))
source.set_log(True)
bounds = (3e-31, 5e-27)
# Since this rendering is done in log space, the transfer function needs
# to be specified in log space.
tf = yt.ColorTransferFunction(np.log10(bounds))
def linramp(vals, minval, maxval):
return (vals - vals.min()) / (vals.max() - vals.min())
tf.map_to_colormap(
np.log10(3e-31), np.log10(5e-27), colormap="cmyt.arbre", scale_func=linramp
)
source.tfh.tf = tf
source.tfh.bounds = bounds
source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
sc.save("rendering.png", sigma_clip=6)
Projection Transfer Function
++++++++++++++++++++++++++++
This is designed to allow you to generate projections like what you obtain
from the standard :ref:`projection-plots`, and it forms the basis of
:ref:`off-axis-projections`. See :ref:`cookbook-offaxis_projection` for a
simple example. Note that the integration here is scaled to a width of 1.0;
this means that if you want to apply a colorbar, you will have to multiply by
the integration width (specified when you initialize the volume renderer) in
whatever units are appropriate.
Planck Transfer Function
++++++++++++++++++++++++
This transfer function is designed to apply a semi-realistic color field based
on temperature, emission weighted by density, and approximate scattering based
on the density. This class is currently under-documented, and it may be best
to examine the source code to use it.
More Complicated Transfer Functions
+++++++++++++++++++++++++++++++++++
For more complicated transfer functions, you can use the
:class:`~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction`
object. This allows for a set of weightings, linkages and so on.
All of the information about how all transfer functions are used and values are
extracted is contained in the sourcefile ``utilities/lib/grid_traversal.pyx``.
For more information on how the transfer function is actually applied, look
over the source code there.
.. _camera:
Camera
^^^^^^
The :class:`~yt.visualization.volume_rendering.camera.Camera` object
is what it sounds like, a camera within the Scene. It possesses the
quantities:
* :meth:`~yt.visualization.volume_rendering.camera.Camera.position` - the position of the camera in scene-space
* :meth:`~yt.visualization.volume_rendering.camera.Camera.width` - the width of the plane the camera can see
* :meth:`~yt.visualization.volume_rendering.camera.Camera.focus` - the point in space the camera is looking at
* :meth:`~yt.visualization.volume_rendering.camera.Camera.resolution` - the image resolution
* ``north_vector`` - a vector defining the "up" direction in an image
* :ref:`lens <lenses>` - an object controlling how rays traverse the Scene
.. _camera_movement:
Moving and Orienting the Camera
+++++++++++++++++++++++++++++++
There are multiple ways to manipulate the camera viewpoint and orientation.
One can set the properties listed above explicitly, or one can use the
:class:`~yt.visualization.volume_rendering.camera.Camera` helper methods.
In either case, any change triggers an update of all of the other properties.
Note that the camera exists in a right-handed coordinate system centered on
the camera.
Rotation-related methods
* :meth:`~yt.visualization.volume_rendering.camera.Camera.pitch` - rotate about the lateral axis
* :meth:`~yt.visualization.volume_rendering.camera.Camera.yaw` - rotate about the vertical axis (i.e. ``north_vector``)
* :meth:`~yt.visualization.volume_rendering.camera.Camera.roll` - rotate about the longitudinal axis (i.e. ``normal_vector``)
* :meth:`~yt.visualization.volume_rendering.camera.Camera.rotate` - rotate about an arbitrary axis
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_rotate` - iteratively rotate about an arbitrary axis
For the rotation methods, the camera pivots around the ``rot_center`` rotation
center. By default, this is the camera position, which means that the
camera doesn't change its position at all, it just changes its orientation.
Zoom-related methods
* :meth:`~yt.visualization.volume_rendering.camera.Camera.set_width` - change the width of the FOV
* :meth:`~yt.visualization.volume_rendering.camera.Camera.zoom` - change the width of the FOV
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_zoom` - iteratively change the width of the FOV
Perhaps counterintuitively, the camera does not get closer to the focus
during a zoom; it simply reduces the width of the field of view.
Translation-related methods
* :meth:`~yt.visualization.volume_rendering.camera.Camera.set_position` - change the location of the camera keeping the focus fixed
* :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_move` - iteratively change the location of the camera keeping the focus fixed
The iterative methods provide iteration over a series of changes in the
position or orientation of the camera. These can be used within a loop.
For an example on how to use all of these camera movement functions, see
:ref:`cookbook-camera_movement`.
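As a brief sketch of how a few of these methods fit together (the numerical
values are illustrative, not recommendations):

.. code-block:: python

    import numpy as np

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, ("gas", "density"))
    cam = sc.camera

    cam.set_width(ds.quan(300, "kpc"))  # narrow the field of view
    cam.rotate(np.pi / 6)  # rotate the view by 30 degrees
    cam.set_position(ds.domain_left_edge)  # move the camera; the focus stays fixed

    sc.save("camera_moved.png", sigma_clip=4.0)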
.. _lenses:
Camera Lenses
^^^^^^^^^^^^^
Cameras possess :class:`~yt.visualization.volume_rendering.lens.Lens` objects,
which control the geometric path in which rays travel to the camera. These
lenses can be swapped in and out of an existing camera to produce different
views of the same Scene. For a full demonstration of a Scene object
rendered with different lenses, see :ref:`cookbook-various_lens`.
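Swapping a lens on an existing camera is a single call; a minimal sketch is:

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, ("gas", "density"))

    # Switch from the default plane-parallel lens to a perspective lens
    sc.camera.set_lens("perspective")
    sc.save("perspective_render.png", sigma_clip=4.0)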
Plane Parallel
++++++++++++++
The :class:`~yt.visualization.volume_rendering.lens.PlaneParallelLens` is the
standard lens type used for orthographic projections. All rays emerge
parallel to each other, arranged along a plane.
Perspective and Stereo Perspective
++++++++++++++++++++++++++++++++++
The :class:`~yt.visualization.volume_rendering.lens.PerspectiveLens`
adjusts for an opening view angle, so that the scene will have an
element of perspective to it.
:class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens`
is identical to PerspectiveLens, but it produces two images from nearby
camera positions for use in 3D viewing. How three-dimensional the image appears
will depend upon the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens.disparity`,
which is half the maximum distance between two corresponding points in the left
and right images. By default, it is set to 3 pixels.
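As an illustrative sketch, the disparity can be adjusted on the camera's lens
after creating a stereo camera (the value below is arbitrary):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, ("gas", "density"))

    cam = sc.add_camera(ds, lens_type="stereo-perspective")
    cam.lens.disparity = ds.domain_width[0] * 1.0e-3  # widen the eye separation
    sc.save("stereo_render.png", sigma_clip=4.0)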
Fisheye or Dome
+++++++++++++++
The :class:`~yt.visualization.volume_rendering.lens.FisheyeLens`
is appropriate for viewing an arbitrary field of view. Fisheye images
are typically used for dome-based presentations; the Hayden Planetarium,
for instance, has a field of view of 194.6 degrees. The images returned by this
camera will be flat pixel images that can and should be reshaped to the
camera resolution.
Spherical and Stereo Spherical
++++++++++++++++++++++++++++++
The :class:`~yt.visualization.volume_rendering.lens.SphericalLens` produces
a cylindrical-spherical projection. Movies rendered in this way can be
displayed as YouTube 360-degree videos (for more information see
`the YouTube help: Upload 360-degree videos
<https://support.google.com/youtube/answer/6178631?hl=en>`_).
:class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens`
is identical to :class:`~yt.visualization.volume_rendering.lens.SphericalLens`
but it produces two images from nearby camera positions for virtual reality
movies, which can be displayed in head-tracking devices (e.g. Oculus Rift)
or in mobile YouTube app with Google Cardboard (for more information
see `the YouTube help: Upload virtual reality videos
<https://support.google.com/youtube/answer/6316263?hl=en>`_).
`This virtual reality video
<https://youtu.be/ZYWY53X7UQE>`_ on YouTube is an example produced with
:class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens`. As in
the case of
:class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens`, the
difference between the two images can be controlled by changing the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoSphericalLens.disparity`
(See above).
.. _annotated-vr-example:
Annotated Examples
------------------
.. warning:: 3D visualizations can be fun but frustrating! Tuning the
parameters to both look nice and convey useful scientific
information can be hard. We've provided information about best
practices and tried to make the interface easy to develop nice
visualizations, but getting them *just right* is often
time-consuming. It's usually best to start out simple and expand
and tweak as needed.
The scene interface provides a modular interface for creating renderings
of arbitrary data sources. As such, manual composition of a scene can require
a bit more work, but we will also provide several helper functions that attempt
to create satisfactory default volume renderings.
When the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function is called, first an empty
:class:`~yt.visualization.volume_rendering.scene.Scene` object is created.
Next, a :class:`~yt.visualization.volume_rendering.api.VolumeSource`
object is created, which decomposes the volume elements
into a tree structure to provide back-to-front rendering of fixed-resolution
blocks of data. (If the volume elements are grids, this uses a
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` object.) When the
:class:`~yt.visualization.volume_rendering.api.VolumeSource`
object is created, by default it will create a transfer function
based on the extrema of the field that you are rendering. The transfer function
describes how rays that pass through the domain are "transferred" and thus how
brightness and color correlates to the field values. Modifying and adjusting
the transfer function is the primary way to modify the appearance of an image
based on volumes.
Once the basic set of objects to be rendered is constructed (e.g.
:class:`~yt.visualization.volume_rendering.scene.Scene`,
:class:`~yt.visualization.volume_rendering.render_source.RenderSource`, and
:class:`~yt.visualization.volume_rendering.api.VolumeSource` objects), a
:class:`~yt.visualization.volume_rendering.camera.Camera` is created and
added to the scene. By default the creation of a camera also creates a
plane-parallel :class:`~yt.visualization.volume_rendering.lens.Lens`
object. The analog to a real camera is intentional -- a camera can take a
picture of a scene from a particular point in time and space, but different
lenses can be swapped in and out. For example, this might include a fisheye
lens, a spherical lens, or some other method of describing the direction and
origin of rays for rendering. Once the camera is added to the scene object, we
call the main methods of the
:class:`~yt.visualization.volume_rendering.scene.Scene` class,
:meth:`~yt.visualization.volume_rendering.scene.Scene.render` and
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`. When rendered,
the scene will loop through all of the
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
that have been added and integrate the radiative transfer equations through the
volume. Finally, the image and scene object are returned to the user. An example
script that uses the high-level :func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function to quickly set up defaults is:
.. python-script::
import yt
# load the data
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
# volume render the ("gas", "density") field, and save the resulting image
im, sc = yt.volume_render(ds, ("gas", "density"), fname="rendering.png")
# im is the image array generated. it is also saved to 'rendering.png'.
# sc is an instance of a Scene object, which allows you to further refine
# your renderings and later save them.
# Let's zoom in and take a closer look
sc.camera.width = (300, "kpc")
sc.camera.switch_orientation()
# Save the zoomed in rendering
sc.save("zoomed_rendering.png")
Alternatively, if you don't want to immediately generate an image of your
volume rendering, and you just want access to the default scene object,
you can skip the expensive operation of rendering by just running the
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene`
function in lieu of the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function. Example:
.. python-script::
import numpy as np
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds, ("gas", "density"))
source = sc[0]
source.transfer_function = yt.ColorTransferFunction(
np.log10((1e-30, 1e-23)), grey_opacity=True
)
def linramp(vals, minval, maxval):
return (vals - vals.min()) / (vals.max() - vals.min())
source.transfer_function.map_to_colormap(
np.log10(1e-25), np.log10(8e-24), colormap="cmyt.arbre", scale_func=linramp
)
# For this low resolution dataset it's very important to use interpolated
# vertex centered data to avoid artifacts. For high resolution data this
# setting may cause a substantial slowdown for marginal visual improvement.
source.set_use_ghost_zones(True)
cam = sc.camera
cam.width = 15 * yt.units.kpc
cam.focus = ds.domain_center
cam.normal_vector = [-0.3, -0.3, 1]
cam.switch_orientation()
sc.save("rendering.png")
For an in-depth tutorial on how to create a Scene and modify its contents,
see this annotated :ref:`volume-rendering-tutorial`.
.. _volume-rendering-method:
Volume Rendering Method
-----------------------
Direct ray casting through a volume enables the generation of new types of
visualizations and images describing a simulation. yt has the facility
to generate volume renderings by a direct ray casting method. However, the
ability to create volume renderings informed by analysis by other mechanisms --
for instance, halo location, angular momentum, spectral energy distributions --
is useful.
The volume rendering in yt follows a relatively straightforward approach.
#. Create a set of transfer functions governing the emission and absorption as
a function of one or more variables. (:math:`f(v) \rightarrow (r,g,b,a)`)
These can be functions of any field variable, weighted by independent
fields, and even weighted by other evaluated transfer functions. (See
:ref:`transfer_functions`.)
#. Partition all chunks into non-overlapping, fully domain-tiling "bricks."
Each of these "bricks" contains the finest available data at any location.
#. Generate vertex-centered data for all grids in the volume rendered domain.
#. Order the bricks from front-to-back.
#. Construct a plane of rays parallel to the image plane, with initial values set
to zero and located at the back of the region to be rendered.
#. For every brick, identify which rays intersect. These are then each 'cast'
through the brick.
#. Every cell a ray intersects is sampled 5 times (adjustable by parameter),
and data values at each sampling point are trilinearly interpolated from
the vertex-centered data.
#. Each transfer function is evaluated at each sample point. This gives us,
for each channel, both emission (:math:`j`) and absorption
(:math:`\alpha`) values.
#. The value for the pixel corresponding to the current ray is updated with
new values calculated by rectangular integration over the path length:
:math:`v^{n+1}_{i} = j_{i}\Delta s + (1 - \alpha_{i}\Delta s )v^{n}_{i}`
where :math:`n` and :math:`n+1` represent the pixel before and after
passing through a sample, :math:`i` is the color (red, green, blue) and
:math:`\Delta s` is the path length between samples.
#. Determine if any additional integration will change the sample value; if not,
terminate integration. (This reduces integration time when rendering
front-to-back.)
#. The image is returned to the user:
.. image:: _images/vr_sample.jpg
:width: 512
Parallelism
-----------
yt can utilize both MPI and OpenMP parallelism for volume rendering. Both, and
their combination, are described below.
MPI Parallelization
^^^^^^^^^^^^^^^^^^^
Currently the volume renderer is parallelized using MPI to decompose the volume
by attempting to split up the
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` in a balanced way. This
has two advantages:
#. The :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree`
construction is parallelized since each MPI task only needs
to know about the part of the tree it will traverse.
#. Each MPI task will only read data for portion of the volume that it has
assigned.
Once the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` has been
constructed, each MPI task begins the rendering
phase until all of its bricks are completed. At that point, each MPI task has
a full image plane which we then use a tree reduction to construct the final
image, using alpha blending to add the images together at each reduction phase.
Caveats:
#. At this time, the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree`
can only be decomposed by a power of 2 MPI
tasks. If the number of tasks is not a power of 2, the largest
power of 2 below that number is used, and the remaining cores will be idle.
This issue is being actively addressed by current development.
#. Each MPI task, currently, holds the entire image plane. Therefore when
image plane sizes get large (>2048^2), the memory usage can also get large,
limiting the number of MPI tasks you can use. This is also being addressed
in current development by using image plane decomposition.
For more information about enabling parallelism, see :ref:`parallel-computation`.
OpenMP Parallelization
^^^^^^^^^^^^^^^^^^^^^^
The volume rendering is also parallelized using the OpenMP interface in Cython.
While the MPI parallelization is done using domain decomposition, the OpenMP
threading parallelizes the rays intersecting a given brick of data. As the
average brick size relative to the image plane increases, the parallel
efficiency increases.
By default, the volume renderer will use the total number of cores available on
the symmetric multiprocessing (SMP) compute platform. For example, if you have
a shiny new laptop with 8 cores, you'll by default launch 8 OpenMP threads.
The number of threads can be controlled with the num_threads keyword in
:meth:`~yt.visualization.volume_rendering.camera.Camera.snapshot`. You may also restrict the number of OpenMP threads used
by default by modifying the environment variable OMP_NUM_THREADS.
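For example, to cap the renderer at four OpenMP threads at the shell level
(the script name is just a placeholder):

.. code-block:: bash

    export OMP_NUM_THREADS=4
    python my_volume_rendering_script.py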
Running in Hybrid MPI + OpenMP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The two methods for volume rendering parallelization can be used together to
leverage large supercomputing resources. When choosing how to balance the
number of MPI tasks vs OpenMP threads, there are a few things to keep in mind.
For these examples, we will assume you are using Nmpi MPI tasks, and Nmp OpenMP
tasks, on a total of P cores. We will assume that the machine has Nnode SMP
nodes, each with cores_per_node cores per node.
#. For each MPI task, num_threads (or OMP_NUM_THREADS) OpenMP threads will be
used. Therefore you should usually make sure that Nmpi*Nmp = P.
#. For simulations with many grids/AMRKDTree bricks, you generally want to increase Nmpi.
#. For simulations with large image planes (>2048^2), you generally want to
decrease Nmpi and increase Nmp. This is because, currently, each MPI task
stores the entire image plane, and doing so can approach the memory limits
of a given SMP node.
#. Please make sure you understand the (super)computer topology in terms of
the numbers of cores per socket, node, etc when making these decisions.
#. For many cases when rendering using your laptop/desktop, OpenMP will
provide a good enough speedup by default that it is preferable to launching
the MPI tasks.
For more information about enabling parallelism, see :ref:`parallel-computation`.
.. _vr-faq:
Volume Rendering Frequently Asked Questions
-------------------------------------------
.. _opaque_rendering:
Opacity
^^^^^^^
There are currently two models for opacity when rendering a volume, which are
controlled in the ``ColorTransferFunction`` with the keyword
``grey_opacity=False`` (the default) or ``grey_opacity=True``. With the first,
each of the red, green, and blue channels is only opaque to itself. This means
that if a ray that has accumulated some amount of red then encounters material
that emits blue, the red will still exist, and in the end that pixel
will be a combination of blue and red. However, if the ColorTransferFunction is
set up with grey_opacity=True, then blue will be opaque to red, and only the
blue emission will remain.
For an in-depth example, please see the cookbook example on opaque renders here:
:ref:`cookbook-opaque_rendering`.
.. _sigma_clip:
Improving Image Contrast with Sigma Clipping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If your images appear to be too dark, you can try using the ``sigma_clip``
keyword in the :meth:`~yt.visualization.volume_rendering.scene.Scene.save`
or :func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
functions. Because the brightness range in an image is scaled to match the
range of emissivity values of the underlying rendering, if you have a few really
high-emissivity points, they will scale the rest of your image to be quite
dark. ``sigma_clip = N`` can address this by removing values that are more
than ``N`` standard deviations brighter than the mean of your image.
Typically, a choice of 4 to 6 will help dramatically with your resulting image.
See the cookbook recipe :ref:`cookbook-sigma_clip` for a demonstration.
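A short sketch of trying several clipping values on a single rendering,
reusing the rendered image with ``render=False`` (see :ref:`when_to_render`
below):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    im, sc = yt.volume_render(ds, ("gas", "density"))

    for clip in (2.0, 4.0, 6.0):
        sc.save(f"rendering_clip_{clip}.png", sigma_clip=clip, render=False)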
.. _when_to_render:
When to Render
^^^^^^^^^^^^^^
The rendering of a scene is the most computationally demanding step in
creating a final image and there are a number of ways to control at which point
a scene is actually rendered. The default behavior of the
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` function includes
a call to :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. This
means that in most cases (including the above examples), after you set up your
scene and volumes, you can simply call
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` without first
calling :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. If you
wish to save the most recently rendered image without rendering again, set
``render=False`` in the call to
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`. Cases where you
may wish to use ``render=False`` include saving images at different
``sigma_clip`` values (see :ref:`cookbook-sigma_clip`) or when saving an image
that has already been rendered in a Jupyter notebook using
:meth:`~yt.visualization.volume_rendering.scene.Scene.show`. Changes to the
scene including adding sources, modifying transfer functions or adjusting camera
settings generally require rendering again.
# Volume Rendering Tutorial
This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps:
1. [Creating the Scene](#1.-Creating-the-Scene)
2. [Displaying the Scene](#2.-Displaying-the-Scene)
3. [Adjusting Transfer Functions](#3.-Adjusting-Transfer-Functions)
4. [Saving an Image](#4.-Saving-an-Image)
5. [Adding Annotations](#5.-Adding-Annotations)
## 1. Creating the Scene
To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field.
```
import yt
from yt.visualization.volume_rendering.transfer_function_helper import (
TransferFunctionHelper,
)
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
```
Note that to render a different field, we would pass the field name to `yt.create_scene` using the `field` argument.
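For example, a scene for the temperature field (not used in the rest of this tutorial) could be created like this:
```
sc_temp = yt.create_scene(ds, field=("gas", "temperature"))
```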
Now we can look at some information about the Scene we just created using the python print keyword:
```
print(sc)
```
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
```
print(sc.get_source())
```
## 2. Displaying the Scene
We can see that the `yt.create_scene` function has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`.
```
sc.show()
```
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
```
sc.camera.zoom(3.0)
```
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
```
print(sc)
```
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have IPython evaluate the Scene and that will display it automatically.
```
sc.render()
sc
```
That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:
```
sc.show(sigma_clip=4.0)
```
Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the effect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:
```
sc.camera.zoom(3.0)
sc.show(sigma_clip=4.0)
```
For the change to the camera to take effect, we have to explicitly render again:
```
sc.render()
sc.show(sigma_clip=4.0)
```
As a general rule, any changes to the scene itself such as adjusting the camera or changing transfer functions requires rendering again. Before moving on, let's undo the last zoom:
```
sc.camera.zoom(1.0 / 3.0)
```
## 3. Adjusting Transfer Functions
Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:
```
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field("density")
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap="gist_rainbow")
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
```
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
```
cam = sc.add_camera(ds, lens_type="perspective")
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], "code_length")
normal_vector = [1.0, 0.0, 0.0]
north_vector = [0.0, 0.0, 1.0]
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print(sc.camera)
```
The resulting image looks like:
```
sc.render()
sc.show(sigma_clip=4.0)
```
## 4. Saving an Image
To save a volume rendering to an image file at any point, we can use `sc.save` as follows:
```
sc.save("volume_render.png", render=False)
```
Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image.
An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following:
```
sc.save("volume_render_clip4.png", sigma_clip=4.0, render=False)
```
## 5. Adding Annotations
Finally, the next cell switches the lens back to the default plane-parallel type, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
```
# set the lens type back to plane-parallel
sc.camera.set_lens("plane-parallel")
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
```
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/Volume_Rendering_Tutorial.ipynb | Volume_Rendering_Tutorial.ipynb |
.. _visualizing:
Visualizing Data
================
yt comes with a number of ways for visualizing one's data including slices,
projections, line plots, profiles, phase plots, volume rendering, 3D surfaces,
streamlines, and a google-maps-like interface for exploring one's dataset
interactively.
.. toctree::
:maxdepth: 2
plots
callbacks
manual_plotting
volume_rendering
unstructured_mesh_rendering
interactive_data_visualization
visualizing_particle_datasets_with_firefly
sketchfab
mapserver
streamlines
colormaps/index
geographic_projections_and_transforms
writing_fits_images
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/index.rst | index.rst |
.. _unstructured_mesh_rendering:
Unstructured Mesh Rendering
===========================
Beginning with version 3.3, yt has the ability to volume render unstructured
mesh data like that created by finite element calculations. No additional
dependencies are required in order to use this feature. However, it is
possible to speed up the rendering operation by installing with
`Embree <https://www.embree.org>`_ support. Embree is a fast ray-tracing
library from Intel that can substantially speed up the mesh rendering operation
on large datasets. You can read about how to install yt with Embree support
below, or you can skip to the examples.
Optional Embree Installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You'll need to `install Python bindings for netCDF4 <https://github.com/Unidata/netcdf4-python#installation>`_.
Then you'll need to get Embree itself and its corresponding Python bindings (pyembree).
For conda-based systems, this is trivial; see
`pyembree's documentation <https://github.com/scopatz/pyembree#installation>`_.
For systems other than conda, you will need to install Embree first, either by
`compiling from source <https://github.com/embree/embree#installation-of-embree>`_
or by using one of the pre-built binaries available at Embree's
`releases <https://github.com/embree/embree/releases>`_ page.
Then you'll want to install pyembree from source as follows.
.. code-block:: bash
git clone https://github.com/scopatz/pyembree
To install, navigate to the root directory and run the setup script.
If Embree was installed to some location that is not in your path by default,
you will need to pass in CFLAGS and LDFLAGS to the setup.py script. For example,
the Mac OS X package installer puts the installation at /opt/local/ instead of
/usr/local. To account for this, you would do:
.. code-block:: bash
CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py install
Once Embree and pyembree are installed, and in order to use the unstructured
mesh rendering capability, you must :ref:`rebuild yt from source
<install-from-source>`. Once again, if Embree is installed in a location that
is not part of your default search path, you must tell yt where to find it.
There are a number of ways to do this. One way is to again manually pass in the
flags when running the setup script in the yt-git directory:
.. code-block:: bash
CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' python setup.py develop
You can also set EMBREE_DIR environment variable to '/opt/local', in which case
you could just run
.. code-block:: bash
python setup.py develop
as usual. Finally, if you create a file called embree.cfg in the yt-git directory with
the location of the embree installation, the setup script will find this and use it,
provided EMBREE_DIR is not set. An example embree.cfg file could look like this:
.. code-block:: bash
/opt/local/
We recommend one of the latter two methods, especially
if you plan on re-compiling the Cython extensions regularly. Note that none of this is
necessary if you installed Embree into a location that is in your default path, such
as /usr/local.
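For instance, a minimal sketch of the environment-variable approach (assuming
Embree was installed under ``/opt/local``) would be:

.. code-block:: bash

    export EMBREE_DIR=/opt/local
    python setup.py develop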
Examples
^^^^^^^^
First, here is an example of rendering an 8-node, hexahedral MOOSE dataset.
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
# create a default scene
sc = yt.create_scene(ds)
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.set_position(cam_pos, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render and save
sc.save()
You can also overplot the mesh boundaries:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
# create a default scene
sc = yt.create_scene(ds)
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.set_position(cam_pos, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render, draw the element boundaries, and save
sc.render()
sc.annotate_mesh_lines()
sc.save()
As with slices, you can visualize different meshes and different fields. For example,
here is a script similar to the above that plots the "diffused" variable
using the mesh labelled by "connect2":
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
# create a default scene
sc = yt.create_scene(ds, ("connect2", "diffused"))
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.set_position(cam_pos, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render and save
sc.save()
Next, here is an example of rendering a dataset with tetrahedral mesh elements.
Note that in this dataset, there are multiple "steps" per file, so we specify
that we want to look at the last one.
.. python-script::
import yt
filename = "MOOSE_sample_data/high_order_elems_tet4_refine_out.e"
ds = yt.load(filename, step=-1) # we look at the last time frame
# create a default scene
sc = yt.create_scene(ds, ("connect1", "u"))
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
camera_position = ds.arr([3.0, 3.0, 3.0], "code_length")
cam.set_width(ds.arr([2.0, 2.0, 2.0], "code_length"))
north_vector = ds.arr([0.0, -1.0, 0.0], "dimensionless")
cam.set_position(camera_position, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render and save
sc.save()
Here is an example using 6-node wedge elements:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/wedge_out.e")
# create a default scene
sc = yt.create_scene(ds, ("connect2", "diffused"))
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
cam.set_position(ds.arr([1.0, -1.0, 1.0], "code_length"))
cam.width = ds.arr([1.5, 1.5, 1.5], "code_length")
# render and save
sc.save()
Another example, this time plotting the temperature field from a 20-node hex
MOOSE dataset:
.. python-script::
import yt
# We load the last time frame
ds = yt.load("MOOSE_sample_data/mps_out.e", step=-1)
# create a default scene
sc = yt.create_scene(ds, ("connect2", "temp"))
# override the default colormap. This time we also override
# the default color bounds
ms = sc.get_source()
ms.cmap = "hot"
ms.color_bounds = (500.0, 1700.0)
# adjust the camera position and orientation
cam = sc.camera
camera_position = ds.arr([-1.0, 1.0, -0.5], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.width = ds.arr([0.04, 0.04, 0.04], "code_length")
cam.set_position(camera_position, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render, draw the element boundaries, and save
sc.render()
sc.annotate_mesh_lines()
sc.save()
The dataset in the above example contains displacement fields, so this is a good
opportunity to demonstrate their use. The following example is exactly like the
above, except we scale the displacements by a factor of a 10.0, and additionally
add an offset to the mesh by 1.0 unit in the x-direction:
.. python-script::
import yt
# We load the last time frame
ds = yt.load(
"MOOSE_sample_data/mps_out.e",
step=-1,
displacements={"connect2": (10.0, [0.01, 0.0, 0.0])},
)
# create a default scene
sc = yt.create_scene(ds, ("connect2", "temp"))
# override the default colormap. This time we also override
# the default color bounds
ms = sc.get_source()
ms.cmap = "hot"
ms.color_bounds = (500.0, 1700.0)
# adjust the camera position and orientation
cam = sc.camera
camera_position = ds.arr([-1.0, 1.0, -0.5], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.width = ds.arr([0.05, 0.05, 0.05], "code_length")
cam.set_position(camera_position, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render, draw the element boundaries, and save
sc.render()
sc.annotate_mesh_lines()
sc.save()
As with other volume renderings in yt, you can swap out different lenses. Here is
an example that uses a "perspective" lens, for which the rays diverge from the
camera position according to some opening angle:
.. python-script::
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
# create a default scene
sc = yt.create_scene(ds, ("connect2", "diffused"))
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# Create a perspective Camera
cam = sc.add_camera(ds, lens_type="perspective")
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam_pos = ds.arr([-4.5, 4.5, -4.5], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.set_position(cam_pos, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# render, draw the element boundaries, and save
sc.render()
sc.annotate_mesh_lines()
sc.save()
You can also create scenes that have multiple meshes. The ray-tracing infrastructure
will keep track of the depth information for each source separately, and composite
the final image accordingly. In the next example, we show how to render a scene
with two meshes on it:
.. python-script::
import yt
from yt.visualization.volume_rendering.api import MeshSource, Scene
ds = yt.load("MOOSE_sample_data/out.e-s010")
# this time we create an empty scene and add sources to it one-by-one
sc = Scene()
# set up our Camera
cam = sc.add_camera(ds)
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam.set_position(
ds.arr([-3.0, 3.0, -3.0], "code_length"),
ds.arr([0.0, -1.0, 0.0], "dimensionless"),
)
cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length"))
cam.resolution = (800, 800)
# create two distinct MeshSources from 'connect1' and 'connect2'
ms1 = MeshSource(ds, ("connect1", "diffused"))
ms2 = MeshSource(ds, ("connect2", "diffused"))
sc.add_source(ms1)
sc.add_source(ms2)
# render and save
im = sc.render()
sc.save()
However, in the rendered image above, we note that the color is discontinuous
in the middle and upper parts of the cylinder's side. In the original data,
there are two parts but the value of ``diffused`` is continuous at the interface.
This discontinuous color is due to an independent colormap setting for the two
mesh sources. To fix it, we can explicitly specify the colormap bounds for each
mesh source as follows:
.. python-script::
import yt
from yt.visualization.volume_rendering.api import MeshSource, Scene
ds = yt.load("MOOSE_sample_data/out.e-s010")
# this time we create an empty scene and add sources to it one-by-one
sc = Scene()
# set up our Camera
cam = sc.add_camera(ds)
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam.set_position(
ds.arr([-3.0, 3.0, -3.0], "code_length"),
ds.arr([0.0, -1.0, 0.0], "dimensionless"),
)
cam.set_width(ds.arr([8.0, 8.0, 8.0], "code_length"))
cam.resolution = (800, 800)
# create two distinct MeshSources from 'connect1' and 'connect2'
ms1 = MeshSource(ds, ("connect1", "diffused"))
ms2 = MeshSource(ds, ("connect2", "diffused"))
# add the following lines to set the range of the two mesh sources
ms1.color_bounds = (0.0, 3.0)
ms2.color_bounds = (0.0, 3.0)
sc.add_source(ms1)
sc.add_source(ms2)
# render and save
im = sc.render()
sc.save()
Making Movies
^^^^^^^^^^^^^
Here are a couple of example scripts that show how to create image frames that
can later be stitched together into a movie. In the first example, we look at a
single dataset at a fixed time, but we move the camera around to get a different
vantage point. We call the rotate() method once per frame, saving a new image to the
disk each time.
.. code-block:: python
import numpy as np
import yt
ds = yt.load("MOOSE_sample_data/out.e-s010")
# create a default scene
sc = yt.create_scene(ds)
# override the default colormap
ms = sc.get_source()
ms.cmap = "Eos A"
# adjust the camera position and orientation
cam = sc.camera
cam.focus = ds.arr([0.0, 0.0, 0.0], "code_length")
cam_pos = ds.arr([-3.0, 3.0, -3.0], "code_length")
north_vector = ds.arr([0.0, -1.0, -1.0], "dimensionless")
cam.set_position(cam_pos, north_vector)
# increase the default resolution
cam.resolution = (800, 800)
# set the camera to use "steady_north"
cam.steady_north = True
# make movie frames
num_frames = 301
for i in range(num_frames):
cam.rotate(2.0 * np.pi / num_frames)
sc.render()
sc.save("movie_frames/surface_render_%.4d.png" % i)
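Once the frames are written out, they can be stitched together with any external
tool. As one possible sketch (the frame rate and codec here are arbitrary choices,
not yt requirements), ``ffmpeg`` can be used like this:

.. code-block:: bash

    ffmpeg -framerate 24 -i movie_frames/surface_render_%04d.png \
        -c:v libx264 -pix_fmt yuv420p surface_render.mp4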
Finally, this example demonstrates how to loop over the time steps in a single
file with a fixed camera position:
.. code-block:: python
import matplotlib.pyplot as plt
import yt
from yt.visualization.volume_rendering.api import MeshSource, Scene
NUM_STEPS = 127
CMAP = "hot"
VMIN = 300.0
VMAX = 2000.0
for step in range(NUM_STEPS):
ds = yt.load("MOOSE_sample_data/mps_out.e", step=step)
time = ds._get_current_time()
# the field name is a tuple of strings. The first string
# specifies which mesh will be plotted, the second string
# specifies the name of the field.
field_name = ('connect2', 'temp')
# this initializes the render source
ms = MeshSource(ds, field_name)
# set up the camera here. these values were arrived at by
# calling pitch, yaw, and roll in the notebook until I
# got the angle I wanted.
sc = Scene()
cam = sc.add_camera(ds)
camera_position = ds.arr([0.1, 0.0, 0.1], 'code_length')
cam.focus = ds.domain_center
north_vector = ds.arr([-0.3032476, -0.71782557, 0.62671153], 'dimensionless')
cam.width = ds.arr([ 0.04, 0.04, 0.04], 'code_length')
cam.resolution = (800, 800)
cam.set_position(camera_position, north_vector)
# actually make the image here
im = ms.render(cam, cmap=CMAP, color_bounds=(VMIN, VMAX))
# Plot the result using matplotlib and save.
# Note that we are setting the upper and lower
# bounds of the colorbar to be the same for all
# frames of the image.
# must clear the image between frames
plt.clf()
fig = plt.gcf()
ax = plt.gca()
ax.imshow(im, interpolation='nearest', origin='lower')
# Add the colorbar using a fake (not shown) image.
p = ax.imshow(ms.data, visible=False, cmap=CMAP, vmin=VMIN, vmax=VMAX)
cb = fig.colorbar(p)
cb.set_label(field_name[1])
ax.text(25, 750, 'time = %.2e' % time, color='k')
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
plt.savefig('movie_frames/test_%.3d' % step)
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/unstructured_mesh_rendering.rst | unstructured_mesh_rendering.rst |
.. _mapserver:
Mapserver - A Google-Maps-like Interface to your Data
-----------------------------------------------------
The mapserver is an experimental feature. It's based on `Leaflet
<https://leafletjs.com/>`_, a library written to create zoomable,
map-tile interfaces. (Similar to Google Maps.) yt provides everything you
need to start up a web server that will interactively re-pixelize an adaptive
image. This means you can explore your datasets in a fully pan-n-zoom
interface.
.. note::
Previous versions of yt bundled the necessary dependencies, but with more
recent releases you will need to install the package ``bottle`` via pip or
conda.
To start up the mapserver, you can use the command yt (see
:ref:`command-line`) with the ``mapserver`` subcommand. It takes several of
the same options and arguments as the ``plot`` subcommand. For instance:
.. code-block:: bash
yt mapserver DD0050/DD0050
That will take a slice along the x axis at the center of the domain. The
field, projection, weight and axis can all be specified on the command line.
When you do this, it will spawn a micro-webserver on your localhost, and print
the URL to connect to on standard output. You can connect to it (or create an
SSH tunnel to connect to it) and explore your data. Double-clicking zooms, and
dragging drags.
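For example, if the mapserver is running on a remote machine, one way to reach it
is with an SSH tunnel (a sketch; the port shown is a placeholder, use whatever port
the mapserver reports on standard output, and substitute your own username and host):

.. code-block:: bash

    ssh -N -L 8080:localhost:8080 username@remote.host

and then point your local browser at ``http://localhost:8080``.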
.. image:: _images/mapserver.png
:scale: 50%
This is also functional on touch-capable devices such as Android Tablets and
iPads/iPhones.
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/mapserver.rst | mapserver.rst |
Here, we explain how to use TransferFunctionHelper to visualize and interpret yt volume rendering transfer functions. Creating a custom transfer function is a process that usually involves some trial-and-error. TransferFunctionHelper is a utility class designed to help you visualize the probability density functions of yt fields that you might want to volume render. This makes it easier to choose a nice transfer function that highlights interesting physical regimes.
First, we set up our namespace and define a convenience function to display volume renderings inline in the notebook. Using `%matplotlib inline` makes it so matplotlib plots display inline in the notebook.
```
import numpy as np
from IPython.core.display import Image
import yt
from yt.visualization.volume_rendering.transfer_function_helper import (
TransferFunctionHelper,
)
def showme(im):
# screen out NaNs
im[im != im] = 0.0
# Create an RGBA bitmap to display
imb = yt.write_bitmap(im, None)
return Image(imb)
```
Next, we load up a low resolution Enzo cosmological simulation.
```
ds = yt.load("Enzo_64/DD0043/data0043")
```
Now that we have the dataset loaded, let's create a `TransferFunctionHelper` to visualize the dataset and transfer function we'd like to use.
```
tfh = TransferFunctionHelper(ds)
```
`TransferFunctionHelper` will intelligently choose transfer function bounds based on the data values. Use the `plot()` method to take a look at the transfer function.
```
# Build a transfer function that is a sum of Gaussian layers in temperature
tfh = TransferFunctionHelper(ds)
tfh.set_field(("gas", "temperature"))
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(5)
tfh.plot()
```
Let's also look at the probability density function of the `mass` field as a function of `temperature`. This might give us an idea where there is a lot of structure.
```
tfh.plot(profile_field=("gas", "mass"))
```
It looks like most of the gas is hot but there is still a lot of low-density cool gas. Let's construct a transfer function that highlights both the rarefied hot gas and the dense cool gas simultaneously.
```
tfh = TransferFunctionHelper(ds)
tfh.set_field(("gas", "temperature"))
tfh.set_bounds()
tfh.set_log(True)
tfh.build_transfer_function()
tfh.tf.add_layers(
8,
w=0.01,
mi=4.0,
ma=8.0,
col_bounds=[4.0, 8.0],
alpha=np.logspace(-1, 2, 8),
colormap="RdBu_r",
)
tfh.tf.map_to_colormap(6.0, 8.0, colormap="Reds")
tfh.tf.map_to_colormap(-1.0, 6.0, colormap="Blues_r")
tfh.plot(profile_field=("gas", "mass"))
```
Let's take a look at the volume rendering. First use the helper function to create a default rendering, then we override this with the transfer function we just created.
```
im, sc = yt.volume_render(ds, [("gas", "temperature")])
source = sc.get_source()
source.set_transfer_function(tfh.tf)
im2 = sc.render()
showme(im2[:, :, :3])
```
That looks okay, but the red gas (associated with temperatures between 1e6 and 1e8 K) is a bit hard to see in the image. To fix this, we can make that gas contribute a larger alpha value to the image by using the ``scale`` keyword argument in ``map_to_colormap``.
```
tfh2 = TransferFunctionHelper(ds)
tfh2.set_field(("gas", "temperature"))
tfh2.set_bounds()
tfh2.set_log(True)
tfh2.build_transfer_function()
tfh2.tf.add_layers(
8,
w=0.01,
mi=4.0,
ma=8.0,
col_bounds=[4.0, 8.0],
alpha=np.logspace(-1, 2, 8),
colormap="RdBu_r",
)
tfh2.tf.map_to_colormap(6.0, 8.0, colormap="Reds", scale=5.0)
tfh2.tf.map_to_colormap(-1.0, 6.0, colormap="Blues_r", scale=1.0)
tfh2.plot(profile_field=("gas", "mass"))
```
Note that the height of the red portion of the transfer function has increased by a factor of 5.0. If we use this transfer function to make the final image:
```
source.set_transfer_function(tfh2.tf)
im3 = sc.render()
showme(im3[:, :, :3])
```
The red gas is now much more prominent in the image. We can clearly see that the hot gas is mostly associated with bound structures while the cool gas is associated with low-density voids.
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/TransferFunctionHelper_Tutorial.ipynb | TransferFunctionHelper_Tutorial.ipynb |
.. _visualizing_particle_datasets_with_firefly:
Visualizing Particle Datasets with Firefly
==========================================
`Firefly <https://github.com/ageller/Firefly>`_
is an interactive, browser-based,
particle visualization platform that allows you to filter, colormap, and fly
through your data. The Python frontend allows users to both load in their
own datasets and customize every aspect of the user interface.
yt offers the ability
to export your data to Firefly's ffly or JSON format through the
:meth:`~yt.data_objects.data_containers.YTDataContainer.create_firefly_object`
method.
You can adjust the interface settings, particle colors, decimation factors, and
other `Firefly settings <https://ageller.github.io/Firefly/docs/build/html/index.html>`_
through the returned ``Firefly.reader`` object. Once the
settings are tuned to your liking, calling the ``reader.writeToDisk()`` method will
produce the final ffly files. Note that ``reader.clean_datadir`` defaults to ``True``
when using
:meth:`~yt.data_objects.data_containers.YTDataContainer.create_firefly_object`
so if you would like to manage multiple datasets make sure to pass different
``datadir`` keyword arguments.
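As a minimal sketch (the region objects and directory names here are hypothetical),
exporting two different selections to separate directories might look like:

.. code-block:: python

    # keep the exports in separate directories so that one does not
    # clean out the other
    reader_a = region_a.create_firefly_object(datadir="FireflyGalaxyA")
    reader_b = region_b.create_firefly_object(datadir="FireflyGalaxyB")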
.. image:: _images/firefly_example.png
:width: 85%
:align: center
:alt: Screenshot of a sample Firefly visualization
Exporting an Example Dataset to Firefly
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here is an example of how to use yt to export data to Firefly using some
`sample data <https://yt-project.org/data/>`_.
.. code-block:: python
import yt
ramses_ds = yt.load("DICEGalaxyDisk_nonCosmological/output_00002/info_00002.txt")
region = ramses_ds.sphere(ramses_ds.domain_center, (1000, "kpc"))
reader = region.create_firefly_object(
"IsoGalaxyRamses",
fields_to_include=["particle_extra_field_1", "particle_extra_field_2"],
fields_units=["dimensionless", "dimensionless"],
)
## adjust some of the options
reader.settings["color"]["io"] = [1, 1, 0, 1] ## set default color
reader.particleGroups[0].decimation_factor = 100 ## increase the decimation factor
## dump files to
## ~/IsoGalaxyRamses/Dataio000.ffly
## ~/IsoGalaxyRamses/filenames.json
## ~/IsoGalaxyRamses/DataSettings.json
reader.writeToDisk()
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/visualizing_particle_datasets_with_firefly.rst | visualizing_particle_datasets_with_firefly.rst |
.. _geographic_projections_and_transforms:
Geographic Projections and Transforms
=====================================
Geographic data that is on a sphere can be visualized by projecting that data
onto a representation of that sphere flattened into 2d space. There exist a
number of projection types, which can be found in the `cartopy
documentation <https://scitools.org.uk/cartopy/docs/latest/crs/projections.html>`_.
With support from `cartopy <https://scitools.org.uk/cartopy/docs/latest/>`_,
``yt`` now supports these projection
types for geographically loaded data.
Underlying data is assumed to have a transform of `PlateCarree
<https://scitools.org.uk/cartopy/docs/latest/reference/projections.html#platecarree>`__,
which is data on a flattened, rectangular, latitude/longitude grid. This is a
typical format for geographic data.
The distinction between the data transform and projection is worth noting. The data
transform is the coordinate system your data is defined in, and the data projection is
what the resulting plot will display. For more information on this difference,
refer to `the cartopy documentation on these differences
<https://scitools.org.uk/cartopy/docs/latest/tutorials/understanding_transform.html>`_.
If your data is not in the assumed PlateCarree form, feel free to open an issue or file a pull
request on the ``yt`` GitHub page for this feature.
It should be noted that
these projections are not the same as yt's ProjectionPlot. For more information
on yt's projection plots, see :ref:`projection-types`.
.. _install-cartopy:
Installing Cartopy
^^^^^^^^^^^^^^^^^^
In order to access the geographic projection functionality, you will need to have an
installation of ``cartopy`` available on your machine. Please refer to `Cartopy's
documentation for detailed instructions <https://scitools.org.uk/cartopy/docs/latest/installing.html>`_.
Using Basic Transforms
^^^^^^^^^^^^^^^^^^^^^^^
As mentioned above, the default data transform is assumed to be of `PlateCarree
<https://scitools.org.uk/cartopy/docs/latest/crs/projections.html#platecarree>`__,
which is data on a flattened, rectangular, latitude/longitude grid. To set
something other than ``PlateCarree``, the user can access the dictionary in the coordinate
handler that defines the coordinate transform to change the default transform
type. Because the transform
describes the underlying data coordinate system, the loaded dataset will carry
this newly set attribute and all future plots will have the user-defined data
transform. Also note that the dictionary is ordered by axis type. Because
slicing along the altitude may differ from, say, the latitude axis, we may
choose to have different transforms for each axis.
.. code-block:: python
ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox)
ds.coordinates.data_transform["altitude"] = "Miller"
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
In this example, the ``data_transform`` kwarg has been changed from its default
of ``PlateCarree`` to ``Miller``. You can check that you have successfully changed
the defaults by inspecting the ``data_transform`` and ``data_projection`` dictionaries
in the coordinate
handler. For this dataset, that would be accessed by:
.. code-block:: python
print(ds.coordinates.data_transform["altitude"])
print(ds.coordinates.data_projection["altitude"])
Using Basic Projections
^^^^^^^^^^^^^^^^^^^^^^^
All of the transforms available in ``Cartopy`` v0.15 and above are accessible
with this functionality.
The next few examples will use a GEOS dataset accessible from the ``yt`` data
downloads page. For details about loading this data, please
see :ref:`cookbook-geographic_projections`.
If a geographic dataset is loaded without any defined projection the default
option of ``Mollweide`` will be displayed.
.. code-block:: python
ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox)
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
If an option other than ``Mollweide`` is desired, the plot projection type can
be set with the ``set_mpl_projection`` function. The next code block illustrates how to
set the projection to a ``Robinson`` projection from the default ``Mollweide``.
.. code-block:: python
ds = yt.load_uniform_grid(data, sizes, 1.0, geometry=("geographic", dims), bbox=bbox)
p = yt.SlicePlot(ds, "altitude", "AIRDENS")
p.set_mpl_projection("Robinson")
p.show()
The axes attributes of the plot can be accessed to add in annotations, such as
coastlines. The axes are matplotlib ``GeoAxes`` so any of the annotations
available with matplotlib should be available for customization. Here a
``Robinson`` plot is made with coastline annotations.
.. code-block:: python
p.set_mpl_projection("Robinson")
p.render()
p.plots["AIRDENS"].axes.set_global()
p.plots["AIRDENS"].axes.coastlines()
p.show()
``p.render()`` is required here to access the plot axes. When a new
projection is set, the plot axes are reset and are not available unless set
up again.
Additional arguments can be passed to the projection function for further
customization. If additional arguments are desired, then rather than passing a
string of the projection name, one would pass a 2 or 3-item tuple, the first
item of the tuple corresponding to a string of the transform name, and the
second and third items corresponding to the args and kwargs of the transform,
respectively.
Alternatively, a user can pass a transform object rather than a string or tuple.
This allows for users to
create and define their own transforms, beyond what is available in cartopy.
The type must be a cartopy GeoAxes object or a matplotlib transform object. For
creating custom transforms, see `the matplotlib example
<https://matplotlib.org/examples/api/custom_projection_example.html>`_.
The function ``set_mpl_projection()`` accepts several input types for varying
levels of customization:
.. code-block:: python
set_mpl_projection("ProjectionType")
set_mpl_projection(("ProjectionType", (args)))
set_mpl_projection(("ProjectionType", (args), {kwargs}))
set_mpl_projection(cartopy.crs.PlateCarree())
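For example, one might select an ``Orthographic`` projection centered on a
particular longitude and latitude using either tuple form above (the values
below are purely illustrative):

.. code-block:: python

    p.set_mpl_projection(("Orthographic", (90, 45)))
    # or, equivalently, using keyword arguments
    p.set_mpl_projection(
        ("Orthographic", (), {"central_longitude": 90, "central_latitude": 45})
    )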
Further examples of using the geographic transforms with this dataset
can be found in :ref:`cookbook-geographic_projections`.
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/geographic_projections_and_transforms.rst | geographic_projections_and_transforms.rst |
.. _colormaps:
Colormaps
=========
There are several colormaps available for yt. yt includes all of the
matplotlib colormaps as well for nearly all functions. Individual
visualization functions usually allow you to specify a colormap with the
``cmap`` flag.
In yt 3.3, we changed the default yt colormap from ``algae`` to ``arbre``.
This colormap was designed and voted on by the yt community and is designed to
be easier for people with different color sensitivities as well as when printed
in black and white. In 3.3, additional colormaps ``dusk``, ``kelp`` and
``octarine`` were also added, following the same guidelines. For a deeper dive
into colormaps, see the SciPy 2015 talk by Stéfan van der Walt and Nathaniel
Smith about the new matplotlib colormap ``viridis`` at
https://www.youtube.com/watch?v=xAoljeRJ3lU .
To specify a different default colormap (including ``viridis``), in your yt
configuration file (see :ref:`configuration-file`) you can set the value
``default_colormap`` to the name of the colormap you would like. In contrast
to previous versions of yt, starting in 3.3 yt no longer overrides any
matplotlib defaults and instead only applies the colormap to yt-produced plots.
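For example, with the TOML-style configuration file used by recent versions of
yt (assumed here to live at its default location), this might look like:

.. code-block:: toml

    [yt]
    default_colormap = "viridis"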
.. _install-palettable:
Palettable and ColorBrewer2
~~~~~~~~~~~~~~~~~~~~~~~~~~~
While colormaps that employ a variety of colors often look attractive,
they are not always the best choice to convey information to one's audience.
There are numerous `articles <https://eagereyes.org/basics/rainbow-color-map>`_
and
`presentations <https://speakerdeck.com/kthyng/perceptions-of-matplotlib-colormaps>`_
that discuss how rainbow-based colormaps fail with regard to black-and-white
reproductions, colorblind audience members, and confusing color ordering.
Depending on the application, the consensus seems to be that gradients between
one or two colors are the best way for the audience to extract information
from one's figures. Many such colormaps are found in palettable.
If you have installed `palettable <http://jiffyclub.github.io/palettable/>`_
(formerly brewer2mpl), you can also access the discrete colormaps available
to that package including those from `colorbrewer <http://colorbrewer2.org>`_.
Install `palettable <http://jiffyclub.github.io/palettable/>`_ with
``pip install palettable``. To access these maps in yt, instead of supplying
the colormap name, specify a tuple of the form (name, type, number), for
example ``('RdBu', 'Diverging', 9)``. These discrete colormaps will
not be interpolated, and can be useful for creating
colorblind/printer/grayscale-friendly plots. For more information, visit
`http://colorbrewer2.org <http://colorbrewer2.org>`_.
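As a brief sketch of applying one of these discrete palettes to a projection
(assuming palettable is installed and the sample dataset is available):

.. code-block:: python

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
    # a 9-class diverging ColorBrewer palette, specified as (name, type, number)
    p.set_cmap(("gas", "density"), ("RdBu", "Diverging", 9))
    p.save("proj_with_brewer_cmap.png")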
.. _custom-colormaps:
Making and Viewing Custom Colormaps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
yt can also accommodate custom colormaps using the
:func:`~yt.visualization.color_maps.make_colormap` function.
These custom colormaps can be made to an arbitrary level of
complexity. You can make these on the fly for each yt session, or you can
store them in your :ref:`plugin-file` for access to them in every future yt
session. The example below creates two custom colormaps, one that has
three equally spaced bars of blue, white and red, and the other that
interpolates over intervals of increasing length from black to red, to green,
to blue. These will be accessible for the rest of the yt session as
'french_flag' and 'weird'. See
:func:`~yt.visualization.color_maps.make_colormap` and
:func:`~yt.visualization.color_maps.show_colormaps` for more details.
.. code-block:: python
yt.make_colormap(
[("blue", 20), ("white", 20), ("red", 20)],
name="french_flag",
interpolate=False,
)
yt.make_colormap(
[("black", 5), ("red", 10), ("green", 20), ("blue", 0)],
name="weird",
interpolate=True,
)
yt.show_colormaps(subset=["french_flag", "weird"], filename="cmaps.png")
All Colormaps (including matplotlib)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a chart of all of the yt and matplotlib colormaps available. In
addition to each colormap displayed here, you can access its "reverse" by simply
appending a ``"_r"`` to the end of the colormap name.
.. image:: ../_images/all_colormaps.png
:width: 512
Native yt Colormaps
~~~~~~~~~~~~~~~~~~~
.. image:: ../_images/native_yt_colormaps.png
:width: 512
Displaying Colormaps Locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To display the most up to date colormaps locally, you can use the
:func:`~yt.visualization.color_maps.show_colormaps` function. By default,
you'll see every colormap available to you, but you can specify subsets
of colormaps to display, either as just the ``yt_native`` colormaps, or
by specifying a list of colormap names. This will display all the colormaps
available in a local window:
.. code-block:: python
import yt
yt.show_colormaps()
or to output the original yt colormaps to an image file, try:
.. code-block:: python
import yt
yt.show_colormaps(
subset=[
"cmyt.algae",
"cmyt.arbre",
"cmyt.dusk",
"cmyt.kelp",
"cmyt.octarine",
"cmyt.pastel",
],
filename="yt_native.png",
)
.. note ::
Since yt 4.1, yt native colormaps are shipped as a separate package
`cmyt <https://pypi.org/project/cmyt/>`_ that can be used
outside yt itself.
Within `yt` functions, these colormaps can still be referenced without
the ``"cmyt."`` prefix. However, there is no guarantee that this will
work in upcoming versions of matplotlib, so our recommendation is to keep
the prefix at all times to retain forward compatibility.
yt also retains compatibility with names these colormaps were formerly
known as (for instance ``cmyt.pastel`` used to be named ``kamae``).
Applying a Colormap to your Rendering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All of the visualization functions in yt have a keyword allowing you to
manually specify a specific colormap. For example:
.. code-block:: python
yt.write_image(im, "output.png", cmap_name="jet")
If you're using the Plot Window interface (e.g. SlicePlot, ProjectionPlot,
etc.), it's even easier than that. Simply create your rendering, and you
can quickly swap the colormap on the fly after the fact with the ``set_cmap``
callback:
.. code-block:: python
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
p.set_cmap(field=("gas", "density"), cmap="turbo")
p.save("proj_with_turbo_cmap.png")
p.set_cmap(field=("gas", "density"), cmap="hot")
p.save("proj_with_hot_cmap.png")
For more information about the callbacks available to Plot Window objects,
see :ref:`callbacks`.
Examples of Each Colormap
~~~~~~~~~~~~~~~~~~~~~~~~~
To give the reader a better feel for how a colormap appears once it is applied
to a dataset, below we provide a library of identical projections of an
isolated galaxy where only the colormap has changed. They use the sample
dataset "IsolatedGalaxy" available at
`https://yt-project.org/data <https://yt-project.org/data>`_.
.. yt_colormaps:: cmap_images.py
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/visualizing/colormaps/index.rst | index.rst |
.. _code-support:
Code Support
============
Levels of Support for Various Codes
-----------------------------------
yt provides frontends to support several different simulation code formats
as inputs. Below is a list showing what level of support is provided for
each code. See :ref:`loading-data` for examples of loading a dataset from
each supported output format using yt.
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Capability ► | Fluid | Particles | Parameters | Units | Read on | Load Raw | Part of | Level of |
| Code/Format ▼ | Quantities | | | | Demand | Data | test suite | Support |
+=======================+============+===========+============+=======+==========+==========+============+=============+
| AMRVAC | Y | N | Y | Y | Y | Y | Y | Partial |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| AREPO | Y | Y | Y | Y | Y | Y | Y | Full [#f4]_ |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| ART | Y | Y | Y | Y | Y [#f2]_ | Y | N | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| ARTIO | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Athena | Y | N | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Athena++ | Y | N | Y | Y | Y | Y | Y | Partial |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Castro | Y | Y [#f3]_ | Partial | Y | Y | Y | N | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| CfRadial | Y | N/A | Y | Y | Y | Y | Y | [#f5]_ |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| CHOLLA | Y | N/A | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Chombo | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Enzo | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Enzo-E | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Exodus II | ? | ? | ? | ? | ? | ? | ? | ? |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| FITS | Y | N/A | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| FLASH | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gadget | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| GAMER | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gasoline | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Gizmo | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Grid Data Format (GDF)| Y | N/A | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| IAMR | ? | ? | ? | ? | ? | ? | ? | ? |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Maestro | Y [#f1]_ | N | Y | Y | Y | Y | N | Partial |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| MOAB | Y | N/A | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Nyx | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| openPMD | Y | Y | N | Y | Y | Y | N | Partial |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Orion | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| OWLS/EAGLE | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Piernik | Y | N/A | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Pluto | Y | N | Y | Y | Y | Y | Y | Partial |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| RAMSES | Y | Y | Y | Y | Y [#f2]_ | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| Tipsy | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
| WarpX | Y | Y | Y | Y | Y | Y | Y | Full |
+-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
.. [#f1] one-dimensional base-state not read in currently.
.. [#f2] These handle mesh fields using an in-memory octree that has not been parallelized.
Datasets larger than approximately 1024^3 will not scale well.
.. [#f3] Newer versions of Castro that use BoxLib's standard particle format are supported.
The older ASCII format is not.
.. [#f4] The Voronoi cells are currently treated as SPH-like particles, with a smoothing
length proportional to the cube root of the cell volume.
.. [#f5] yt provides support for cartesian-gridded CfRadial datasets. Data in native
CFRadial coordinates will be gridded on load, see :ref:`loading-cfradial-data`.
If you have a dataset that uses an output format not yet supported by yt, you
can either input your data following :ref:`loading-numpy-array` or
:ref:`generic-particle-data`, or help us by :ref:`creating_frontend` for this
new format.
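For example, a minimal sketch of wrapping an in-memory NumPy array as a yt
dataset (the field name, units, and array contents are placeholders):

.. code-block:: python

    import numpy as np
    import yt

    arr = np.random.random((64, 64, 64))
    data = {("gas", "density"): (arr, "g/cm**3")}
    ds = yt.load_uniform_grid(data, arr.shape, length_unit="Mpc")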
| yt | /yt-4.2.2.tar.gz/yt-4.2.2/doc/source/reference/code_support.rst | code_support.rst |
A Brief Introduction to Python
------------------------------
All scripts that use yt are really Python scripts that use yt as a library.
The great thing about Python is that the standard set of libraries that come
with it are very extensive -- Python comes with everything you need to write
and run mail servers and web servers, create Logo-style turtle graphics, do
arbitrary precision math, interact with the operating system, and many other
things. In addition to that, efforts by the scientific community to improve
Python for computational science have created libraries for fast array
computation, GPGPU operations, distributed computing, and visualization.
So when you use yt through the scripting interface, you get for free the
ability to interlink it with any of the other libraries available for Python.
In the past, this has been used to create new types of visualization using
OpenGL, data management using Google Docs, and even a simulation that sends an
SMS when it has new data to report on.
But, this also means learning a little bit of Python! This next section
presents a short tutorial of how to start up and think about Python, and then
moves on to how to use Python with yt.
Starting Python
+++++++++++++++
Python has two different modes of execution: interactive execution, and
scripted execution. We'll start with interactive execution and then move on to
how to write and use scripts.
Before we get started, we should briefly touch upon the commands ``help`` and
``dir``. These two commands provide a level of introspection:
``help(something)`` will return the internal documentation on ``something``,
including how it can be used and all of the possible "methods" that can be
called on it. ``dir()`` will return the available commands and objects that
can be directly called, and ``dir(something)`` will return information about
all the commands that ``something`` provides. This probably sounds a bit
opaque, but it will become clearer with time -- it's also probably helpful to
call ``help`` on any or all of the objects we create during this orientation.
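For example, to list everything a string provides and to read the built-in
documentation for the ``len`` function, you could type::

    >>> dir("hello")
    >>> help(len)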
To start up Python, at your prompt simply type:
.. code-block:: bash
$ python
This will open up Python and give you a simple prompt of three greater-than
signs. Let's inaugurate the occasion appropriately -- type this::
>>> print("Hello, world.")
As you can see, this printed out the string "Hello, world." just as we
expected. Now let's try a more advanced string, one with a number in it. For
this we'll use an "f-string", which is the preferred way to format strings in modern Python.
We'll print pi, but only with three digits of accuracy.::
>>> print(f"Pi is precisely {3.1415926:0.2f}")
This took the number we fed it (3.1415926) and printed it out as a floating
point number with two decimal places. Now let's try something a bit different
-- let's print out both the name of the number and its value.::
>>> print(f"{'pi'} is precisely {3.1415926:0.2f}")
And there you have it -- the very basics of starting up Python, and some very
simple mechanisms for printing values out. Now let's explore a few types of
data that Python can store and manipulate.
Data Types
++++++++++
Python provides a number of datatypes, but the main ones that we'll concern
ourselves with at first are lists, tuples, strings, numbers, and dictionaries.
Most of these can be instantiated in a couple different ways, and we'll look at
a few of them. Some of these objects can be modified in place, which is called
being mutable, and some are immutable and cannot be modified in place. We'll
talk below about what that means.
Perhaps most importantly, though, is an idea about how Python works in terms of
names and bindings -- this is called the "object model." When you create an
object, that is independent from binding it to a name -- think of this like
pointers in C. This also operates a bit differently for mutable and immutable
types. We'll talk a bit more about this later, but it's handy to initially
think of things in terms of references. (This is also, not coincidentally, how
Python thinks of things internally as well!) When you create an object,
initially it has no references to it -- there's nothing that points to it.
When you bind it to a name, then you are making a reference, and so its
"reference count" is now 1. When something else makes a reference to it, the
reference count goes up to 2. When the reference count returns to 0, the
object is deleted -- but not before. This concept of reference counting comes
up from time to time, but it's not going to be a focus of this orientation.
The two easiest datatypes are simply strings and numbers. We can make a string
very easily::
>>> my_string = "Hello there"
>>> print(my_string)
We can also take a look at each individual part of a string. We'll use the
'slicing' notation for this. As a brief note, slicing is 0-indexed, so that
element 0 corresponds to the first element. If we wanted to see the third
element of our string::
>>> print(my_string[2])
We can also take the third through fifth elements::
>>> print(my_string[2:5])
But note that if you try to change an element directly, Python objects and it
won't let you -- that's because strings are immutable. (But, note that because
of how the += operator works, we can do "my_string += '1'" without issue.)
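For example, trying to replace the first character in place fails with a
``TypeError``, because strings do not support item assignment::

    >>> my_string[0] = "J"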
To create a number, we do something similar::
>>> a = 10
>>> print(a)
This works for floating points as well. Now we can do math on these numbers::
>>> print(a**2)
>>> print(a + 5)
>>> print(a + 5.1)
>>> print(a / 2.0)
Now that we have a couple primitive datatypes, we can move on to sequences --
lists and tuples. These two objects are very similar, in that they are
collections of arbitrary data types. We'll only look at collections of strings
and numbers for now, but these can be filled with arbitrary datatypes
(including objects that yt provides, like spheres, datasets, grids, and
so on.) The easiest way to create a list is to simply construct one::
>>> my_list = []
At this point, you can find out how long it is, you can append elements, and
you can access them at will::
>>> my_list.append(1)
>>> my_list.append(my_string)
>>> print(my_list[0])
>>> print(my_list[-1])
>>> print(len(my_list))
You can also create a list already containing an initial set of elements::
>>> my_list = [1, 2, 3, "four"]
>>> my_list[2] = "three!!"
Lists are very powerful objects, which we'll talk about a bit below when
discussing how iteration works in Python.
A tuple is like a list, in that it's a sequence of objects, and it can be
sliced and examined piece by piece. But unlike a list, it's immutable:
whatever a tuple contains at instantiation is what it contains for the rest of
its existence. Creating a tuple is just like creating a list, except that you
use parentheses instead of brackets::
>>> my_tuple = (1, "a", 62.6)
Tuples show up very commonly when handling arguments to Python functions and
when dealing with multiple return values from a function. They can also be
unpacked::
>>> v1, v2, v3 = my_tuple
will assign 1, "a", and 62.6 to v1, v2, and v3, respectively.
Mutables vs Immutables and Is Versus Equals
+++++++++++++++++++++++++++++++++++++++++++
This section is not a "must read" -- it's more of an exploration of how
Python's objects work. At some point this is something you may want to be
familiar with, but it's not strictly necessary on your first pass.
Python provides the operator ``is`` as well as the comparison operator ``==``.
The operator ``is`` determines whether two objects are in fact the same object,
whereas the operator ``==`` determines if they are equal, according to some
arbitrarily defined equality operation. Think of this like comparing the
serial numbers on two pictures of a dollar bill (the ``is`` operator) versus
comparing the values of two pieces of currency (the ``==`` operator).
This digs in to the idea of how the Python object model works, so let's test
some things out. For instance, let's take a look at comparing two floating
point numbers::
>>> a = 10.1
>>> b = 10.1
>>> print(a == b)
>>> print(a is b)
The first one returned True, but the second one returned False. Even though
both numbers are equal, they point to different points in memory. Now let's
try assigning things a bit differently::
>>> b = a
>>> print(a is b)
This time it's true -- they point to the same part of memory. Try incrementing
one and seeing what happens. Now let's try this with a string::
>>> a = "Hi there"
>>> b = a
>>> print(a is b)
Okay, so our intuition here works the same way, and it returns True. But what
happens if we modify the string?::
>>> a += "!"
>>> print(a)
>>> print(b)
>>> print(a is b)
As you can see, now not only does a contain the value "Hi there!", but it also
is a different value than what b contains, and it also points to a different
region in memory. That's because strings are immutable -- the act of adding on
"!" actually creates an entirely new string and assigns that entirely new
string to the variable a, leaving the string pointed to by b untouched.
With lists, which are mutable, we have a bit more liberty with how we modify
the items and how that modifies the object and its pointers. A list is really
just a pointer to a collection; the list object itself does not have any
special knowledge of what constitutes that list. So when we initialize a and
b::
>>> a = [1, 5, 1094.154]
>>> b = a
We end up with two pointers to the same set of objects. (We can also have a
list inside a list, which adds another fun layer.) Now when we modify a, it
shows up in b::
>>> a.append("hat wobble")
>>> print(b[-1])
This also works with the concatenation operator::
>>> a += ["beta sequences"]
>>> print(a[-1], b[-1])
But we can force a break in this by slicing the list when we initialize::
>>> a = [1, 2, 3, 4]
>>> b = a[:]
>>> a.append(5)
>>> print(b[-1], a[-1])
Here they are different, because we have sliced the list when initializing b.
The coolest datatype available in Python, however, is the dictionary. This is
a mapping object of key:value pairs, where one value is used to look up another
value. We can instantiate a dictionary in a variety of ways, but for now we'll
only look at one of the simplest mechanisms for doing so::
>>> my_dict = {}
>>> my_dict["A"] = 1.0
>>> my_dict["B"] = 154.014
>>> my_dict[14001] = "This number is great"
>>> print(my_dict["A"])
As you can see, one value can be used to look up another. Almost all datatypes
(with a few notable exceptions, but for the most part these are quite uncommon)
can be used as a key, and you can use any object as a value.
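For example, a tuple (being immutable) works perfectly well as a key, and you
can check whether a key is present with the ``in`` operator::

    >>> my_dict[(1, 2)] = "a tuple as a key"
    >>> print((1, 2) in my_dict)
    >>> print("C" in my_dict)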
We won't spend too much time discussing dictionaries explicitly, but I will
leave you with a word on their efficiency: the Python lookup algorithm is known
for its hand-tuned optimization and speed, and it's very common to use
dictionaries to look up hundreds of thousands or even millions of elements and
to expect it to be responsive.
Looping
+++++++
Looping in Python is both different and more powerful than in lower-level
languages. Rather than looping based exclusively on conditionals (which is
possible in Python), the fundamental mode of looping in Python is iterating
over objects. In C, one might construct a loop where some counter variable is
initialized, and at each iteration of the loop it is incremented and compared
against a reference value; when the counter variable reaches the reference
variable, the loop is terminated.
In Python, on the other hand, to accomplish iteration through a set of
sequential integers, one actually constructs a sequence of those integers, and
iterates over that sequence. For more discussion of this, and some very, very
powerful ways of accomplishing this iteration process, look through the Python
documentation for the words 'iterable' and 'generator.'
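As a tiny, entirely optional preview of what a generator looks like (this is a
minimal sketch that uses the function-definition syntax we only touch on at the
very end of this orientation, so feel free to skim past it)::
>>> def countdown(n):
...     while n > 0:
...         yield n  # hand back one value, then pause here until asked again
...         n -= 1
...
>>> gen = countdown(3)
>>> print(next(gen))
>>> print(next(gen))
Each value is only produced when ``next`` asks for it, which is the lazy,
on-demand behavior that makes generators so powerful.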
To see this in action, let's first take a look at the built-in function
``range``. ::
>>> print(range(10))
As you can see, in Python 3 the function ``range`` does not return a list; it
returns a ``range`` object representing a sequence of integers, starting at
zero, that is as long as the argument to the ``range`` function. In practice,
this means that iterating over ``range(N)`` yields ``0, 1, 2, ... N-1`` (and
``list(range(N))`` will turn that into an actual list). So now we can execute
a for loop, but first, an important interlude:
Control blocks in Python are delimited by white space.
This means that, unlike in C with its curly braces, you indicate an isolated
control block for conditionals, function declarations, loops and other things
with indentation. When that control block ends, you dedent the text. In
yt, we use four spaces -- I recommend you do the same -- which can be inserted
by a text editor in place of tab characters.
Let's try this out with a for loop. First type ``for i in range(10):`` and
press enter. This will change the prompt to be three periods, instead of three
greater-than signs, and you will be expected to hit the tab key to indent.
Then type "print(i)", press enter, and then instead of indenting again, press
enter again. The entire entry should look like this::
>>> for i in range(10):
... print(i)
...
As you can see, it prints out each integer in turn. So far this feels a lot
like C. (It won't, once you start leaning on iterables -- indeed, in Python 3
``range`` itself does not build the full list up front; it returns the promise
of a sequence, whose elements aren't created until they are requested.) Let's
try it with our earlier list::
>>> my_sequence = ["a", "b", 4, 110.4]
>>> for i in my_sequence:
... print(i)
...
This time it prints out every item in the sequence.
A common idiom is to keep track of which index the loop is currently at. The
first time this is written, it usually goes something like this::
>>> index = 0
>>> my_sequence = ["a", "b", 4, 110.4]
>>> for i in my_sequence:
... print("%s = %s" % (index, i))
... index += 1
...
This does what you would expect: it prints out the index we're at, then the
value of that index in the list. But there's an easier way to do this, less
prone to error -- and a bit cleaner! You can use the ``enumerate`` function to
accomplish this::
>>> my_sequence = ["a", "b", 4, 110.4]
>>> for index, val in enumerate(my_sequence):
... print("%s = %s" % (index, val))
...
This does the exact same thing, but we didn't have to keep track of the counter
variable ourselves. You can use the function ``reversed`` to reverse a
sequence in a similar fashion. Try this out::
>>> my_sequence = range(10)
>>> for val in reversed(my_sequence):
... print(val)
...
We can even combine the two!::
>>> my_sequence = range(10)
>>> for index, val in enumerate(reversed(my_sequence)):
... print("%s = %s" % (index, val))
...
The most fun of all the built-in functions that operate on iterables, however,
is the ``zip`` function. This function will combine two sequences (but only up
to the shorter of the two -- so if one has 16 elements and the other 1000, the
zipped sequence will only have 16) and let you iterate over both in lockstep,
one pair of elements at a time.
As an example, let's say you have two sequences of values, and you want to
produce a single combined sequence from them.::
>>> seq1 = ["Hello", "What's up", "I'm fine"]
>>> seq2 = ["!", "?", "."]
>>> seq3 = []
>>> for v1, v2 in zip(seq1, seq2):
... seq3.append(v1 + v2)
...
>>> print(seq3)
As you can see, this is much easier than constructing index values by hand and
then drawing from the two sequences using those index values. I should note
that while this is great in some instances, for numeric operations, NumPy
arrays (discussed below) will invariably be faster.
Conditionals
++++++++++++
Conditionals, like loops, are delimited by indentation. They follow a
relatively simple structure, with an "if" statement, followed by the
conditional itself, and then a block of indented text to be executed in the
event of the success of that conditional. For subsequent conditionals, the
word "elif" is used, and for the default, the word "else" is used.
As a brief aside, what would be a case/switch statement in other languages is
typically written in Python as an if/elif/else block; it can also be emulated
with a dictionary that maps values to functions (a small sketch follows), but
that typically only adds unnecessary complexity.
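For the curious, here is a rough sketch of that dictionary-based alternative;
the handler functions and their names are invented purely for illustration
(function definitions are covered at the end of this orientation)::
>>> def handle_start():
...     print("starting")
...
>>> def handle_stop():
...     print("stopping")
...
>>> actions = {"start": handle_start, "stop": handle_stop}
>>> actions["start"]()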
For a simple example of how to do an if/else statement, we'll return to the
idea of iterating over a loop of numbers. We'll use the ``%`` operator, which
is a binary modulus operation: it divides the first number by the second and
then returns the remainder. Our first pass will examine the remainders from
dividing by 2, and print out all the even numbers. (There are of course easier
ways of determining which numbers are multiples of 2 -- particularly using
NumPy, as we'll do below.)::
>>> for val in range(100):
... if val % 2 == 0:
... print("%s is a multiple of 2" % (val))
...
Now we'll add on an ``else`` statement, so that we print out all the odd
numbers as well, with the caveat that they are not multiples of 2.::
>>> for val in range(100):
... if val % 2 == 0:
... print("%s is a multiple of 2" % (val))
... else:
... print("%s is not a multiple of 2" % (val))
...
Let's extend this to check the remainders of division with both 2 and 3, and
determine which numbers are multiples of 2, 3, or neither. We'll do this for
all numbers between 0 and 99.::
>>> for val in range(100):
... if val % 2 == 0:
... print("%s is a multiple of 2" % (val))
... elif val % 3 == 0:
... print("%s is a multiple of 3" % (val))
... else:
... print("%s is not a multiple of 2 or 3" % (val))
...
This should print out which numbers are multiples of 2 or 3 -- but note that
we're not catching all the multiples of 6, which are multiples of both 2 and 3.
To do that, we have a couple options, but we can start with just changing the
first if statement to encompass both, using the ``and`` operator::
>>> for val in range(100):
...     if val % 2 == 0 and val % 3 == 0:
... print("%s is a multiple of 6" % (val))
... elif val % 2 == 0:
... print("%s is a multiple of 2" % (val))
... elif val % 3 == 0:
... print("%s is a multiple of 3" % (val))
... else:
... print("%s is not a multiple of 2 or 3" % (val))
...
In addition to the ``and`` operator, the ``or`` and ``not`` operators work in
the expected manner. There are also several built-in functions, including
``any`` and ``all``, that operate on sequences of conditionals, but those are
perhaps better saved for later.
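If you are curious anyway, here is a minimal taste: both take a sequence of
booleans, such as one built up in a loop::
>>> checks = []
>>> for val in [2, 4, 6, 7]:
...     checks.append(val % 2 == 0)
...
>>> print(all(checks))
>>> print(any(checks))
Here ``all`` reports False (since 7 is odd) while ``any`` reports True.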
Array Operations
++++++++++++++++
In general, iteration over sequences carries with it some substantial overhead:
each value is selected, bound to a local name, and then its type is determined
when it is acted upon. This is, regrettably, the price of the generality that
Python brings with it. While this overhead is minimal for operations acting on
a handful of values, if you have a million floating point elements in a
sequence and you want to simply add 1.2 to all of them, or multiply them by
2.5, or exponentiate them, this carries with it a substantial performance hit.
To accommodate this, the NumPy library has been created to provide very fast
operations on arrays of numerical elements. When you create a NumPy array, you
are creating a shaped array of (potentially) sequential locations in memory
which can be operated on at the C-level, rather than at the interpreted Python
level. For this reason, while NumPy arrays can act like Python sequences --
and can thus be iterated over, modified in place, and sliced -- they can also be
addressed as a monolithic block. All of the fluid and particle quantities used
in yt will be expressed as NumPy arrays, allowing for both efficient
computation and a minimal memory footprint.
For instance, the following operation will not work in standard Python::
>>> vals = list(range(10))
>>> vals *= 2.0
(Note that multiplying vals by the integer 2 will not do what you think: rather
than multiplying each value by 2, it will simply double the length of the
list!)
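To see that behavior for yourself (a quick aside you can safely skip)::
>>> vals = list(range(3))
>>> print(vals * 2)
This prints the six-element list ``[0, 1, 2, 0, 1, 2]`` rather than doubling
each value.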
To get started with array operations, let's first import the NumPy library.
This is the first time we've seen an import in this orientation, so we'll
dwell for a moment on what this means. When a library is imported, it is read
from disk, the functions are loaded into memory, and they are made available
to the user. So when we execute::
>>> import numpy
The ``numpy`` module is loaded, and then can be accessed::
>>> numpy.arange(10)
This calls the ``arange`` function that belongs to the ``numpy`` module's
"namespace." We'll use the term namespace to refer to the variables,
functions, and submodules that belong to a given conceptual region. We can
also extend our current namespace with the contents of the ``numpy`` module, so
that we don't have to prefix all of our calls to ``numpy`` functions with
``numpy.``, but we will not do so here, so as to preserve the distinction
between the built-in Python functions and the NumPy-provided functions.
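As an aside, in other people's code you will very often see NumPy imported
under a short alias; this is simply a rename at import time, shown here only so
it looks familiar -- in this orientation we will keep writing ``numpy.`` out in
full::
>>> import numpy as np
>>> np.arange(10)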
To get started, let's perform the NumPy version of getting a sequence of
numbers from 0 to 99::
>>> my_array = numpy.arange(100)
>>> print(my_array)
>>> print(my_array * 2.0)
>>> print(my_array * 2)
As you can see, each of these operations does exactly what we think it ought
to. In-place operations work as well, with one caveat: an in-place operation
cannot change the array's datatype, so because ``my_array`` holds integers we
multiply in place by an integer here (multiplying in place by ``2.0`` would be
refused)::
>>> my_array *= 2
So far we've only examined what happens if we operate on a single array of
a given shape -- specifically, an array that is N elements long, but
only one dimensional. NumPy arrays are, for the most part, defined by their
data, their shape, and their data type. We can examine both the shape (which
includes dimensionality) and the size (strictly the total number of elements)
of an array by looking at a couple of properties of the array::
>>> print(my_array.size)
>>> print(my_array.shape)
Note that size must be the product of the components of the shape. In this
case, both are 100. We can obtain a new array of a different shape by calling
the ``reshape`` method on an array::
>>> print(my_array.reshape((10, 10)))
In this case, we have not modified ``my_array`` but instead created a new array
containing the same elements, but with a different dimensionality and shape.
You can modify an array's shape in place, as well, but that should be done with
care and the explanation of how that works and its caveats can come a bit
later.
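As a minimal sketch of the difference (the variable name ``square`` is ours,
purely for illustration): ``reshape`` hands back a new view and leaves the
original alone, whereas assigning to the ``shape`` attribute changes an array
in place::
>>> square = my_array.reshape((10, 10))  # a new view; my_array itself is unchanged
>>> print(my_array.shape, square.shape)
>>> square.shape = (100,)  # in-place: the total number of elements must not change
>>> print(square.shape)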
There are a few other important characteristics of arrays, and ways to create
them. We can see what kind of datatype an array is by examining its ``dtype``
attribute::
>>> print(my_array.dtype)
This can be changed by calling ``astype`` with another datatype. Datatypes
include, but are not limited to, ``int32``, ``int64``, ``float32``, and
``float64``.::
>>> float_array = my_array.astype("float64")
Arrays can also be operated on together, in lieu of something like an iteration
using the ``zip`` function. To show this, we'll use the
``numpy.random.random`` function to generate a random set of values of length
100, and then we'll multiply our original array against those random values.::
>>> rand_array = numpy.random.random(100)
>>> print(rand_array * my_array)
There are a number of functions you can call on arrays, as well. For
instance::
>>> print(rand_array.sum())
>>> print(rand_array.mean())
>>> print(rand_array.min())
>>> print(rand_array.max())
Indexing in NumPy is very fun, and also provides some advanced functionality
for selecting values. You can slice and dice arrays::
>>> print(my_array[50:60])
>>> print(my_array[::2])
>>> print(my_array[:-10])
But NumPy also provides the ability to construct boolean arrays, which are the
result of conditionals. For example, let's say that you wanted to generate a
random set of values and select only those less than 0.2::
>>> rand_array = numpy.random.random(100)
>>> print(rand_array < 0.2)
What is returned is a long array of booleans. Boolean arrays can be used as
indices -- what this means is that you can construct an index array and then
use it to select only those values where that index array is true. In this
example we also use the ``numpy.all`` and ``numpy.any`` functions, which do
exactly what you might think -- they evaluate a statement and check whether
all elements satisfy it, or whether any individual element satisfies it,
respectively.::
>>> ind_array = rand_array < 0.2
>>> print(rand_array[ind_array])
>>> print(numpy.all(rand_array[ind_array] < 0.2))
You can even skip the creation of the variable ``ind_array`` completely, and
instead just coalesce the statements into a single statement::
>>> print(numpy.all(rand_array[rand_array < 0.2] < 0.2))
>>> print(numpy.any(rand_array[rand_array < 0.2] > 0.2))
You might look at these and wonder why this is useful -- we've already selected
those elements that are less than 0.2, so why do we want to re-evaluate it?
But the interesting component to this is that a conditional applied to one
array can be used to index another array. For instance::
>>> print(my_array[rand_array < 0.2])
Here we've identified those elements in our random number array that are less
than 0.2, and printed the corresponding elements from our original sequential
array of integers. This is actually a great way of selecting a random sample
of a dataset -- in this case we get back approximately 20% of the dataset
``my_array``, selected at random.
To create arrays from nothing, several options are available. The command
``numpy.array`` will create an array from any arbitrary sequence::
>>> my_sequence = [1.0, 510.42, 1789532.01482]
>>> my_array = numpy.array(my_sequence)
Additionally, arrays full of ones and zeros can be created::
>>> my_integer_ones = numpy.ones(100, dtype="int64")
>>> my_float_ones = numpy.ones(100, dtype="float64")
>>> my_integer_zeros = numpy.zeros(100, dtype="int64")
>>> my_float_zeros = numpy.zeros(100, dtype="float64")
(Note that ``numpy.ones`` and ``numpy.zeros`` default to ``float64``, so we
request ``int64`` explicitly for the integer versions.)
The function ``numpy.concatenate`` is also useful, but outside the scope of
this orientation.
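Just so the name is not a complete mystery, here is a one-line sketch of what
it does -- it joins arrays end to end::
>>> print(numpy.concatenate([numpy.zeros(3), numpy.ones(3)]))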
The NumPy documentation has a number of more advanced mechanisms for combining
arrays; the documentation for "broadcasting" in particular is very useful, and
covers mechanisms for combining arrays of different shapes and sizes, which can
be tricky but also extremely powerful. We won't discuss the idea of
broadcasting here, simply because I don't know that I could do it justice! The
NumPy Docs have a great `section on broadcasting
<https://numpy.org/doc/stable/user/basics.broadcasting.html>`_.
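Still, as the briefest possible teaser (a minimal sketch; see the linked
documentation for the real treatment), broadcasting is what lets arrays of
different but compatible shapes combine -- here a length-3 array is added to
every row of a (4, 3) array::
>>> grid = numpy.zeros((4, 3))
>>> row = numpy.array([1.0, 2.0, 3.0])
>>> print(grid + row)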
Scripted Usage
++++++++++++++
We've now explored Python interactively. However, for long-running analysis
tasks or analysis tasks meant to be run on a compute cluster non-interactively,
we will want to utilize its scripting interface. Let's start by quitting out
of the interpreter. If you have not already done so, you can quit by pressing
"Ctrl-D", which will free all memory used by Python and return you to your
shell's command prompt.
At this point, open up a text editor and edit a file called
``my_first_script.py``. Python scripts typically end in the extension ``.py``.
We'll start our scripting tests by doing some timing of array operations versus
sequence operations. Into this file, type this text::
import numpy
import time
my_array = numpy.arange(1000000, dtype="float64")
t1 = time.time()
my_array_squared = my_array**2.0
t2 = time.time()
print("It took me %0.3e seconds to square the array using NumPy" % (t2-t1))
t1 = time.time()
my_sequence_squared = []
for i in range(1000000):
    my_sequence_squared.append(i**2.0)
t2 = time.time()
print("It took me %0.3e seconds to square the sequence without NumPy" % (t2-t1))
Now save this file, and return to the command prompt. We can execute it by
supplying it to Python:
.. code-block:: bash
$ python my_first_script.py
It should run, display two pieces of information, and terminate, leaving you
back at the command prompt. On my laptop, the array operation is approximately
42 times faster than the sequence operation! Of course, depending on the
operation conducted, this number can go up quite substantially.
If you want to run a Python script and then be given a Python interpreter
prompt, you can call the ``python`` command with the option ``-i``:
.. code-block:: bash
$ python -i my_first_script.py
Python will execute the script and when it has reached the end it will give you
a command prompt. At this point, all of the variables you have set up and
created will be available to you -- so you can, for instance, print out the
contents of ``my_array_squared``::
>>> print(my_array_squared)
The scripting interface for Python is quite powerful, and by combining it with
interactive execution, you can, for instance, set up variables and functions
for interactive exploration of data.
Functions and Objects
+++++++++++++++++++++
Functions and objects are the easiest way to perform very complex, powerful
actions in Python. For the most part we will not discuss them; in
fact, the standard Python tutorial that comes with the Python documentation is
a very good explanation of how to create and use objects and functions, and
attempting to replicate it here would simply be futile.
yt provides many objects and functions for your use, and it is through the
use and combination of these functions and objects that you will be able to
create plots, manipulate data, and visualize your data.
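Still, just so the syntax is not entirely foreign when you meet it in the
cookbook and elsewhere, here is a minimal sketch of defining and calling a
function (the name and arguments are invented for illustration)::
>>> def describe(name, value):
...     print("%s = %s" % (name, value))
...
>>> describe("density", 1.67e-24)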
And with that, we conclude our brief introduction to Python. I recommend
checking out the standard Python tutorial or browsing some of the NumPy
documentation. If you're looking for a book to buy, the only book I've
personally ever been completely satisfied with has been David Beazley's book on
Python Essentials and the Python standard library, but I've also heard good
things about many of the others, including those by Alex Martelli and Wesley
Chun.
We'll now move on to talking more about how to use yt, both from a scripting
perspective and interactively.
Python and Related References
+++++++++++++++++++++++++++++
* `Python quickstart <https://docs.python.org/3/tutorial/>`_
* `Learn Python the Hard Way <https://learnpythonthehardway.org/python3/>`_
* `Byte of Python <https://python.swaroopch.com/>`_
* `Dive Into Python <https://diveintopython3.problemsolving.io/>`_
* `Numpy docs <https://numpy.org/doc/stable/>`_
* `Matplotlib docs <https://matplotlib.org>`_
.. _command-line:
Command-Line Usage
------------------
Command-line Functions
~~~~~~~~~~~~~~~~~~~~~~
The :code:`yt` command-line tool allows you to access some of yt's basic
functionality without opening a python interpreter. The tool is a collection of
subcommands. These can quickly make plots of slices and projections through a
dataset, update yt's codebase, print basic statistics about a dataset, launch
an IPython notebook session, and more. To get a quick list of what is
available, just type:
.. code-block:: bash
yt -h
This will print the list of available subcommands:
.. config_help:: yt
To execute any such function, simply run:
.. code-block:: bash
yt <subcommand>
Finally, to identify the options associated with any of these subcommands, run:
.. code-block:: bash
yt <subcommand> -h
Plotting from the command line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, we'll discuss plotting from the command line, then we will give a brief
summary of the functionality provided by each command line subcommand. This
example uses the :code:`DD0010/moving7_0010` dataset distributed in the yt
git repository.
First let's see what our options are for plotting:
.. code-block:: bash
$ yt plot --help
There are many! We can choose whether we want a slice (default) or a
projection (``-p``), the field, the colormap, the center of the image, the
width and unit of width of the image, the limits, the weighting field for
projections, and on and on. By default the plotting command will execute the
same thing along all three axes, so keep that in mind if it takes three times
as long as you'd like! The center of a slice defaults to the center of
the domain, so let's just give that a shot and see what it looks like:
.. code-block:: bash
$ yt plot DD0010/moving7_0010
Well, that looks pretty bad! What has happened here is that the center of the
domain only has some minor shifts in density, so the plot is essentially
incomprehensible. Let's try it again, but instead of slicing, let's project.
This is a line integral through the domain, and for the density field this
becomes a column density:
.. code-block:: bash
$ yt plot -p DD0010/moving7_0010
Now that looks much better! Note that all three axes' projections appear
nearly indistinguishable, because of how the two spheres are located in the
domain. We could center our domain on one of the spheres and take a slice, as
well. Now let's see what the domain looks like with grids overlaid, using the
``--show-grids`` option:
.. code-block:: bash
$ yt plot --show-grids -p DD0010/moving7_0010
We can now see all the grids in the field of view. If you want to
annotate your plot with a scale bar, you can use the
``--show-scale-bar`` option:
.. code-block:: bash
$ yt plot --show-scale-bar -p DD0010/moving7_0010
Command-line subcommand summary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
help
++++
Help lists all of the various command-line options in yt.
instinfo and version
++++++++++++++++++++
This gives information about where your yt installation is, what version
and changeset you're using, and more.
mapserver
+++++++++
Ever wanted to interact with your data using a
`google maps <http://maps.google.com/>`_-style interface? Now you can by using the
yt mapserver. See :ref:`mapserver` for more details.
pastebin and pastebin_grab
++++++++++++++++++++++++++
The `pastebin <http://paste.yt-project.org/>`_ is an online location where
you can anonymously post code snippets and error messages to share with
other users in a quick, informal way. It is often useful for debugging
code or co-developing. By running the ``pastebin`` subcommand with a
text file, you send the contents of that file to an anonymous pastebin:
.. code-block:: bash
yt pastebin my_script.py
Running the ``pastebin_grab`` subcommand with a pastebin number
(e.g. 1768) will grab the contents of that pastebin
(e.g. the website http://paste.yt-project.org/show/1768 ) and send it to
STDOUT for local use. See :ref:`pastebin` for more information.
.. code-block:: bash
yt pastebin_grab 1768
upload
++++++
Upload a file to a public curldrop instance. Curldrop is a simple web
application that allows you to upload and download files straight from your
terminal with an HTTP client such as curl. It was initially developed by
`Kevin Kennell <https://github.com/kennell/curldrop>`_ and later forked and
adjusted for yt’s needs. After a successful upload you will receive a URL that
can be used to share the data with other people.
.. code-block:: bash
yt upload my_file.tar.gz
plot
++++
This command generates one or many simple plots for a single dataset.
By specifying the axis, center, width, etc. (run ``yt help plot`` for
details), you can create slices and projections easily at the
command-line.
rpdb
++++
Connect to a currently running (on localhost) rpdb session. See
:ref:`remote-debugging` for more info.
notebook
++++++++
Launches a Jupyter notebook server and prints out instructions on how to open
an ssh tunnel to connect to the notebook server with a web browser. This is
most useful when you want to run a Jupyter notebook using CPUs on a remote
host.
stats
+++++
This subcommand provides you with some basic statistics on a given dataset.
It reports the number of grids and cells on each level, the time
of the dataset, and the resolution. It is equivalent to calling the
``Dataset.print_stats`` method.
Additionally, there is the option to print the minimum, maximum, or both for
a given field. The field is assumed to be density by default:
.. code-block:: bash
yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max --min
or a different field can be specified using the ``-f`` flag:
.. code-block:: bash
yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max --min -f gas,temperature
The field-related stats output from this command can be directed to a file using
the ``-o`` flag:
.. code-block:: bash
yt stats GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150 --max -o out_stats.dat
update
++++++
This subcommand updates the yt installation to the most recent version for
your repository (e.g. stable, 2.0, development, etc.). Adding the ``--all``
flag will update the dependencies as well.
.. _upload-image:
upload_image
++++++++++++
Images are often worth a thousand words, so when you're trying to
share a piece of code that generates an image, or you're trying to
debug image-generation scripts, it can be useful to send your
co-authors a link to the image. This subcommand makes such sharing
a breeze. By specifying the image to share, ``upload_image`` automatically
uploads it anonymously to the website `imgur.com <https://imgur.com/>`_ and
provides you with a link to share with your collaborators. Note that the
image *must* be in the PNG format in order to use this function.
delete_image
++++++++++++
An image uploaded using ``upload_image`` is assigned a unique hash that
can be used to remove it. This subcommand provides an easy way to send a
delete request directly to `imgur.com <https://imgur.com/>`_.
download
++++++++
This subcommand downloads a file from https://yt-project.org/data. Using ``yt download``,
one can download a file to:
* ``"test_data_dir"``: Save the file to the location specified in
the ``"test_data_dir"`` configuration entry for test data.
* ``"supp_data_dir"``: Save the file to the location specified in
the ``"supp_data_dir"`` configuration entry for supplemental data.
* Any valid path to a location on disk, e.g. ``/home/jzuhone/data``.
Examples:
.. code-block:: bash
$ yt download apec_emissivity_v2.h5 supp_data_dir
.. code-block:: bash
$ yt download GasSloshing.tar.gz test_data_dir
.. code-block:: bash
$ yt download ZeldovichPancake.tar.gz /Users/jzuhone/workspace
If the configuration values ``"test_data_dir"`` or ``"supp_data_dir"`` have not
been set by the user, an error will be thrown.
Config helper
~~~~~~~~~~~~~
The :code:`yt config` command-line tool allows you to modify and access yt's
configuration without manually locating and opening the config file in an editor.
To get a quick list of available commands, just type:
.. code-block:: bash
yt config -h
This will print the list of available subcommands:
.. config_help:: yt config
Since yt version 4, the configuration file is located in ``$XDG_CONFIG_HOME/yt/yt.toml`` adhering to the
`XDG Base Directory Specification
<https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html>`_.
Unless customized, this defaults to ``$HOME/.config/`` on Unix-like systems (macOS, Linux, ...).
The old configuration file (``$XDG_CONFIG_HOME/yt/ytrc``) is deprecated.
In order to perform an automatic migration of the old config, you are
encouraged to run:
.. code-block:: bash
yt config migrate
This will convert your old config file to the toml format. The original file
will be moved to ``$XDG_CONFIG_HOME/yt/ytrc.bak``.
Examples
++++++++
Listing current content of the config file:
.. code-block:: bash
$ yt config list
[yt]
log_level = 50
Obtaining a single config value by name:
.. code-block:: bash
$ yt config get yt log_level
50
Changing a single config value:
.. code-block:: bash
$ yt config set yt log_level 10
Removing a single config entry:
.. code-block:: bash
$ yt config rm yt log_level
Reference Materials
===================
Here we include reference materials for yt with a list of all the code formats
supported, a description of how to use yt at the command line, a detailed
listing of individual classes and functions, a description of the useful
config file, and finally a list of changes between each major release of the
code.
.. toctree::
:maxdepth: 2
code_support
command-line
api/api
api/modules
configuration
python_introduction
field_list
demeshening
changelog
.. _changelog:
ChangeLog
=========
This is a non-comprehensive log of changes to yt over its many releases.
Contributors
------------
The `CREDITS file <https://github.com/yt-project/yt/blob/main/CREDITS>`_
contains the most up-to-date list of everyone who has contributed to the yt
source code.
yt 4.0
------
Welcome to yt 4.0! This release is the result of several years worth of
developer effort and has been in progress since the mid 3.x series. Please keep
in mind that this release **will** have breaking changes. Please see the yt 4.0
differences page for how you can expect behavior to differ from the 3.x series.
This is a manually curated list of pull requests that went in to yt 4.0,
representing a subset of `the full
list <https://gist.github.com/matthewturk/7a1f21d98aa5188de7645eda082ce4e6>`__.
New Functions
^^^^^^^^^^^^^
- ``yt.load_sample`` (PR
#\ `2417 <https://github.com/yt-project/yt/pull/2417>`__, PR
#\ `2496 <https://github.com/yt-project/yt/pull/2496>`__, PR
#\ `2875 <https://github.com/yt-project/yt/pull/2875>`__, PR
#\ `2877 <https://github.com/yt-project/yt/pull/2877>`__, PR
#\ `2894 <https://github.com/yt-project/yt/pull/2894>`__, PR
#\ `3262 <https://github.com/yt-project/yt/pull/3262>`__, PR
#\ `3263 <https://github.com/yt-project/yt/pull/3263>`__, PR
#\ `3277 <https://github.com/yt-project/yt/pull/3277>`__, PR
#\ `3309 <https://github.com/yt-project/yt/pull/3309>`__, and PR
#\ `3336 <https://github.com/yt-project/yt/pull/3336>`__)
- ``yt.set_log_level`` (PR
#\ `2869 <https://github.com/yt-project/yt/pull/2869>`__ and PR
#\ `3094 <https://github.com/yt-project/yt/pull/3094>`__)
- ``list_annotations`` method for plots (PR
#\ `2562 <https://github.com/yt-project/yt/pull/2562>`__)
API improvements
^^^^^^^^^^^^^^^^
- ``yt.load`` with support for ``os.PathLike`` objects, improved UX,
and a move to the new ``yt.loaders`` module, along with sibling functions (PR
#\ `2405 <https://github.com/yt-project/yt/pull/2405>`__, PR
#\ `2722 <https://github.com/yt-project/yt/pull/2722>`__, PR
#\ `2695 <https://github.com/yt-project/yt/pull/2695>`__, PR
#\ `2818 <https://github.com/yt-project/yt/pull/2818>`__, and PR
#\ `2831 <https://github.com/yt-project/yt/pull/2831>`__, PR
#\ `2832 <https://github.com/yt-project/yt/pull/2832>`__)
- ``Dataset`` now has a more useful repr (PR
#\ `3217 <https://github.com/yt-project/yt/pull/3217>`__)
- Explicit JPEG export support (PR
#\ `2549 <https://github.com/yt-project/yt/pull/2549>`__)
- ``annotate_clear`` is now ``clear_annotations`` (PR
#\ `2569 <https://github.com/yt-project/yt/pull/2569>`__)
- Throw an error if field access is ambiguous (PR
#\ `2967 <https://github.com/yt-project/yt/pull/2967>`__)
Newly supported data formats
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Arepo
~~~~~
- PR #\ `1807 <https://github.com/yt-project/yt/pull/1807>`__
- PR #\ `2236 <https://github.com/yt-project/yt/pull/2236>`__
- PR #\ `2244 <https://github.com/yt-project/yt/pull/2244>`__
- PR #\ `2344 <https://github.com/yt-project/yt/pull/2344>`__
- PR #\ `2434 <https://github.com/yt-project/yt/pull/2434>`__
- PR #\ `3258 <https://github.com/yt-project/yt/pull/3258>`__
- PR #\ `3265 <https://github.com/yt-project/yt/pull/3265>`__
- PR #\ `3291 <https://github.com/yt-project/yt/pull/3291>`__
Swift
~~~~~
- PR #\ `1962 <https://github.com/yt-project/yt/pull/1962>`__
Improved support and frontend specific bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
adaptahop
~~~~~~~~~
- PR #\ `2678 <https://github.com/yt-project/yt/pull/2678>`__
AMRVAC
~~~~~~
- PR #\ `2541 <https://github.com/yt-project/yt/pull/2541>`__
- PR #\ `2745 <https://github.com/yt-project/yt/pull/2745>`__
- PR #\ `2746 <https://github.com/yt-project/yt/pull/2746>`__
- PR #\ `3215 <https://github.com/yt-project/yt/pull/3215>`__
ART
~~~
- PR #\ `2688 <https://github.com/yt-project/yt/pull/2688>`__
ARTIO
~~~~~
- PR #\ `2613 <https://github.com/yt-project/yt/pull/2613>`__
Athena++
~~~~~~~~
- PR #\ `2985 <https://github.com/yt-project/yt/pull/2985>`__
Boxlib
~~~~~~
- PR #\ `2807 <https://github.com/yt-project/yt/pull/2807>`__
- PR #\ `2814 <https://github.com/yt-project/yt/pull/2814>`__
- PR #\ `2938 <https://github.com/yt-project/yt/pull/2938>`__ (AMReX)
Enzo-E (formerly Enzo-P)
~~~~~~~~~~~~~~~~~~~~~~~~
- PR #\ `3273 <https://github.com/yt-project/yt/pull/3273>`__
- PR #\ `3274 <https://github.com/yt-project/yt/pull/3274>`__
- PR #\ `3290 <https://github.com/yt-project/yt/pull/3290>`__
- PR #\ `3372 <https://github.com/yt-project/yt/pull/3372>`__
fits
~~~~
- PR #\ `2246 <https://github.com/yt-project/yt/pull/2246>`__
- PR #\ `2345 <https://github.com/yt-project/yt/pull/2345>`__
Gadget
~~~~~~
- PR #\ `2145 <https://github.com/yt-project/yt/pull/2145>`__
- PR #\ `3233 <https://github.com/yt-project/yt/pull/3233>`__
- PR #\ `3258 <https://github.com/yt-project/yt/pull/3258>`__
Gadget FOF Halo
~~~~~~~~~~~~~~~
- PR #\ `2296 <https://github.com/yt-project/yt/pull/2296>`__
GAMER
~~~~~
- PR #\ `3033 <https://github.com/yt-project/yt/pull/3033>`__
Gizmo
~~~~~
- PR #\ `3234 <https://github.com/yt-project/yt/pull/3234>`__
MOAB
~~~~
- PR #\ `2856 <https://github.com/yt-project/yt/pull/2856>`__
Owls
~~~~
- PR #\ `3325 <https://github.com/yt-project/yt/pull/3325>`__
Ramses
~~~~~~
- PR #\ `2679 <https://github.com/yt-project/yt/pull/2679>`__
- PR #\ `2714 <https://github.com/yt-project/yt/pull/2714>`__
- PR #\ `2960 <https://github.com/yt-project/yt/pull/2960>`__
- PR #\ `3017 <https://github.com/yt-project/yt/pull/3017>`__
- PR #\ `3018 <https://github.com/yt-project/yt/pull/3018>`__
Tipsy
~~~~~
- PR #\ `2193 <https://github.com/yt-project/yt/pull/2193>`__
Octree Frontends
~~~~~~~~~~~~~~~~
- Ghost zone access (PR
#\ `2425 <https://github.com/yt-project/yt/pull/2425>`__ and PR
#\ `2958 <https://github.com/yt-project/yt/pull/2958>`__)
- Volume Rendering (PR
#\ `2610 <https://github.com/yt-project/yt/pull/2610>`__)
Configuration file
^^^^^^^^^^^^^^^^^^
- Config files are now in `TOML <https://toml.io/en/>`__ (PR
#\ `2981 <https://github.com/yt-project/yt/pull/2981>`__)
- Allow a local plugin file (PR
#\ `2534 <https://github.com/yt-project/yt/pull/2534>`__)
- Allow per-field local config (PR
#\ `1931 <https://github.com/yt-project/yt/pull/1931>`__)
yt CLI
^^^^^^
- Fix broken command-line options (PR
#\ `3361 <https://github.com/yt-project/yt/pull/3361>`__)
- Drop yt hub command (PR
#\ `3363 <https://github.com/yt-project/yt/pull/3363>`__)
Deprecations
^^^^^^^^^^^^
- Smoothed fields are no longer necessary (PR
#\ `2194 <https://github.com/yt-project/yt/pull/2194>`__)
- Energy and momentum field names are more accurate (PR
#\ `3059 <https://github.com/yt-project/yt/pull/3059>`__)
- Incorrectly-named ``WeightedVariance`` is now
``WeightedStandardDeviation`` and the old name has been deprecated
(PR #\ `3132 <https://github.com/yt-project/yt/pull/3132>`__)
- Colormap auto-registration has been changed and yt 4.1 will not
register ``cmocean`` (PR
#\ `3175 <https://github.com/yt-project/yt/pull/3175>`__ and PR
#\ `3214 <https://github.com/yt-project/yt/pull/3214>`__)
Removals
~~~~~~~~
- ``analysis_modules`` has been
`extracted <https://github.com/yt-project/yt_astro_analysis/>`__ (PR
#\ `2081 <https://github.com/yt-project/yt/pull/2081>`__)
- Interactive volume rendering has been
`extracted <https://github.com/yt-project/yt_idv/>`__ (PR
#\ `2896 <https://github.com/yt-project/yt/pull/2896>`__)
- The bundled version of ``poster`` has been removed (PR
#\ `2783 <https://github.com/yt-project/yt/pull/2783>`__)
- The deprecated ``particle_position_relative`` field has been removed
(PR #\ `2901 <https://github.com/yt-project/yt/pull/2901>`__)
- Deprecated functions have been removed (PR
#\ `3007 <https://github.com/yt-project/yt/pull/3007>`__)
- Vendored packages have been removed (PR
#\ `3008 <https://github.com/yt-project/yt/pull/3008>`__)
- ``yt.pmods`` has been removed (PR
#\ `3061 <https://github.com/yt-project/yt/pull/3061>`__)
- yt now utilizes unyt as an external package (PR
#\ `2219 <https://github.com/yt-project/yt/pull/2219>`__, PR
#\ `2300 <https://github.com/yt-project/yt/pull/2300>`__, and PR
#\ `2303 <https://github.com/yt-project/yt/pull/2303>`__)
Version 3.6.1
-------------
Version 3.6.1 is a bugfix release. It includes the following backport:
- hotfix: support matplotlib 3.3.0.
See `PR 2754 <https://github.com/yt-project/yt/pull/2754>`__.
Version 3.6.0
-------------
Version 3.6.0 is our next major release since 3.5.1, which came out in February
2019. It includes roughly 180 pull requests contributed from 39 contributors,
22 of which committed for their first time to the project.
We have also updated our project governance and contribution guidelines, which
you can `view here <https://yt-project.github.io/governance/>`_ .
We'd like to thank all of the individuals who contributed to this release. There
are lots of new features and we're excited to share them with the community.
Breaking Changes
^^^^^^^^^^^^^^^^
The following breaking change was introduced. Please be aware that this could
impact your code if you use this feature.
- The angular momentum has been reversed compared to previous versions of yt.
See `PR 2043 <https://github.com/yt-project/yt/pull/2043>`__.
Major Changes and New Features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- New frontend support for the code AMRVAC. Many thanks to Clément Robert
and Niels Claes who were major contributors to this initiative. Relevant PRs include
- Initial PR to support AMRVAC native data files
`PR 2321 <https://github.com/yt-project/yt/pull/2321>`__.
- added support for dust fields and derived fields
`PR 2387 <https://github.com/yt-project/yt/pull/2387>`__.
- added support for derived fields for hydro runs
`PR 2381 <https://github.com/yt-project/yt/pull/2381>`__.
- API documentation and docstrings for AMRVAC frontend
`PR 2384 <https://github.com/yt-project/yt/pull/2384>`__,
`PR 2380 <https://github.com/yt-project/yt/pull/2380>`__,
`PR 2382 <https://github.com/yt-project/yt/pull/2382>`__.
- testing-related PRs for AMRVAC:
`PR 2379 <https://github.com/yt-project/yt/pull/2379>`__,
`PR 2360 <https://github.com/yt-project/yt/pull/2360>`__.
- add verbosity to logging of geometry or ``geometry_override``
`PR 2421 <https://github.com/yt-project/yt/pull/2421>`__.
- add attribute to ``_code_unit_attributes`` specific to AMRVAC to ensure
consistent renormalisation of AMRVAC datasets. See
`PR 2357 <https://github.com/yt-project/yt/pull/2357>`__.
- parse AMRVAC's parfiles if user-provided
`PR 2369 <https://github.com/yt-project/yt/pull/2369>`__.
- ensure that min_level reflects dataset that has refinement
`PR 2475 <https://github.com/yt-project/yt/pull/2475>`__.
- fix derived unit parsing `PR 2362 <https://github.com/yt-project/yt/pull/2362>`__.
- update energy field to be ``energy_density`` and have units of code
pressure `PR 2376 <https://github.com/yt-project/yt/pull/2376>`__.
- Support for the AdaptaHOP halo finder code
`PR 2385 <https://github.com/yt-project/yt/pull/2385>`__.
- yt now supports geographic transforms and projections of data with
cartopy with support from `PR 1966 <https://github.com/yt-project/yt/pull/1966>`__.
- annotations used to work for only a single point; they now work for multiple points
on a plot, see `PR 2122 <https://github.com/yt-project/yt/pull/2122>`__.
- cosmology calculations now have support for the relativistic energy density of the
universe, see `PR 1714 <https://github.com/yt-project/yt/pull/1714>`__.
This feature is accessible to cosmology datasets and was added to the Enzo frontend.
- the eps writer now allows for arrow rotation. This is accessible with
the ``rotate`` kwarg in the ``arrow`` function.
See `PR 2151 <https://github.com/yt-project/yt/pull/2151>`__.
- allow for dynamic load balancing with parallel loading of timeseries
data using the ``dynamic`` kwarg. `PR 2149 <https://github.com/yt-project/yt/pull/2149>`__.
- show/hide colorbar and show/hide axes are now available for
``ProfilePlot`` s. These functions were also moved from the PlotWindow to the
PlotContainer class. `PR 2169 <https://github.com/yt-project/yt/pull/2169>`__.
- add support for ipywidgets with an ``__ipython_display__`` method on the
FieldTypeContainer. Field variables, source, and the field array can be
viewed with this widget. See PRs `PR 1844 <https://github.com/yt-project/yt/pull/1844>`__
and `PR 1848 <https://github.com/yt-project/yt/pull/1848>`__,
or try ``display(ds.fields)`` in a Jupyter notebook.
- cut regions can now be made with ``exclude_`` and ``include_`` on a number of objects,
including above and below values, inside or outside regions, equal values, or nans.
See `PR 1964 <https://github.com/yt-project/yt/pull/1964>`__ and supporting
documentation fix at `PR 2262 <https://github.com/yt-project/yt/pull/2262>`__.
- previously aliased fluid vector fields in curvilinear geometries were not
converted to curvilinear coordinates, this was addressed in
`PR 2105 <https://github.com/yt-project/yt/pull/2105>`__.
- 2d polar and 3d cylindrical geometries now support annotate_quivers,
streamlines, line integral convolutions, see
`PR 2105 <https://github.com/yt-project/yt/pull/2105>`__.
- add support for exporting data to firefly `PR 2190 <https://github.com/yt-project/yt/pull/2190>`__.
- gradient fields are now supported in curvilinear geometries. See
`PR 2483 <https://github.com/yt-project/yt/pull/2483>`__.
- plotwindow colorbars now utilize mathtext in their labels,
from `PR 2516 <https://github.com/yt-project/yt/pull/2516>`__.
- raise deprecation warning when using ``mylog.warn``. Instead use
``mylog.warning``. See `PR 2285 <https://github.com/yt-project/yt/pull/2285>`__.
- extend support of the ``marker``, ``text``, ``line`` and ``sphere`` annotation
callbacks to polar geometries `PR 2466 <https://github.com/yt-project/yt/pull/2466>`__.
- Support MHD in the GAMER frontend `PR 2306 <https://github.com/yt-project/yt/pull/2306>`__.
- Export data container and profile fields to AstroPy QTables and
pandas DataFrames `PR 2418 <https://github.com/yt-project/yt/pull/2418>`__.
- Add turbo colormap, a colorblind safe version of jet. See
`PR 2339 <https://github.com/yt-project/yt/pull/2339>`__.
- Enable exporting regular grids (i.e., covering grids, arbitrary grids and
smoothed grids) to ``xarray`` `PR 2294 <https://github.com/yt-project/yt/pull/2294>`__.
- add automatic loading of ``namelist.txt``, which contains the parameter file
RAMSES uses to produce output `PR 2347 <https://github.com/yt-project/yt/pull/2347>`__.
- adds support for a nearest neighbor value field, accessible with
the ``add_nearest_neighbor_value_field`` function for particle fields. See
`PR 2301 <https://github.com/yt-project/yt/pull/2301>`__.
- speed up mesh deposition (uses caching) `PR 2136 <https://github.com/yt-project/yt/pull/2136>`__.
- speed up ghost zone generation. `PR 2403 <https://github.com/yt-project/yt/pull/2403>`__.
- ensure that a series dataset has kwargs passed down to data objects `PR 2366 <https://github.com/yt-project/yt/pull/2366>`__.
Documentation Changes
^^^^^^^^^^^^^^^^^^^^^
Our documentation has received some attention in the following PRs:
- include donation/funding links in README `PR 2520 <https://github.com/yt-project/yt/pull/2520>`__.
- Included instructions on how to install yt on the
Intel Distribution `PR 2355 <https://github.com/yt-project/yt/pull/2355>`__.
- include documentation on package vendors `PR 2494 <https://github.com/yt-project/yt/pull/2494>`__.
- update links to yt hub cookbooks `PR 2477 <https://github.com/yt-project/yt/pull/2477>`__.
- include relevant API docs in .gitignore `PR 2467 <https://github.com/yt-project/yt/pull/2467>`__.
- added docstrings for volume renderer cython code. see
`PR 2456 <https://github.com/yt-project/yt/pull/2456>`__ and
for `PR 2449 <https://github.com/yt-project/yt/pull/2449>`__.
- update documentation install recommendations to include newer
python versions `PR 2452 <https://github.com/yt-project/yt/pull/2452>`__.
- update custom CSS on docs to sphinx >=1.6.1. See
`PR 2199 <https://github.com/yt-project/yt/pull/2199>`__.
- enhancing the contribution documentation on git, see
`PR 2420 <https://github.com/yt-project/yt/pull/2420>`__.
- update documentation to correctly reference issues suitable for new
contributors `PR 2346 <https://github.com/yt-project/yt/pull/2346>`__.
- fix URLs and spelling errors in a number of the cookbook notebooks
`PR 2341 <https://github.com/yt-project/yt/pull/2341>`__.
- update release docs to include information about building binaries, tagging,
and various upload locations. See
`PR 2156 <https://github.com/yt-project/yt/pull/2156>`__ and
`PR 2160 <https://github.com/yt-project/yt/pull/2160>`__.
- ensuring the ``load_octree`` API docs are rendered
`PR 2088 <https://github.com/yt-project/yt/pull/2088>`__.
- fixing doc build errors, see: `PR 2077 <https://github.com/yt-project/yt/pull/2077>`__.
- add an instruction to the doc about continuous mesh colormap
`PR 2358 <https://github.com/yt-project/yt/pull/2358>`__.
- Fix minor typo `PR 2327 <https://github.com/yt-project/yt/pull/2327>`__.
- Fix some docs examples `PR 2316 <https://github.com/yt-project/yt/pull/2316>`__.
- fix sphinx formatting `PR 2409 <https://github.com/yt-project/yt/pull/2409>`__.
- Improve doc and fix docstring in deposition
`PR 2453 <https://github.com/yt-project/yt/pull/2453>`__.
- Update documentation to reflect usage of rcfile (no brackets allowed),
including strings. See `PR 2440 <https://github.com/yt-project/yt/pull/2440>`__.
Minor Enhancements and Bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- update pressure units in artio frontend (they were unitless
previously) `PR 2521 <https://github.com/yt-project/yt/pull/2521>`__.
- ensure that modules supported by ``on_demand_imports`` are imported
with that functionality `PR 2436 <https://github.com/yt-project/yt/pull/2436/files>`__.
- fix issues with groups in python3 in Ramses frontend
`PR 2092 <https://github.com/yt-project/yt/pull/2092>`__.
- add tests to ytdata frontend api `PR 2075 <https://github.com/yt-project/yt/pull/2075>`__.
- update internal field usage from ``particle_{}_relative`` to ``relative_particle_{}``
so particle-based fields don't see deprecation warnings
see `PR 2073 <https://github.com/yt-project/yt/pull/2073>`__.
- update save of ``field_data`` in clump finder, see
`PR 2079 <https://github.com/yt-project/yt/pull/2079>`__.
- ensure map.js is included in the sdist for mapserver. See
`PR 2158 <https://github.com/yt-project/yt/pull/2158>`__.
- add wrapping around ``yt_astro_analysis`` where it is used, in case it
isn't installed `PR 2159 <https://github.com/yt-project/yt/pull/2159>`__.
- the contour finder now uses a maximum data value supplied by the user,
rather than assuming the maximum value in the data container.
Previously this caused issues in the clump finder.
See `PR 2170 <https://github.com/yt-project/yt/pull/2170>`__.
- previously ramses data with non-hilbert ordering crashed.
fixed by `PR 2200 <https://github.com/yt-project/yt/pull/2200>`__.
- fix an issue related to creating a ds9 region with
FITS `PR 2335 <https://github.com/yt-project/yt/pull/2335>`__.
- add a check to see if pluginfilename is specified in
ytrc `PR 2319 <https://github.com/yt-project/yt/pull/2319>`__.
- sort .so input file list so that the yt package builds in a reproducible
way `PR 2206 <https://github.com/yt-project/yt/pull/2206>`__.
- update ``stack`` ufunc usage to include ``axis`` kwarg.
See `PR 2204 <https://github.com/yt-project/yt/pull/2204>`__.
- extend support for field names in RAMSES descriptor file to include all names
that don't include a comma. See `PR 2202 <https://github.com/yt-project/yt/pull/2202>`__.
- ``set_buff_size`` now works for ``OffAxisProjectionPlot``,
see `PR 2239 <https://github.com/yt-project/yt/pull/2239>`__.
- fix chunking for chained cut regions. previously chunking commands would
only look at the most recent cut region conditionals, and not any of the
previous cut regions. See `PR 2234 <https://github.com/yt-project/yt/pull/2234>`__.
- update git command in Castro frontend to
include ``git describe`` `PR 2235 <https://github.com/yt-project/yt/pull/2235>`__.
- in datasets with a single oct correctly guess the shape of the
array `PR 2241 <https://github.com/yt-project/yt/pull/2241>`__.
- update ``get_yt_version`` function to support python 3.
See `PR 2226 <https://github.com/yt-project/yt/pull/2226>`__.
- the ``"stream"`` frontend now correctly returns ``min_level`` for the mesh refinement.
`PR 2519 <https://github.com/yt-project/yt/pull/2519>`__.
- region expressions (``ds.r[]``) can now be used on 2D
datasets `PR 2482 <https://github.com/yt-project/yt/pull/2482>`__.
- background colors in cylindrical coordinate plots are now set
correctly `PR 2517 <https://github.com/yt-project/yt/pull/2517>`__.
- Utilize current matplotlib interface for the ``_png`` module to write
images to disk `PR 2514 <https://github.com/yt-project/yt/pull/2514>`__.
- fix issue with fortran utils where empty records were not
supported `PR 2259 <https://github.com/yt-project/yt/pull/2259>`__.
- add support for python 3.7 in iterator used by dynamic parallel
loading `PR 2265 <https://github.com/yt-project/yt/pull/2265>`__.
- add support to handle boxlib data where ``raw_fields`` contain
ghost zones `PR 2255 <https://github.com/yt-project/yt/pull/2255>`__.
- update quiver fields to use native units, not assuming
cgs `PR 2292 <https://github.com/yt-project/yt/pull/2292>`__.
- fix annotations on semi-structured mesh data with
exodus II `PR 2274 <https://github.com/yt-project/yt/pull/2274>`__.
- extend support for loading exodus II data
`PR 2274 <https://github.com/yt-project/yt/pull/2274>`__.
- add support for yt to load data generated by WarpX code that
includes ``rigid_injected`` species `PR 2289 <https://github.com/yt-project/yt/pull/2289>`__.
- fix issue in GAMER frontend where periodic boundary conditions were not
identified `PR 2287 <https://github.com/yt-project/yt/pull/2287>`__.
- fix issue in ytdata frontend where data size was calculated to have size
``(nparticles, dimensions)``. Now updated to use
``(nparticles, nparticles, dimensions)``.
see `PR 2280 <https://github.com/yt-project/yt/pull/2280>`__.
- extend support for OpenPMD frontend to load data containing no particles
see `PR 2270 <https://github.com/yt-project/yt/pull/2270>`__.
- raise a meaningful error on negative and zero zooming factors,
see `PR 2443 <https://github.com/yt-project/yt/pull/2443>`__.
- ensure Datasets are consistent in their ``min_level`` attribute.
See `PR 2478 <https://github.com/yt-project/yt/pull/2478>`__.
- adding matplotlib to trove classifiers `PR 2473 <https://github.com/yt-project/yt/pull/2473>`__.
- Add support for saving additional formats supported by
matplotlib `PR 2318 <https://github.com/yt-project/yt/pull/2318>`__.
- add support for numpy 1.18.1 and help ensure consistency with unyt
`PR 2448 <https://github.com/yt-project/yt/pull/2448>`__.
- add support for spherical geometries in ``plot_2d``. See
`PR 2371 <https://github.com/yt-project/yt/pull/2371>`__.
- add support for sympy 1.5 `PR 2407 <https://github.com/yt-project/yt/pull/2407>`__.
- backporting unyt PR 102 for clip `PR 2329 <https://github.com/yt-project/yt/pull/2329>`__.
- allow code units in fields ``jeans_mass`` and ``dynamical_time``.
See `PR 2454 <https://github.com/yt-project/yt/pull/2454>`__.
- fix for the case where boxlib nghost is different in different
directions `PR 2343 <https://github.com/yt-project/yt/pull/2343>`__.
- bugfix for numpy 1.18 `PR 2419 <https://github.com/yt-project/yt/pull/2419>`__.
- Invoke ``_setup_dx`` in the enzo inline analysis. See
`PR 2460 <https://github.com/yt-project/yt/pull/2460>`__.
- Update annotate_timestamp to work with ``"code"`` unit system. See
`PR 2435 <https://github.com/yt-project/yt/pull/2435>`__.
- use ``dict.get`` to pull attributes that may not exist in ytdata
frontend `PR 2471 <https://github.com/yt-project/yt/pull/2471>`__.
- solved bug related to slicing out ghost cells in
chombo `PR 2388 <https://github.com/yt-project/yt/pull/2388>`__.
- correctly register reversed versions of cmocean
cmaps `PR 2390 <https://github.com/yt-project/yt/pull/2390>`__.
- correctly set plot axes units to ``"code length"`` for datasets
loaded with ``unit_system="code"`` `PR 2354 <https://github.com/yt-project/yt/pull/2354>`__.
- deprecate ``ImagePlotContainer.set_cbar_minorticks``. See
`PR 2444 <https://github.com/yt-project/yt/pull/2444>`__.
- enzo-e frontend bugfix for single block datasets. See
`PR 2424 <https://github.com/yt-project/yt/pull/2424>`__.
- explicitly default to solid lines in contour callback. See
`PR 2330 <https://github.com/yt-project/yt/pull/2330>`__.
- replace all bare ``Except`` statements `PR 2474 <https://github.com/yt-project/yt/pull/2474>`__.
- fix an inconsistency between ``argmax`` and ``argmin`` methods in
YTDataContainer class `PR 2457 <https://github.com/yt-project/yt/pull/2457>`__.
- fixed extra extension added by ``ImageArray.save()``. See
`PR 2364 <https://github.com/yt-project/yt/pull/2364>`__.
- fix incorrect usage of ``is`` comparison with ``==`` comparison throughout the codebase
`PR 2351 <https://github.com/yt-project/yt/pull/2351>`__.
- fix streamlines ``_con_args`` attribute `PR 2470 <https://github.com/yt-project/yt/pull/2470>`__.
- fix python 3.8 warnings `PR 2386 <https://github.com/yt-project/yt/pull/2386>`__.
- fix some invalid escape sequences. `PR 2488 <https://github.com/yt-project/yt/pull/2488>`__.
- fix typo in ``_vorticity_z`` field definition. See
`PR 2398 <https://github.com/yt-project/yt/pull/2398>`__.
- fix an inconsistency in annotate_sphere callback.
See `PR 2464 <https://github.com/yt-project/yt/pull/2464>`__.
- initialize unstructured mesh visualization
background to ``nan`` `PR 2308 <https://github.com/yt-project/yt/pull/2308>`__.
- raise a meaningful error on negative and zero
zooming factors `PR 2443 <https://github.com/yt-project/yt/pull/2443>`__.
- set ``symlog`` scaling to ``log`` if ``vmin > 0``.
See `PR 2485 <https://github.com/yt-project/yt/pull/2485>`__.
- skip blank lines when reading parameters.
See `PR 2406 <https://github.com/yt-project/yt/pull/2406>`__.
- Update magnetic field handling for RAMSES.
See `PR 2377 <https://github.com/yt-project/yt/pull/2377>`__.
- Update ARTIO frontend to support compressed files.
See `PR 2314 <https://github.com/yt-project/yt/pull/2314>`__.
- Use mirror copy of SDF data `PR 2334 <https://github.com/yt-project/yt/pull/2334>`__.
- Use sorted glob in athena to ensure reproducible ordering of
grids `PR 2363 <https://github.com/yt-project/yt/pull/2363>`__.
- fix cartopy failures by ensuring data is in lat/lon when passed to
cartopy `PR 2378 <https://github.com/yt-project/yt/pull/2378>`__.
- enforce unit consistency in plot callbacks, which fixes some unexpected
behaviour in the plot annotations callbacks that use the plot
window width or the data width `PR 2524 <https://github.com/yt-project/yt/pull/2524>`__.
Separate from our list of minor enhancements and bugfixes, we've grouped PRs
related to infrastructure and testing in the next three sub-sub-sub sections.
Testing and Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~
- infrastructure to change our testing from nose to pytest, see
`PR 2401 <https://github.com/yt-project/yt/pull/2401>`__.
- Adding test_requirements and test_minimum requirements files to have
bounds on installed testing versioning `PR 2083 <https://github.com/yt-project/yt/pull/2083>`__.
- Update the test failure report to include all failed tests related
to a single test specification `PR 2084 <https://github.com/yt-project/yt/pull/2084>`__.
- add required dependencies for docs testing on Jenkins. See
`PR 2090 <https://github.com/yt-project/yt/pull/2090>`__.
- suppress pyyaml warning that pops up when running
tests `PR 2182 <https://github.com/yt-project/yt/pull/2182>`__.
- add tests for pre-existing ytdata datasets. See
`PR 2229 <https://github.com/yt-project/yt/pull/2229>`__.
- add a test to check if cosmology calculator and cosmology dataset
share the same unit registry `PR 2230 <https://github.com/yt-project/yt/pull/2230>`__.
- fix kh2d test name `PR 2342 <https://github.com/yt-project/yt/pull/2342>`__.
- disable OSNI projection answer test to remove cartopy errors `PR 2350 <https://github.com/yt-project/yt/pull/2350>`__.
CI related support
~~~~~~~~~~~~~~~~~~
- disable coverage on OSX to speed up travis testing and avoid
timeouts `PR 2076 <https://github.com/yt-project/yt/pull/2076>`__.
- update travis base images on Linux and
MacOSX `PR 2093 <https://github.com/yt-project/yt/pull/2093>`__.
- add ``W504`` and ``W605`` to ignored flake8 errors, see
  `PR 2078 <https://github.com/yt-project/yt/pull/2078>`__.
- update pyyaml version in ``test_requirements.txt`` file to address
  github warning `PR 2148 <https://github.com/yt-project/yt/pull/2148/files>`__.
- fix travis build errors resulting from numpy and cython being
unavailable `PR 2171 <https://github.com/yt-project/yt/pull/2171>`__.
- fix appveyor build failures `PR 2231 <https://github.com/yt-project/yt/pull/2231>`__.
- Add Python 3.7 and Python 3.8 to CI test jobs. See
`PR 2450 <https://github.com/yt-project/yt/pull/2450>`__.
- fix build failure on Windows `PR 2333 <https://github.com/yt-project/yt/pull/2333>`__.
- fix warnings due to travis configuration file. See
`PR 2451 <https://github.com/yt-project/yt/pull/2451>`__.
- install pyyaml on appveyor `PR 2367 <https://github.com/yt-project/yt/pull/2367>`__.
- install sympy 1.4 on appveyor to work around regression in
1.5 `PR 2395 <https://github.com/yt-project/yt/pull/2395>`__.
- update CI recipes to fix recent failures `PR 2489 <https://github.com/yt-project/yt/pull/2489>`__.
Other Infrastructure
~~~~~~~~~~~~~~~~~~~~
- Added a welcome bot to our GitHub page for new contributors, see
`PR 2181 <https://github.com/yt-project/yt/pull/2181>`__.
- Added a pep8 bot that runs before the tests, see
`PR 2179 <https://github.com/yt-project/yt/pull/2179>`__,
`PR 2184 <https://github.com/yt-project/yt/pull/2184>`__ and
`PR 2185 <https://github.com/yt-project/yt/pull/2185>`__.
Version 3.5.0
-------------
Version 3.5.0 is the first major release of yt since August 2017. It includes
328 pull requests from 41 contributors, including 22 new contributors.
Major Changes
^^^^^^^^^^^^^
- ``yt.analysis_modules`` has been deprecated in favor of the new
``yt_astro_analysis`` package. New features and new astronomy-specific
analysis modules will go into ``yt_astro_analysis`` and importing from
``yt.analysis_modules`` will raise a noisy warning. We will remove
``yt.analysis_modules`` in a future release. See `PR 1938
<https://github.com/yt-project/yt/pull/1938>`__.
- Vector fields and derived fields depending on vector fields have been
systematically updated to account for a bulk correction field parameter. For
example, for the velocity field, all derived fields that depend on velocity
will now account for the ``"bulk_velocity"`` field parameter. In addition, we
have defined ``"relative_velocity"`` and ``"relative_magnetic_field"`` fields
  that include the bulk correction. Both of these are vector fields; to access
  the components, use e.g. ``"relative_velocity_x"`` (see the short sketch
  following this list). The
``"particle_position_relative"`` and ``"particle_velocity_relative"`` fields
have been deprecated. See `PR 1693
<https://github.com/yt-project/yt/pull/1693>`__ and `PR 2022
<https://github.com/yt-project/yt/pull/2022>`__.
- Aliases to spatial fields with the ``"gas"`` field type will now be returned
in the default unit system for the dataset. As an example the ``"x"`` field
might resolve to the field tuples ``("index", "x")`` or ``("gas",
"x")``. Accessing the former will return data in code units while the latter
will return data in whatever unit system the dataset is configured to use
(CGS, by default). This means that to ensure the units of a spatial field will
always be consistent, one must access the field as a tuple, explicitly
specifying the field type. Accessing a spatial field using a string field name
may return data in either code units or the dataset's default unit system
depending on the history of field accesses prior to accessing that field. In
the future accessing fields using an ambiguous field name will raise an
error. See `PR 1799 <https://github.com/yt-project/yt/pull/1799>`__ and `PR
1850 <https://github.com/yt-project/yt/pull/1850>`__.
- The ``max_level`` and ``min_level`` attributes of yt data objects now
correctly update the state of the underlying data objects when set. In
addition we have added an example to the cookbook that shows how to downsample
AMR data using this functionality. See `PR 1737
<https://github.com/yt-project/yt/pull/1737>`__.
- It is now possible to customize the formatting of labels for ion species
fields. Rather than using the default spectroscopic notation, one can call
``ds.set_field_label_format("ionization_label", "plus_minus")`` to use the
more traditional notation where ionization state is indicated with ``+`` and
``-`` symbols. See `PR 1867 <https://github.com/yt-project/yt/pull/1867>`__.
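
As a brief illustration of the field-access changes above, here is a minimal
sketch assuming the ``IsolatedGalaxy`` sample dataset (any dataset with gas
velocity fields would do). It shows the bulk-corrected ``"relative_velocity"``
fields, explicit field tuples, and the new ion label formatting:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("c", (20, "kpc"))

   # Velocity-dependent fields honor the "bulk_velocity" field parameter;
   # the "relative_velocity_x" component includes the bulk correction.
   sp.set_field_parameter("bulk_velocity", sp.quantities.bulk_velocity())
   vx_rel = sp["gas", "relative_velocity_x"]

   # Spatial fields should be accessed with an explicit field type:
   # ("index", "x") returns code units, ("gas", "x") uses the dataset's
   # unit system (CGS by default).
   x_code = sp["index", "x"]
   x_cgs = sp["gas", "x"]

   # Ion species labels can use "+"/"-" notation instead of the default
   # spectroscopic notation.
   ds.set_field_label_format("ionization_label", "plus_minus")
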
Improvements to the RAMSES frontend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We would particularly like to recognize Corentin Cadiou for his tireless work over the past year on improving support for RAMSES and octree AMR data in yt.
- Added support for reading RAMSES sink particles. See `PR 1548
<https://github.com/yt-project/yt/pull/1548>`__.
- Add support for the new self-describing Ramses particle output format. See `PR
1616 <https://github.com/yt-project/yt/pull/1616>`__.
- It is now possible to restrict the domain of a loaded RAMSES dataset by
  passing a ``bbox`` keyword argument to ``yt.load()``. If passed, this
  corresponds to the coordinates of two opposite corners of the subvolume to
  load; data outside the bounding box will be ignored (see the brief sketch
  after this list). This is useful for loading very large RAMSES datasets where
  yt currently scales poorly. See `PR 1637 <https://github.com/yt-project/yt/pull/1637>`__.
- The Ramses ``"particle_birth_time"`` field now contains the time when star
particles form in a simulation in CGS units, formerly these times were only
accessible via the incorrectly named ``"particle_age"`` field in conformal
units. Correspondingly the ``"particle_age"`` field has been deprecated. The
conformal birth time is not available via the ``"conformal_birth_time``"
field. See `PR 1649 <https://github.com/yt-project/yt/pull/1649>`__.
- Substantial performance improvement for reading RAMSES AMR data. See `PR 1671
<https://github.com/yt-project/yt/pull/1671>`__.
- The RAMSES frontend will now produce less voluminous logging feedback when
loading the dataset or reading data. This is particularly noticeable for very
large datasets with many CPU files. See `PR 1738
<https://github.com/yt-project/yt/pull/1738>`__.
- Avoid repeated parsing of RAMSES particle and RT descriptors. See `PR 1739
<https://github.com/yt-project/yt/pull/1739>`__.
- Added support for reading the RAMSES gravitational potential field. See `PR
1751 <https://github.com/yt-project/yt/pull/1751>`__.
- Add support for RAMSES datasets that use the ``groupsize`` feature. See `PR
1769 <https://github.com/yt-project/yt/pull/1769>`__.
- Dramatically improve the overall performance of the RAMSES frontend. See `PR
1771 <https://github.com/yt-project/yt/pull/1771>`__.
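
A minimal sketch of the new ``bbox`` keyword is below; the ``output_00080``
RAMSES sample dataset is assumed here, and the corner coordinates are given in
code units (the RAMSES domain spans 0 to 1):

.. code-block:: python

   import yt

   # Only load the central subvolume; data outside the bounding box is ignored.
   bbox = [[0.25, 0.25, 0.25], [0.75, 0.75, 0.75]]
   ds = yt.load("output_00080/info_00080.txt", bbox=bbox)
   ad = ds.all_data()
   print(ad["gas", "density"].shape)
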
Additional Improvements
^^^^^^^^^^^^^^^^^^^^^^^
- Added support for particle data in the Enzo-E frontend. See `PR 1490
<https://github.com/yt-project/yt/pull/1490>`__.
- Added an ``equivalence`` keyword argument to ``YTArray.in_units()`` and
  ``YTArray.to()``. This makes it possible to specify an equivalence when
  converting data to a new unit. Also added ``YTArray.to_value()`` which allows
  converting to a new unit, then stripping off the units to return a plain numpy
  array (a brief sketch appears at the end of this list). See
  `PR 1563 <https://github.com/yt-project/yt/pull/1563>`__.
- Rather than crashing, yt will now assume default values for cosmology
parameters in Gadget HDF5 data if it cannot find the relevant header
information. See `PR 1578
<https://github.com/yt-project/yt/pull/1578>`__.
- Improve detection for OpenMP support at compile-time, including adding support
for detecting OpenMP on Windows. See `PR 1591
<https://github.com/yt-project/yt/pull/1591>`__, `PR 1695
<https://github.com/yt-project/yt/pull/1695>`__ and `PR 1696
<https://github.com/yt-project/yt/pull/1696>`__.
- Add support for 2D cylindrical data for most plot callbacks. See `PR 1598
<https://github.com/yt-project/yt/pull/1598>`__.
- Particles outside the domain are now ignored by ``load_uniform_grid()`` and
``load_amr_grids()``. See `PR 1602
<https://github.com/yt-project/yt/pull/1602>`__.
- Fix incorrect units for the Gadget internal energy field in cosmology
simulations. See `PR 1611
<https://github.com/yt-project/yt/pull/1611>`__.
- Add support for calculating covering grids in parallel. See `PR 1612
<https://github.com/yt-project/yt/pull/1612>`__.
- The number of particles in a dataset loaded by the stream frontend (e.g. via
``load_uniform_grid``) no longer needs to be explicitly provided via the
``number_of_particles`` keyword argument, using the ``number_of_particles``
keyword will now generate a deprecation warning. See `PR 1620
<https://github.com/yt-project/yt/pull/1620>`__.
- Add support for non-cartesian GAMER data. See `PR 1622
<https://github.com/yt-project/yt/pull/1622>`__.
- If a particle filter depends on another particle filter, registering the
  dependent filter with a dataset will now also register the filter it depends
  on. See `PR 1624 <https://github.com/yt-project/yt/pull/1624>`__.
- The ``save()`` method of the various yt plot objects now optionally can accept
a tuple of strings instead of a string. If a tuple is supplied, the elements
are joined with ``os.sep`` to form a path. See `PR 1630
<https://github.com/yt-project/yt/pull/1630>`__.
- The quiver callback now accepts a ``plot_args`` keyword argument whose
  contents are passed through to matplotlib, allowing customization of the
  quiver plot. See `PR 1636 <https://github.com/yt-project/yt/pull/1636>`__.
- Updates and improvements for the OpenPMD frontend. See `PR 1645
<https://github.com/yt-project/yt/pull/1645>`__.
- The mapserver now works correctly under Python3 and has new features like a
colormap selector and plotting multiple fields via layers. See `PR 1654
<https://github.com/yt-project/yt/pull/1654>`__ and `PR 1668
<https://github.com/yt-project/yt/pull/1668>`__.
- Substantial performance improvement for calculating the gravitational
potential in the clump finder. See `PR 1684
<https://github.com/yt-project/yt/pull/1684>`__.
- Added new methods to ``ProfilePlot``: ``set_xlabel()``, ``set_ylabel()``,
``annotate_title()``, and ``annotate_text()``. See `PR 1700
<https://github.com/yt-project/yt/pull/1700>`__ and `PR 1705
<https://github.com/yt-project/yt/pull/1705>`__.
- Speedup for parallel halo finding operation for the FOF and HOP halo
finders. See `PR 1724 <https://github.com/yt-project/yt/pull/1724>`__.
- Add support for halo finding using the rockstar halo finder on Python3. See
`PR 1740 <https://github.com/yt-project/yt/pull/1740>`__.
- The ``ValidateParameter`` field validator has gained the ability for users to
explicitly specify the values of field parameters during field detection. This
makes it possible to write fields that access different sets of fields
depending on the value of the field parameter. For example, a field might
define an ``'axis'`` field parameter that can be either ``'x'``, ``'y'`` or
``'z'``. One can now explicitly tell the field detection system to access the
  field using all three values of ``'axis'`` (a brief sketch appears at the
  end of this list). This improvement avoids errors one would otherwise see
  when only one value, or an invalid value, of the field parameter is tested
  by yt. See `PR 1741
<https://github.com/yt-project/yt/pull/1741>`__.
- It is now legal to pass a dataset instance as the first argument to
``ProfilePlot`` and ``PhasePlot``. This is equivalent to passing
``ds.all_data()``.
- Functions that accept a ``(length, unit)`` tuple (e.g. ``(3, 'km')`` for 3
  kilometers) will no longer raise an error if ``length`` is a ``YTQuantity`` instance
with units attached. See `PR 1749
<https://github.com/yt-project/yt/pull/1749>`__.
- The ``annotate_timestamp`` plot annotation now optionally accepts a
``time_offset`` keyword argument that sets the zero point of the time
scale. Additionally, the ``annotate_scale`` plot annotation now accepts a
``format`` keyword argument, allowing custom formatting of the scale
annotation. See `PR 1755 <https://github.com/yt-project/yt/pull/1755>`__.
- Add support for magnetic field variables and creation time fields in the GIZMO
frontend. See `PR 1756 <https://github.com/yt-project/yt/pull/1756>`__ and `PR
1914 <https://github.com/yt-project/yt/pull/1914>`__.
- ``ParticleProjectionPlot`` now supports the ``annotate_particles`` plot
callback. See `PR 1765 <https://github.com/yt-project/yt/pull/1765>`__.
- Optimized the performance of off-axis projections for octree AMR data. See `PR
1766 <https://github.com/yt-project/yt/pull/1766>`__.
- Added support for several radiative transfer fields in the ARTIO frontend. See
`PR 1804 <https://github.com/yt-project/yt/pull/1804>`__.
- Performance improvement for Boxlib datasets that don't use AMR. See `PR 1834
<https://github.com/yt-project/yt/pull/1834>`__.
- It is now possible to set custom profile bin edges. See `PR 1837
<https://github.com/yt-project/yt/pull/1837>`__.
- Dropped support for Python3.4. See `PR 1840
<https://github.com/yt-project/yt/pull/1840>`__.
- Add support for reading RAMSES cooling fields. See `PR 1853
<https://github.com/yt-project/yt/pull/1853>`__.
- Add support for NumPy 1.15. See `PR 1854
<https://github.com/yt-project/yt/pull/1854>`__.
- Ensure that functions defined in the plugins file are available in the yt
namespace. See `PR 1855 <https://github.com/yt-project/yt/pull/1855>`__.
- Creating a profile with log-scaled bins where the bin edges are negative
or zero now raises an error instead of silently generating a corrupt,
incorrect answer. See `PR 1856
<https://github.com/yt-project/yt/pull/1856>`__.
- Systematically added validation for inputs to data object initializers. See
`PR 1871 <https://github.com/yt-project/yt/pull/1871>`__.
- It is now possible to select only a specific particle type in the particle
trajectories analysis module. See `PR 1887
<https://github.com/yt-project/yt/pull/1887>`__.
- Substantially improve the performance of selecting particle fields with a
``cut_region`` data object. See `PR 1892
<https://github.com/yt-project/yt/pull/1892>`__.
- The ``iyt`` command-line entry-point into IPython now installs yt-specific
tab-completions. See `PR 1900 <https://github.com/yt-project/yt/pull/1900>`__.
- Derived quantities have been systematically updated to accept a
``particle_type`` keyword argument, allowing easier analysis of only a single
particle type. See `PR 1902 <https://github.com/yt-project/yt/pull/1902>`__
and `PR 1922 <https://github.com/yt-project/yt/pull/1922>`__.
- The ``annotate_streamlines()`` function now accepts a ``display_threshold``
keyword argument. This suppresses drawing streamlines over any region of a
dataset where the field being displayed is less than the threshold. See `PR
1922 <https://github.com/yt-project/yt/pull/1922>`__.
- Add support for 2D nodal data. See `PR 1923
<https://github.com/yt-project/yt/pull/1923>`__.
- Add support for GAMER outputs that use patch groups. This substantially
reduces the memory requirements for loading large GAMER datasets. See `PR 1935
<https://github.com/yt-project/yt/pull/1935>`__.
- Add a ``data_source`` keyword argument to the ``annotate_particles`` plot
callback. See `PR 1937 <https://github.com/yt-project/yt/pull/1937>`__.
- Define species fields in the NMSU Art frontend. See `PR 1981
<https://github.com/yt-project/yt/pull/1981>`__.
- Added a ``__format__`` implementation for ``YTArray``. See `PR 1985
<https://github.com/yt-project/yt/pull/1985>`__.
- Derived fields that use a particle filter now only need to be derived for the
particle filter type, not for the particle types used to define the particle
filter. See `PR 1993 <https://github.com/yt-project/yt/pull/1993>`__.
- Added support for periodic visualizations using
``ParticleProjectionPlot``. See `PR 1996
<https://github.com/yt-project/yt/pull/1996>`__.
- Added ``YTArray.argsort()``. See `PR 2002
<https://github.com/yt-project/yt/pull/2002>`__.
- Calculate the header size from the header specification in the Gadget frontend
to allow reading from Gadget binary datasets with nonstandard headers. See `PR
2005 <https://github.com/yt-project/yt/pull/2005>`__ and `PR 2036
<https://github.com/yt-project/yt/pull/2036>`__.
- Save the standard deviation in ``profile.save_as_dataset()``. See `PR 2008
<https://github.com/yt-project/yt/pull/2008>`__.
- Allow the ``color`` keyword argument to be passed to matplotlib in the
``annotate_clumps`` callback to control the color of the clump annotation. See
`PR 2019 <https://github.com/yt-project/yt/pull/2019>`__.
- Raise an exception when profiling fields of unequal shape. See `PR 2025
<https://github.com/yt-project/yt/pull/2025>`__.
- The clump info dictionary is now populated as clumps get created instead of
during ``clump.save_as_dataset()``. See `PR 2053
<https://github.com/yt-project/yt/pull/2053>`__.
- Avoid segmentation fault in slice selector by clipping slice integer
coordinates. See `PR 2055 <https://github.com/yt-project/yt/pull/2055>`__.
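
As a brief sketch of the new unit-conversion helpers mentioned above (the
``equivalence`` keyword and ``to_value()``):

.. code-block:: python

   import yt

   # Convert a temperature expressed as an energy (keV) to Kelvin using the
   # "thermal" equivalence, then strip the units off with to_value().
   kT = yt.YTQuantity(2.0, "keV")
   T = kT.to("K", equivalence="thermal")
   print(T, T.to_value("K"))
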
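And a minimal sketch of supplying explicit field-parameter values to
``ValidateParameter``; the dataset and derived field shown here are purely
illustrative:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   def _velocity_along_axis(field, data):
       # Reads the 'axis' field parameter set by the caller.
       ax = data.get_field_parameter("axis")
       return data["gas", "velocity_%s" % ax]

   ds.add_field(
       ("gas", "velocity_along_axis"),
       function=_velocity_along_axis,
       sampling_type="cell",
       units="cm/s",
       # Tell field detection to try all three values of the 'axis' parameter.
       validators=[yt.ValidateParameter("axis", {"axis": ["x", "y", "z"]})],
   )

   sp = ds.sphere("c", (10, "kpc"))
   sp.set_field_parameter("axis", "z")
   vz = sp["gas", "velocity_along_axis"]
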
Minor Enhancements and Bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Fix incorrect use of floating point division in the parallel analysis framework.
See `PR 1538 <https://github.com/yt-project/yt/pull/1538>`__.
- Fix integration with the matplotlib Qt backend for interactive plotting.
See `PR 1540 <https://github.com/yt-project/yt/pull/1540>`__.
- Add support for the particle creation time field in the GAMER frontend.
See `PR 1546 <https://github.com/yt-project/yt/pull/1546>`__.
- Various minor improvements to the docs. See `PR 1542
  <https://github.com/yt-project/yt/pull/1542>`__ and `PR 1547
<https://github.com/yt-project/yt/pull/1547>`__.
- Add better error handling for invalid tipsy aux files. See `PR 1549
<https://github.com/yt-project/yt/pull/1549>`__.
- Fix typo in default Gadget header specification. See `PR 1550
<https://github.com/yt-project/yt/pull/1550>`__.
- Use the git version in the get_yt_version function. See `PR 1551
<https://github.com/yt-project/yt/pull/1551>`__.
- Assume dimensionless units for fields from FITS datasets when we can't infer
the units. See `PR 1553 <https://github.com/yt-project/yt/pull/1553>`__.
- Autodetect RAMSES extra particle fields. See `PR 1555
<https://github.com/yt-project/yt/pull/1555>`__.
- Fix issue with handling unitless halo quantities in HaloCatalog. See `PR 1558
<https://github.com/yt-project/yt/pull/1558>`__.
- Track the halo catalog creation process using a parallel-safe progress bar.
See `PR 1559 <https://github.com/yt-project/yt/pull/1559>`__.
- The PPV Cube functionality no longer crashes if there is no temperature field
in the dataset. See `PR 1562
<https://github.com/yt-project/yt/pull/1562>`__.
- Fix crash caused by saving the ``'x'``, ``'y'``, or ``'z'`` fields in
  ``clump.save_as_dataset()``. See `PR 1567
<https://github.com/yt-project/yt/pull/1567>`__.
- Accept both string and tuple field names in ``ProfilePlot.set_unit()`` and
``PhasePlot.set_unit()``. See `PR 1568
<https://github.com/yt-project/yt/pull/1568>`__.
- Fix issues with some arbitrary grid attributes not being reloaded properly
after being saved with ``save_as_dataset()``. See `PR 1569
<https://github.com/yt-project/yt/pull/1569>`__.
- Fix units issue in the light cone projection operation. See `PR 1574
<https://github.com/yt-project/yt/pull/1574>`__.
- Use ``astropy.wcsaxes`` instead of the independent ``wcsaxes`` project. See
`PR 1577 <https://github.com/yt-project/yt/pull/1577>`__.
- Correct typo in WarpX field definitions. See `PR 1583
<https://github.com/yt-project/yt/pull/1583>`__.
- Avoid crashing when loading an Enzo dataset with a parameter file that has
commented out parameters. See `PR 1586
<https://github.com/yt-project/yt/pull/1586>`__.
- Fix a corner case in the clump finding machinery where the reference to the
parent clump is invalid after pruning a child clump that has no siblings. See
`PR 1587 <https://github.com/yt-project/yt/pull/1587>`__.
- Fix issues with setting up yt fields for the magnetic and velocity field
components and associated derived fields in curvilinear coordinate
systems. See `PR 1588 <https://github.com/yt-project/yt/pull/1588>`__ and `PR
1687 <https://github.com/yt-project/yt/pull/1687>`__.
- Fix incorrect profile values when the profile weight field has values equal to
zero. See `PR 1590 <https://github.com/yt-project/yt/pull/1590>`__.
- Fix issues with making matplotlib animations of a
``ParticleProjectionPlot``. See `PR 1594
<https://github.com/yt-project/yt/pull/1594>`__.
- The ``Scene.annotate_axes()`` function will now use the correct colors for
drawing the axes annotation. See `PR 1596
<https://github.com/yt-project/yt/pull/1596>`__.
- Fix incorrect default plot bounds for a zoomed-in slice plot of a 2D
cylindrical dataset. See `PR 1597
<https://github.com/yt-project/yt/pull/1597>`__.
- Fix issue where field accesses on 2D grids would return data with incorrect
shapes. See `PR 1603 <https://github.com/yt-project/yt/pull/1603>`__.
- Added a cookbook example for a multipanel phase plot. See `PR 1605
<https://github.com/yt-project/yt/pull/1605>`__.
- Boolean simulation parameters in the Boxlib frontend will now be interpreted
correctly. See `PR 1619 <https://github.com/yt-project/yt/pull/1619>`__.
- The ``ds.particle_type_counts`` attribute will now be populated correctly for
AMReX data.
- The ``"rad"`` unit (added for compatibility with astropy) now has the correct
dimensions of angle instead of solid angle. See `PR 1628
<https://github.com/yt-project/yt/pull/1628>`__.
- Fix units issues in several plot callbacks. See `PR 1633
<https://github.com/yt-project/yt/pull/1633>`__ and `PR 1674
<https://github.com/yt-project/yt/pull/1674>`__.
- Various fixes for how WarpX fields are interpreted. See `PR 1634
<https://github.com/yt-project/yt/pull/1634>`__.
- Fix incorrect units in the automatically deposited particle fields. See `PR
1638 <https://github.com/yt-project/yt/pull/1638>`__.
- It is now possible to set the axes background color after calling
``plot.hide_axes()``. See `PR 1662
<https://github.com/yt-project/yt/pull/1662>`__.
- Fix a typo in the name of the ``colors`` keyword argument passed to matplotlib
for the contour callback. See `PR 1664
<https://github.com/yt-project/yt/pull/1664>`__.
- Add support for Enzo Active Particle fields that are arrays. See `PR 1665
<https://github.com/yt-project/yt/pull/1665>`__.
- Avoid crash when generating halo catalogs from the rockstar halo finder for
small simulation domains. See `PR 1679
<https://github.com/yt-project/yt/pull/1679>`__.
- The clump callback now functions correctly for a reloaded clump dataset. See
`PR 1683 <https://github.com/yt-project/yt/pull/1683>`__.
- Fix incorrect calculation for tangential components of vector fields. See `PR
1688 <https://github.com/yt-project/yt/pull/1688>`__.
- Allow halo finders to run in parallel on Python3. See `PR 1690
<https://github.com/yt-project/yt/pull/1690>`__.
- Fix issues with Gadget particle IDs for simulations with large numbers of
particles being incorrectly rounded. See `PR 1692
<https://github.com/yt-project/yt/pull/1692>`__.
- ``ParticlePlot`` no longer needs to be passed spatial fields in a particular
order to ensure that a ``ParticleProjectionPlot`` is returned. See `PR 1697
<https://github.com/yt-project/yt/pull/1697>`__.
- Accessing data from a FLASH grid directly now returns float64 data. See `PR
1708 <https://github.com/yt-project/yt/pull/1708>`__.
- Fix periodicity check in ``YTPoint`` data object. See `PR 1712
<https://github.com/yt-project/yt/pull/1712>`__.
- Avoid crash on matplotlib 2.2.0 when generating yt plots with symlog
colorbars. See `PR 1720 <https://github.com/yt-project/yt/pull/1720>`__.
- Avoid crash when FLASH ``"unitsystem"`` parameter is quoted in the HDF5
file. See `PR 1722 <https://github.com/yt-project/yt/pull/1722>`__.
- Avoid issues with creating custom particle filters for OWLS/EAGLE
datasets. See `PR 1723 <https://github.com/yt-project/yt/pull/1723>`__.
- Adapt to behavior change in matplotlib that caused plot inset boxes for
annotated text to be drawn when none was requested. See `PR 1731
<https://github.com/yt-project/yt/pull/1731>`__ and `PR 1827
<https://github.com/yt-project/yt/pull/1827>`__.
- Fix clump finder ignoring field parameters. See `PR 1732
<https://github.com/yt-project/yt/pull/1732>`__.
- Avoid generating NaNs in x-ray emission fields. See `PR 1742
<https://github.com/yt-project/yt/pull/1742>`__.
- Fix compatibility with Sphinx 1.7 when building the docs. See `PR 1743
<https://github.com/yt-project/yt/pull/1743>`__.
- Eliminate usage of deprecated ``"clobber"`` keyword argument for various
usages of astropy in yt. See `PR 1744
<https://github.com/yt-project/yt/pull/1744>`__.
- Fix incorrect definition of the ``"d"`` unit (an alias of ``"day"``). See `PR
1746 <https://github.com/yt-project/yt/pull/1746>`__.
- ``PhasePlot.set_log()`` now correctly handles tuple field names as well as
string field names. See `PR 1787
<https://github.com/yt-project/yt/pull/1787>`__.
- Fix incorrect axis order in aitoff pixelizer. See `PR 1791
<https://github.com/yt-project/yt/pull/1791>`__.
- Fix crash when exporting a surface as a PLY model. See `PR 1792
<https://github.com/yt-project/yt/pull/1792>`__ and `PR 1817
<https://github.com/yt-project/yt/pull/1817>`__.
- Fix crash in ``Scene.save_annotated()`` in newer numpy versions. See `PR 1793
<https://github.com/yt-project/yt/pull/1793>`__.
- Many tests no longer depend on real datasets. See `PR 1801
<https://github.com/yt-project/yt/pull/1801>`__, `PR 1805
<https://github.com/yt-project/yt/pull/1805>`__, `PR 1809
<https://github.com/yt-project/yt/pull/1809>`__, `PR 1883
<https://github.com/yt-project/yt/pull/1883>`__, and `PR 1941
<https://github.com/yt-project/yt/pull/1941>`__
- New tests were added to improve test coverage or the performance of the
tests. See `PR 1820 <https://github.com/yt-project/yt/pull/1820>`__, `PR 1831
<https://github.com/yt-project/yt/pull/1831>`__, `PR 1833
<https://github.com/yt-project/yt/pull/1833>`__, `PR 1841
<https://github.com/yt-project/yt/pull/1841>`__, `PR 1842
<https://github.com/yt-project/yt/pull/1842>`__, `PR 1885
<https://github.com/yt-project/yt/pull/1885>`__, `PR 1886
<https://github.com/yt-project/yt/pull/1886>`__, `PR 1952
<https://github.com/yt-project/yt/pull/1952>`__, `PR 1953
<https://github.com/yt-project/yt/pull/1953>`__, `PR 1955
<https://github.com/yt-project/yt/pull/1955>`__, and `PR 1957
<https://github.com/yt-project/yt/pull/1957>`__.
- The particle trajectories machinery will raise an error if it is asked to
analyze a set of particles with duplicated particle IDs. See `PR 1818
<https://github.com/yt-project/yt/pull/1818>`__.
- Fix incorrect velocity unit in the ``gadget_fof`` frontend. See `PR 1829
<https://github.com/yt-project/yt/pull/1829>`__.
- Making an off-axis projection of a cut_region data object with an octree AMR
dataset now works correctly. See `PR 1858
<https://github.com/yt-project/yt/pull/1858>`__.
- Replace hard-coded constants in Enzo frontend with calculations to improve
agreement with Enzo's internal constants and improve clarity. See `PR 1873
<https://github.com/yt-project/yt/pull/1873>`__.
- Correct issues with Enzo magnetic units in cosmology simulations. See `PR 1876
<https://github.com/yt-project/yt/pull/1876>`__.
- Use the species names from the dataset rather than hardcoding species names in
the WarpX frontend. See `PR 1884
<https://github.com/yt-project/yt/pull/1884>`__.
- Fix issue with masked I/O for unstructured mesh data. See `PR 1918
<https://github.com/yt-project/yt/pull/1918>`__.
- Fix crash when reading DM-only Enzo datasets where some grids have no particles. See `PR 1919 <https://github.com/yt-project/yt/pull/1919>`__.
- Fix crash when loading pure-hydro Nyx dataset. See `PR 1950
<https://github.com/yt-project/yt/pull/1950>`__.
- Avoid crashes when plotting fields that contain NaN. See `PR 1951
<https://github.com/yt-project/yt/pull/1951>`__.
- Avoid crashes when loading NMSU ART data. See `PR 1960
<https://github.com/yt-project/yt/pull/1960>`__.
- Avoid crash when loading WarpX dataset with no particles. See `PR 1979
<https://github.com/yt-project/yt/pull/1979>`__.
- Adapt to API change in glue to fix the ``to_glue()`` method on yt data
objects. See `PR 1991 <https://github.com/yt-project/yt/pull/1991>`__.
- Fix incorrect width calculation in the ``annotate_halos()`` plot callback. See
`PR 1995 <https://github.com/yt-project/yt/pull/1995>`__.
- Don't try to read from files containing zero halos in the ``gadget_fof``
frontend. See `PR 2001 <https://github.com/yt-project/yt/pull/2001>`__.
- Fix incorrect calculation in ``get_ortho_base()``. See `PR 2013
<https://github.com/yt-project/yt/pull/2013>`__.
- Avoid issues with the axes background color being inconsistently set. See `PR
2018 <https://github.com/yt-project/yt/pull/2018>`__.
- Fix issue with reading multiple fields at once for octree AMR data sometimes
returning data for another field for one of the requested fields. See `PR 2020
<https://github.com/yt-project/yt/pull/2020>`__.
- Fix incorrect domain annotation for ``Scene.annotate_domain()`` when using the
plane-parallel camera. See `PR 2024
<https://github.com/yt-project/yt/pull/2024>`__.
- Avoid crash when particles are on the domain edges for ``gadget_fof``
data. See `PR 2034 <https://github.com/yt-project/yt/pull/2034>`__.
- Avoid stripping code units when processing units through a dataset's unit
system. See `PR 2035 <https://github.com/yt-project/yt/pull/2035>`__.
- Avoid incorrectly rescaling units of metallicity fields. See `PR 2038
<https://github.com/yt-project/yt/pull/2038>`__.
- Fix incorrect units for FLASH ``"divb"`` field. See `PR 2062
<https://github.com/yt-project/yt/pull/2062>`__.
Version 3.4
-----------
Version 3.4 is the first major release of yt since July 2016. It includes 450
pull requests from 44 contributors, including 18 new contributors.
- yt now supports displaying plots using the interactive matplotlib
backends. To enable this functionality call
``yt.toggle_interactivity()``. This is currently supported at an
experimental level, please let us know if you come across issues
using it. See `Bitbucket PR
2294 <https://bitbucket.org/yt_analysis/yt/pull-requests/2294>`__.
- The yt configuration file should now be located in a location
following the XDG\_CONFIG convention (usually ``~/.config/yt/ytrc``)
rather than the old default location (usually ``~/.yt/config``). You
can use ``yt config migrate`` at the bash command line to migrate
your configuration file to the new location. See `Bitbucket PR
2343 <https://bitbucket.org/yt_analysis/yt/pull-requests/2343>`__.
- Added ``yt.LinePlot``, a new plotting class for creating 1D plots
along lines through a dataset. See `Github PR
1509 <https://github.com/yt-project/yt/pull/1509>`__ and `Github PR
1440 <https://github.com/yt-project/yt/pull/1440>`__.
- Added ``yt.define_unit`` to easily define new units in yt's unit
system. See `Bitbucket PR
2485 <https://bitbucket.org/yt_analysis/yt/pull-requests/2485>`__.
- Added ``yt.plot_2d``, a wrapper around SlicePlot for plotting 2D
datasets. See `Github PR
1476 <https://github.com/yt-project/yt/pull/1476>`__.
- We have restored support for boolean data objects. Boolean objects
are data objects that are defined in terms of boolean operations on
other data objects. See `Bitbucket PR
2257 <https://bitbucket.org/yt_analysis/yt/pull-requests/2257>`__.
- Datasets now have a ``fields`` attribute that allows access to fields
via a python object. For example, instead of using a tuple field name
like ``('gas', 'density')``, one can now use
``ds.fields.gas.density``. See `Bitbucket PR
2459 <https://bitbucket.org/yt_analysis/yt/pull-requests/2459>`__.
- It is now possible to create a wider variety of data objects via
``ds.r``, including rays, fixed resolution rays, points, and images.
See `Github PR 1518 <https://github.com/yt-project/yt/pull/1518>`__
and `Github PR 1393 <https://github.com/yt-project/yt/pull/1393>`__.
- ``add_field`` and ``ds.add_field`` must now be called with a
  ``sampling_type`` keyword argument. Possible values are currently
  ``cell`` and ``particle``. We have also deprecated the
  ``particle_type`` keyword argument in favor of
  ``sampling_type='particle'``. For now a ``'cell'`` ``sampling_type`` is
  assumed if ``sampling_type`` is not specified, but in the future
  ``sampling_type`` will always need to be specified (a minimal example
  follows this list).
- Added support for the ``Athena++`` code. See `Bitbucket PR
2149 <https://bitbucket.org/yt_analysis/yt/pull-requests/2149>`__.
- Added support for the ``Enzo-E`` code. See `Github PR
1447 <https://github.com/yt-project/yt/pull/1447>`__, `Github PR
1443 <https://github.com/yt-project/yt/pull/1443>`__ and `Github PR
1439 <https://github.com/yt-project/yt/pull/1439>`__.
- Added support for the ``AMReX`` code. See `Bitbucket PR
2530 <https://bitbucket.org/yt_analysis/yt/pull-requests/2530>`__.
- Added support for the ``openPMD`` output format. See `Bitbucket PR
2376 <https://bitbucket.org/yt_analysis/yt/pull-requests/2376>`__.
- Added support for reading face-centered and vertex-centered fields
for block AMR codes. See `Bitbucket PR
2575 <https://bitbucket.org/yt_analysis/yt/pull-requests/2575>`__.
- Added support for loading outputs from the Amiga Halo Finder. See
`Github PR 1477 <https://github.com/yt-project/yt/pull/1477>`__.
- Added support for particle fields for Boxlib data. See `Bitbucket PR
2510 <https://bitbucket.org/yt_analysis/yt/pull-requests/2510>`__ and
`Bitbucket PR
2497 <https://bitbucket.org/yt_analysis/yt/pull-requests/2497>`__.
- Added support for custom RAMSES particle fields. See `Github PR
1470 <https://github.com/yt-project/yt/pull/1470>`__.
- Added support for RAMSES-RT data. See `Github PR
1456 <https://github.com/yt-project/yt/pull/1456>`__ and `Github PR
1449 <https://github.com/yt-project/yt/pull/1449>`__.
- Added support for Enzo MHDCT fields. See `Github PR
1438 <https://github.com/yt-project/yt/pull/1438>`__.
- Added support for units and particle fields to the GAMER frontend.
See `Bitbucket PR
2366 <https://bitbucket.org/yt_analysis/yt/pull-requests/2366>`__ and
`Bitbucket PR
2408 <https://bitbucket.org/yt_analysis/yt/pull-requests/2408>`__.
- Added support for type 2 Gadget binary outputs. See `Bitbucket PR
2355 <https://bitbucket.org/yt_analysis/yt/pull-requests/2355>`__.
- Added the ability to detect and read double precision Gadget data.
See `Bitbucket PR
2537 <https://bitbucket.org/yt_analysis/yt/pull-requests/2537>`__.
- Added the ability to detect and read in big endian Gadget data. See
`Github PR 1353 <https://github.com/yt-project/yt/pull/1353>`__.
- Added support for Nyx datasets that do not contain particles. See
`Bitbucket PR
2571 <https://bitbucket.org/yt_analysis/yt/pull-requests/2571>`__
- A number of untested and unmaintained modules have been deprecated
and moved to the `yt attic
repository <https://github.com/yt-project/yt_attic>`__. This includes
the functionality for calculating two point functions, the Sunrise
exporter, the star analysis module, and the functionality for
calculating halo mass functions. If you are interested in working on
restoring the functionality in these modules, we welcome
contributions. Please contact us on the mailing list or by opening an
issue on GitHub if you have questions.
- The particle trajectories functionality has been removed from the
analysis modules API and added as a method of the ``DatasetSeries``
object. You can now create a ``ParticleTrajectories`` object using
``ts.particle_trajectories()`` where ``ts`` is a time series of
datasets.
- The ``spectral_integrator`` analysis module is now available via
``yt.fields.xray_emission_fields``. See `Bitbucket PR
2465 <https://bitbucket.org/yt_analysis/yt/pull-requests/2465>`__.
- The ``photon_simulator`` analysis module has been deprecated in favor
of the ``pyXSIM`` package, available separately from ``yt``. See
`Bitbucket PR
2441 <https://bitbucket.org/yt_analysis/yt/pull-requests/2441>`__.
- ``yt.utilities.fits_image`` is now available as
``yt.visualization.fits_image``. In addition classes that were in the
``yt.utilities.fits_image`` namespace are now available in the main
``yt`` namespace.
- The ``profile.variance`` attribute has been deprecated in favor of
``profile.standard_deviation``.
- The ``number_of_particles`` key no longer needs to be defined when
loading data via the stream frontend. See `Github PR
1428 <https://github.com/yt-project/yt/pull/1428>`__.
- The install script now only supports installing via miniconda. We
have removed support for compiling python and yt's dependencies from
source. See `Github PR
1459 <https://github.com/yt-project/yt/pull/1459>`__.
- Added ``plot.set_background_color`` for ``PlotWindow`` and
``PhasePlot`` plots. This lets users specify a color to fill in the
background of a plot instead of the default color, white. See
`Bitbucket PR
2513 <https://bitbucket.org/yt_analysis/yt/pull-requests/2513>`__.
- ``PlotWindow`` plots can now optionally use a right-handed coordinate
system. See `Bitbucket PR
2318 <https://bitbucket.org/yt_analysis/yt/pull-requests/2318>`__.
- The isocontour API has been overhauled to make use of units. See
`Bitbucket PR
2453 <https://bitbucket.org/yt_analysis/yt/pull-requests/2453>`__.
- ``Dataset`` instances now have a ``checksum`` property, which can be
accessed via ``ds.checksum``. This provides a unique identifier that
is guaranteed to be the same from session to session. See `Bitbucket
PR 2503 <https://bitbucket.org/yt_analysis/yt/pull-requests/2503>`__.
- Added a ``data_source`` keyword argument to
``OffAxisProjectionPlot``. See `Bitbucket PR
2490 <https://bitbucket.org/yt_analysis/yt/pull-requests/2490>`__.
- Added a ``yt download`` command-line helper to download test data
from https://yt-project.org/data. For more information see
``yt download --help`` at the bash command line. See `Bitbucket PR
2495 <https://bitbucket.org/yt_analysis/yt/pull-requests/2495>`__ and
`Bitbucket PR
2471 <https://bitbucket.org/yt_analysis/yt/pull-requests/2471>`__.
- Added a ``yt upload`` command-line helper to upload files to the `yt
curldrop <https://docs.hub.yt/services.html#curldrop>`__ at the bash
command line. See `Github PR
1471 <https://github.com/yt-project/yt/pull/1471>`__.
- If it's installed, colormaps from the `cmocean
package <https://matplotlib.org/cmocean/>`__ will be made available as
yt colormaps. See `Bitbucket PR
2439 <https://bitbucket.org/yt_analysis/yt/pull-requests/2439>`__.
- It is now possible to visualize unstructured mesh fields defined on
multiple mesh blocks. See `Bitbucket PR
2487 <https://bitbucket.org/yt_analysis/yt/pull-requests/2487>`__.
- Add support for second-order interpolation when slicing tetrahedral
unstructured meshes. See `Bitbucket PR
2550 <https://bitbucket.org/yt_analysis/yt/pull-requests/2550>`__.
- Add support for volume rendering second-order tetrahedral meshes. See
`Bitbucket PR
2401 <https://bitbucket.org/yt_analysis/yt/pull-requests/2401>`__.
- Add support for QUAD9 mesh elements. See `Bitbucket PR
2549 <https://bitbucket.org/yt_analysis/yt/pull-requests/2549>`__.
- Add support for second-order triangle mesh elements. See `Bitbucket
PR 2378 <https://bitbucket.org/yt_analysis/yt/pull-requests/2378>`__.
- Added support for dynamical dark energy parameterizations to the
``Cosmology`` object. See `Bitbucket PR
2572 <https://bitbucket.org/yt_analysis/yt/pull-requests/2572>`__.
- ``ParticleProfile`` can now handle log-scaled bins and data with
negative values. See `Bitbucket PR
2564 <https://bitbucket.org/yt_analysis/yt/pull-requests/2564>`__ and
`Github PR 1510 <https://github.com/yt-project/yt/pull/1510>`__.
- Cut region data objects can now be saved as reloadable datasets using
``save_as_dataset``. See `Bitbucket PR
2541 <https://bitbucket.org/yt_analysis/yt/pull-requests/2541>`__.
- Clump objects can now be saved as reloadable datasets using
``save_as_dataset``. See `Bitbucket PR
2326 <https://bitbucket.org/yt_analysis/yt/pull-requests/2326>`__.
- It is now possible to specify the field to use for the size of the
circles in the ``annotate_halos`` plot modifying function. See
`Bitbucket PR
2493 <https://bitbucket.org/yt_analysis/yt/pull-requests/2493>`__.
- The ``ds.max_level`` attribute is now a property that is computed on
demand. The more verbose ``ds.index.max_level`` will continue to
work. See `Bitbucket PR
2461 <https://bitbucket.org/yt_analysis/yt/pull-requests/2461>`__.
- The ``PointSource`` volume rendering source now optionally accepts a
``radius`` keyword argument to draw spatially extended points. See
`Bitbucket PR
2404 <https://bitbucket.org/yt_analysis/yt/pull-requests/2404>`__.
- It is now possible to save volume rendering images in eps, ps, and
pdf format. See `Github PR
1504 <https://github.com/yt-project/yt/pull/1504>`__.
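
A short sketch touching a few of the new interfaces above (attribute-style
field access, object creation via ``ds.r``, and the ``sampling_type``
requirement for ``add_field``); the dataset and the derived field shown are
illustrative:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

   # Fields can now be addressed as attributes instead of tuples.
   density = ds.fields.gas.density

   # ds.r can build data objects directly, e.g. a ray between two points.
   ray = ds.r[(0.1, 0.1, 0.1):(0.9, 0.9, 0.9)]

   # add_field must now be given a sampling_type ("cell" or "particle").
   def _doubled_density(field, data):
       return 2 * data["gas", "density"]

   ds.add_field(("gas", "doubled_density"), function=_doubled_density,
                sampling_type="cell", units="g/cm**3")
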
Minor Enhancements and Bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Fixed issue selecting and visualizing data at very high AMR levels.
See `Github PR 1521 <https://github.com/yt-project/yt/pulls/1521>`__
and `Github PR 1433 <https://github.com/yt-project/yt/pull/1433>`__.
- Print a more descriptive error message when defining a particle
  filter fails with missing fields. See `Github PR
  1517 <https://github.com/yt-project/yt/pull/1517>`__.
- Removed grid edge rounding from the FLASH frontend. This fixes a
number of pernicious visualization artifacts for FLASH data. See
`Github PR 1493 <https://github.com/yt-project/yt/pull/1493>`__.
- Parallel projections no longer error if there are fewer I/O chunks than
  MPI tasks. See `Github PR
  1488 <https://github.com/yt-project/yt/pull/1488>`__.
- A memory leak in the volume renderer has been fixed. See `Github PR
1485 <https://github.com/yt-project/yt/pull/1485>`__ and `Github PR
1435 <https://github.com/yt-project/yt/pull/1435>`__.
- The ``force_override`` keyword argument now raises an error when used
with on-disk fields. See `Github PR
1516 <https://github.com/yt-project/yt/pull/1516>`__.
- Restore support for making plots from reloaded plots. See `Github PR
1514 <https://github.com/yt-project/yt/pull/1514>`__
- Don't ever try to read inputs or probin files for Castro and Maestro.
See `Github PR 1445 <https://github.com/yt-project/yt/pull/1445>`__.
- Fixed issue that caused visualization artifacts when creating an
off-axis projection for particle or octree AMR data. See `Github PR
1434 <https://github.com/yt-project/yt/pull/1434>`__.
- Fix i/o for the Enzo ``'Dark_Matter_Density'`` field. See `Github PR
1360 <https://github.com/yt-project/yt/pull/1360>`__.
- Create the ``'particle_ones'`` field even if we don't have a particle
mass field. See `Github PR
1424 <https://github.com/yt-project/yt/pull/1424>`__.
- Fixed issues with minor colorbar ticks with symlog colorbar scaling.
See `Github PR 1423 <https://github.com/yt-project/yt/pull/1423>`__.
- Using the rockstar halo finder is now supported under Python3. See
`Github PR 1414 <https://github.com/yt-project/yt/pull/1414>`__.
- Fixed issues with orientations of volume renderings when compositing
multiple sources. See `Github PR
1411 <https://github.com/yt-project/yt/pull/1411>`__.
- Added a check for valid AMR structure in ``load_amr_grids``. See
`Github PR 1408 <https://github.com/yt-project/yt/pull/1408>`__.
- Fix bug in handling of periodic boundary conditions in the
``annotate_halos`` plot modifying function. See `Github PR
1351 <https://github.com/yt-project/yt/pull/1351>`__.
- Add support for plots with non-unit aspect ratios to the
``annotate_scale`` plot modifying function. See `Bitbucket PR
2551 <https://bitbucket.org/yt_analysis/yt/pull-requests/2551>`__.
- Fixed issue with saving light ray datasets. See `Bitbucket PR
2589 <https://bitbucket.org/yt_analysis/yt/pull-requests/2589>`__.
- Added support for 2D WarpX data. See `Bitbucket PR
2583 <https://bitbucket.org/yt_analysis/yt/pull-requests/2583>`__.
- Ensure the ``particle_radius`` field is always accessed with the
correct field type. See `Bitbucket PR
2562 <https://bitbucket.org/yt_analysis/yt/pull-requests/2562>`__.
- It is now possible to use a covering grid to access particle filter
fields. See `Bitbucket PR
2569 <https://bitbucket.org/yt_analysis/yt/pull-requests/2569>`__.
- The x limits of a ``ProfilePlot`` will now snap exactly to the limits
specified in calls to ``ProfilePlot.set_xlim``. See `Bitbucket PR
2546 <https://bitbucket.org/yt_analysis/yt/pull-requests/2546>`__.
- Added a cookbook example showing how to make movies using
matplotlib's animation framework. See `Bitbucket PR
2544 <https://bitbucket.org/yt_analysis/yt/pull-requests/2544>`__.
- Use a parallel-safe wrapper around mkdir when creating new
directories. See `Bitbucket PR
2570 <https://bitbucket.org/yt_analysis/yt/pull-requests/2570>`__.
- Removed ``yt.utilities.spatial``. This was a forked version of
``scipy.spatial`` with support for a periodic KD-tree. Scipy now has
a periodic KD-tree, so we have removed the forked version from yt.
Please use ``scipy.spatial`` if you were relying on
``yt.utilities.spatial``. See `Bitbucket PR
2576 <https://bitbucket.org/yt_analysis/yt/pull-requests/2576>`__.
- Improvements for the ``HaloCatalog``. See `Bitbucket PR
2536 <https://bitbucket.org/yt_analysis/yt/pull-requests/2536>`__ and
`Bitbucket PR
2535 <https://bitbucket.org/yt_analysis/yt/pull-requests/2535>`__.
- Removed ``'log'`` in colorbar label in annotated volume rendering.
See `Bitbucket PR
2548 <https://bitbucket.org/yt_analysis/yt/pull-requests/2548>`__
- Fixed a crash triggered by depositing particle data onto a covering
grid. See `Bitbucket PR
2545 <https://bitbucket.org/yt_analysis/yt/pull-requests/2545>`__.
- Ensure field type guessing is deterministic on Python3. See
`Bitbucket PR
2559 <https://bitbucket.org/yt_analysis/yt/pull-requests/2559>`__.
- Removed unused yt.utilities.exodusII\_reader module. See `Bitbucket
PR 2533 <https://bitbucket.org/yt_analysis/yt/pull-requests/2533>`__.
- The ``cell_volume`` field in curvilinear coordinates now uses an
exact rather than an approximate definition. See `Bitbucket PR
2466 <https://bitbucket.org/yt_analysis/yt/pull-requests/2466>`__.
Version 3.3
-----------
Version 3.3 is the first major release of yt since July 2015. It includes more
than 3000 commits from 41 contributors, including 12 new contributors.
Major enhancements
^^^^^^^^^^^^^^^^^^
* Raw and processed data from selections, projections, profiles, and so forth
  can now be saved in a ytdata format and loaded back in by yt (a short sketch
  follows this list). See :ref:`saving_data`.
* Totally re-worked volume rendering API. The old API is still available for users
who prefer it, however. See :ref:`volume_rendering`.
* Support for unstructured mesh visualization. See
:ref:`unstructured-mesh-slices` and :ref:`unstructured_mesh_rendering`.
* Interactive Data Visualization for AMR and unstructured mesh datasets. See
:ref:`interactive_data_visualization`.
* Several new colormaps, including a new default, 'arbre'. The other new
colormaps are named 'octarine', 'kelp', and 'dusk'. All these new colormaps
were generated using the `viscm package
<https://github.com/matplotlib/viscm>`_ and should do a better job of
representing the data for colorblind viewers and when printed out in
grayscale. See :ref:`colormaps` for more detail.
* New frontends for the :ref:`ExodusII <loading-exodusii-data>`,
:ref:`GAMER <loading-gamer-data>`, and :ref:`Gizmo <loading-gizmo-data>` data
formats.
* The unit system associated with a dataset is now customizable, defaulting to
CGS.
* Enhancements and usability improvements for analysis modules, especially the
``absorption_spectrum``, ``photon_simulator``, and ``light_ray`` modules. See
:ref:`synthetic-observations`.
* Data objects can now be created via an alternative Numpy-like API. See
:ref:`quickly-selecting-data`.
* A line integral convolution plot modification. See
:ref:`annotate-line-integral-convolution`.
* Many speed optimizations, including to the volume rendering, units, tests,
covering grids, the absorption spectrum and photon simulator analysis modules,
and ghost zone generation.
* Packaging and release-related improvements: better install and setup scripts,
automated PR backporting.
* Readability improvements to the codebase, including linting, removing dead
code, and refactoring much of the Cython.
* Improvements to the CI infrastructure, including more extensible answer tests
and automated testing for Python 3 and Windows.
* Numerous documentation improvements, including formatting tweaks, bugfixes,
and many new cookbook recipes.
* Support for geographic (lat/lon) coordinates.
* Several improvements for SPH codes, including alternative smoothing kernels,
an ``add_smoothed_particle_field`` function, and particle type-aware octree
construction for Gadget data.
* Roundtrip conversions between Pint and yt units.
* Added halo data containers for gadget_fof frontend.
* Enabled support for spherical datasets in the BoxLib frontend.
* Many new tests have been added.
* Better hashing for Selector objects.
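
A short sketch of saving a data selection as a reloadable ytdata file; the
dataset name is illustrative, and :ref:`saving_data` documents the full
interface:

.. code-block:: python

   import yt

   ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
   sp = ds.sphere("c", (10, "kpc"))

   # Write the sphere's density field to a ytdata file and load it back.
   fn = sp.save_as_dataset(fields=[("gas", "density")])
   sphere_ds = yt.load(fn)
   print(sphere_ds.all_data()["gas", "density"])
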
Minor enhancements and bugfixes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Fixed many bugs related to Python 3 compatibility
* Fixed bugs related to compatibility issues with newer versions of numpy
* Added the ability to export data objects to a Pandas dataframe
* Added support for the fabs ufunc to YTArray
* Fixed two licensing issues
* Fixed a number of bugs related to Windows compatibility.
* We now avoid hard-to-decipher tracebacks when loading empty files or
directories
* Fixed a bug related to ART star particle creation time field
* Fixed a bug caused by using the wrong int type for indexing in particle deposit
* Fixed a NameError bug in comparing temperature units with offsets
* Fixed an API bug in YTArray casting during coercion from YTQuantity
* Added loadtxt and savetxt convenience functions for ``YTArray``
* Fixed an issue caused by not sorting species names with Enzo
* Fixed a units bug for RAMSES when ``boxlen > 1``.
* Fixed ``process_chunk`` function for non-cartesian geometry.
* Added ``scale_factor`` attribute to cosmological simulation datasets
* Fixed a bug where "center" vectors are used instead of "normal" vectors in
get_sph_phi(), etc.
* Fixed issues involving invalid FRBs when users called _setup_plots in their
  scripts
* Added a ``text_args`` keyword to ``annotate_scale()`` callback
* Added a print_stats function for RAMSES
* Fixed a number of bugs in the Photon Simulator
* Added support for particle fields to the [Min,Max]Location derived quantities
* Fixed some units bugs for Gadget cosmology simulations
* Fixed a bug with Gadget/GIZMO StarFormationRate units
* Fixed an issue in TimeSeriesData where all the filenames were getting passed
to ``load`` on each processor.
* Fixed a units bug in the Tipsy frontend
* Ensured that ARTIOIndex.get_smallest_dx() returns a quantity with units
* Ensured that plots are valid after invalidating the figure
* Fixed a bug regarding code unit labels
* Fixed a bug with reading Tipsy Aux files
* Added an effective redshift field to the Light Ray analysis module for use in
AbsorptionSpectrum
* Fixed a bug with the redshift calculation in LightRay analysis module
* Fixed a bug in the Orion frontend when you had more than 10 on-disk particle
fields in the file
* Detect more types of ART files
* Update derived_field_list in add_volume_weighted_smoothed_field
* Fixed casting issues for 1D and 2D Enzo simulations
* Avoid type indirection when setting up data object entry points
* Fixed issues with SIMPUT files
* Fixed loading athena data in python3 with provided parameters
* Tipsy cosmology unit fixes
* Fixed bad unit labels for compound units
* Making the xlim and ylim of the PhasePlot plot axes controllable
* Adding grid_arrays to grid_container
* An Athena and a GDF bugfix
* A small bugfix and some small enhancements for sunyaev_zeldovich
* Defer to coordinate handlers for width
* Make array_like_field return same units as get_data
* Fixing bug in ray "dts" and "t" fields
* Check against string_types not str
* Closed a loophole that allowed improper LightRay use
* Enabling AbsorptionSpectrum to deposit unresolved spectral lines
* Fixed an ART byte/string/array issue
* Changing AbsorptionSpectrum attribute lambda_bins to be lambda_field for
consistency
* No longer require user to save to disk when generating an AbsorptionSpectrum
* ParticlePlot FRBs can now use save_as_dataset and save attributes properly
* Added checks to assure ARTIO creates a metal_density field from existing metal
fields.
* Added mask to LightRay to assure output elements have non-zero density (a
problem in some SPH datasets)
* Added a "fields" attribute to datasets
* Updated the TransferFunctionHelper to work with new profiles
* Fixed a bug where the field_units kwarg to load_amr_grids didn't do anything
* Changed photon_simulator's output file structure
* Fixed a bug related to setting output_units.
* Implemented ptp operation.
* Added effects of transverse doppler redshift to LightRay
* Fixed a casting error for float and int64 multiplication in sdf class
* Added ability to read and write YTArrays to and from groups within HDF5 files
* Made ftype of "on-disk" stream fields "stream"
* Fixed a strings decoding issue in the photon simulator
* Fixed an incorrect docstring in load_uniform_grid
* Made PlotWindow show/hide helpers for axes and colorbar return self
* Made Profile objects store field metadata.
* Ensured GDF unit names are strings
* Taught off_axis_projection about its resolution keyword.
* Reintroduced sanitize_width for polar/cyl coordinates.
* We now fail early when load_uniform_grid is passed data with an incorrect shape
* Replaced progress bar with tqdm
* Fixed redshift scaling of "Overdensity" field in yt-2.x
* Fixed several bugs in the eps_writer
* Fixed bug affecting 2D BoxLib simulations.
* Implemented to_json and from_json for the UnitRegistry object
* Fixed a number of issues with ds.find_field_values_at_point[s]
* Fixed a bug where sunrise_exporter was using wrong imports
* Import HUGE from utilities.physical_ratios
* Fixed bug in ARTIO table look ups
* Adding support for longitude and latitude
* Adding halo data containers for gadget_fof frontend.
* Can now compare YTArrays without copying them
* Fixed several bugs related to active particle datasets
* Angular_momentum_vector now only includes space for particle fields if they
exist.
* Image comparison tests now print a meaningful error message if they fail.
* Fixed numpy 1.11 compatibility issues.
* Changed _skip_cache to be True by default.
* Enable support for spherical datasets in the BoxLib frontend.
* Fixed a bug in add_deposited_particle_field.
* Fixed issues with input sanitization in the point data object.
* Fixed a copy/paste error introduced by refactoring WeightedMeanParticleField
* Fixed many formatting issues in the docs build
* Now avoid creating particle unions for particle types that have no common
fields
* Patched ParticlePlot to work with filtered particle fields.
* Fixed a couple corner cases in gadget_fof frontend
* We now properly normalise all normal vectors in functions that take a normal
  vector (e.g. get_sph_theta)
* Fixed a bug where the transfer function features were not always getting
cleared properly.
* Made the Chombo frontend is_valid method smarter.
* Added a get_hash() function to yt/funcs.py which returns a hash for a file
* Added Sievert to the default unit symbol table
* Corrected an issue with periodic "wiggle" in AbsorptionSpectrum instances
* Made ``ds.field_list`` sorted by default
* Bug fixes for the Nyx frontend
* Fixed a bug where the index needed to be created before calling derived
quantities
* Made latex_repr a property, computed on-demand
* Fixed a bug in off-axis slice deposition
* Fixed a bug with some types of octree block traversal
* Ensured that mpi operations retain ImageArray type instead of downgrading to
YTArray parent class
* Added a call to _setup_plots in the custom colorbar tickmark example
* Fixed two minor bugs in save_annotated
* Added ability to specify that DatasetSeries is not a mixed data type
* Fixed a memory leak in ARTIO
* Fixed copy/paste error in to_frb method.
* Ensured that particle dataset max_level is consistent with the index max_level
* Fixed an issue where fields were getting added multiple times to
field_info.field_list
* Enhanced annotate_ray and annotate_arrow callbacks
* Added GDF answer tests
* Made the YTFieldTypeNotFound exception more informative
* Added a new function, fake_vr_orientation_test_ds(), for use in testing
* Ensured that instances of subclasses of YTArray have the correct type
* Re-enabled max_level for projections, ProjectionPlot, and OffAxisProjectionPlot
* Fixed a bug in the Orion 2 field definitions
* Fixed a bug caused by matplotlib not being added to install_requires
* Edited PhasePlot class to have an annotate_title method
* Implemented annotate_cell_edges
* Handled KeyboardInterrupt in volume rendering Cython loop
* Made old halo finders accept ptype
* Updated the latex commands in yt cheatsheet
* Fixed a circular dependency loop bug in abar field definition for FLASH
datasets
* Added neutral species aliases as described in YTEP 0003
* Fixed a logging issue: don't create a StreamHandler unless we will use it
* Correcting how theta and phi are calculated in
``_particle_velocity_spherical_radius``,
``_particle_velocity_spherical_theta``,
``_particle_velocity_cylindrical_radius``, and
``_particle_velocity_cylindrical_theta``
* Fixed a bug related to the field dictionary in ``load_particles``
* Allowed for the special case of supplying width as a tuple of tuples
* Made yt compile with MSVC on Windows
* Fixed a bug involving mask for dt in octree
* Merged the get_yt.sh and install_script.sh into one
* Added tests for the install script
* Allowed use of axis names instead of dimensions for spherical pixelization
* Fixed a bug where close() wasn't being called in HDF5FileHandler
* Enhanced commandline image upload/delete
* Added get_brewer_cmap to get brewer colormaps without importing palettable at
the top level
* Fixed a bug where a parallel_root_only function was getting called inside
another parallel_root_only function
* Exit the install script early if python can't import '_ssl' module
* Make PlotWindow's annotate_clear method invalidate the plot
* Adding int wrapper to avoid deprecation warning from numpy
* Automatically create vector fields for magnetic_field
* Allow users to completely specify the filename of a 1D profile
* Force nose to produce meaningful traceback for cookbook recipes' tests
* Fixed x-ray display_name and documentation
* Try to guess and load particle file for FLASH dataset
* Sped up top-level yt import
* Set the field type correctly for fields added as particle fields
* Added a position location method for octrees
* Fixed a copy/paste error in uhstack function
* Made trig functions give correct results when supplied data with dimensions of
angle but units that aren't radian
* Print out some useful diagnostic information if check_for_openmp() fails
* Give user-added derived fields a default field type
* Added support for periodicity in annotate_particles.
* Added a check for whether returned field has units in volume-weighted smoothed
fields
* Casting array indices as ints in colormaps infrastructure
* Fixed a bug where the standard particle fields weren't getting set up
correctly for the Orion frontends
* Enabled LightRay to accept loaded datasets instead of just filenames
* Allowed for adding or subtracting arrays filled with zeros without checking
units.
* Fixed a bug in selection for semistructured meshes.
* Removed 'io' from enzo particle types for active particle datasets
* Added support for FLASH particle datasets.
* Silenced a deprecation warning from IPython
* Eliminated segfaults in KDTree construction
* Fixed add_field handling when passed a tuple
* Ensure field parameters are correct for fields that need ghost zones
* Made it possible to use DerivedField instances to access data
* Added ds.particle_type_counts
* Bug fix and improvement for generating Google Cardboard VR in
StereoSphericalLens
* Made DarkMatterARTDataset more robust in its _is_valid
* Added Earth radius to units
* Deposit hydrogen fields to grid in gizmo frontend
* Switch to index values being int64
* ValidateParameter ensures parameter values are used during field detection
* Switched to using cythonize to manage dependencies in the setup script
* ProfilePlot style changes and refactoring
* Cancel terms with identical LaTeX representations in a LaTeX representation of
a unit
* Only return early from comparison validation if base values are equal
* Enabled particle fields for clump objects
* Added validation checks for data types in callbacks
* Enabled modification of image axis names in coordinate handlers
* Only add OWLS/EAGLE ion fields if they are present
* Ensured that PlotWindow plots continue to look the same under matplotlib 2.0
* Fixed bug in quiver callbacks for off-axis slice plots
* Only visit octree children if going to next level
* Check that CIC always gets at least two cells
* Fixed compatibility with matplotlib 1.4.3 and earlier
* Fixed two EnzoSimulation bugs
* Moved extraction code from YTSearchCmd to its own utility module
* Changed amr_kdtree functions to be Node class methods
* Sort block indices in order of ascending levels to match order of grid patches
* MKS code unit system fixes
* Disabled bounds checking on pixelize_element_mesh
* Updated light_ray.py for domain width != 1
* Implemented a DOAP file generator
* Fixed bugs for 2D and 1D enzo IO
* Converted mutable Dataset attributes to be properties that return copies
* Allowing LightRay segments to extend further than one box length
* Fixed a divide-by-zero error that occasionally happens in
triangle_plane_intersect
* Make sure we have an index in subclassed derived quantities
* Added an initial draft of an extensions document
* Made it possible to pass field tuples to command-line plotting
* Ensured the positions of coordinate vector lines are in code units
* Added a minus sign to definition of sz_kinetic field
* Added grid_levels and grid_indices fields to octrees
* Added a morton_index derived field
* Added Exception to AMRKDTree in the case of particle or oct-based data
Version 3.2
-----------
Major enhancements
^^^^^^^^^^^^^^^^^^
* Particle-Only Plots - a series of new plotting functions for visualizing
particle data. See here for more information.
* Late-stage beta support for Python 3 - unit tests and answer tests pass for
all the major frontends under python 3.4, and yt should now be mostly if not
fully usable. Because many of the yt developers are still on Python 2 at
this point, this should be considered a "late stage beta" as there may be
remaining issues yet to be identified or worked out.
* Now supporting Gadget Friend-of-Friends/Subfind catalogs - see here to learn
how to load halo catalogs as regular yt datasets.
* Custom colormaps can now be easily defined and added - see here to learn how!
* Now supporting Fargo3D data
* Performance improvements throughout the code base for memory and speed
Minor enhancements
^^^^^^^^^^^^^^^^^^
* Various updates to the following frontends: ART, Athena, Castro, Chombo,
Gadget, GDF, Maestro, Pluto, RAMSES, Rockstar, SDF, Tipsy
* Numerous documentation updates
* Generic hexahedral mesh pixelizer
* Adding annotate_ray() callback for plots
* AbsorptionSpectrum returned to full functionality and now using faster SciPy
Voigt profile
* Add a color_field argument to annotate_streamline
* Smoothing lengths auto-calculated for Tipsy Datasets
* Adding SimulationTimeSeries support for Gadget and OWLS.
* Generalizing derived quantity outputs to all be YTArrays or lists of
YTArrays as appropriate
* Star analysis returned to full functionality
* FITS image writing refactor
* Adding gradient fields on the fly
* Adding support for Gadget Nx4 metallicity fields
* Updating value of solar metal mass fraction to be consistent with Cloudy.
* Gadget raw binary snapshot handling & non-cosmological simulation units
* Adding support for LightRay class to work with Gadget+Tipsy
* Add support for subclasses of frontends
* Dependencies updated
* Serialization for projections using minimal representation
* Adding Grid visitors in Cython
* Improved semantics for derived field units
* Add a yaw() method for the PerspectiveCamera + switch back to LHS
* Adding annotate_clear() function to remove previous callbacks from a plot
* Added documentation for hexahedral mesh on website
* Speed up nearest neighbor evaluation
* Add a convenience method to create deposited particle fields
* UI and docs updates for 3D streamlines
* Ensure particle fields are tested in the field unit tests
* Allow a suffix to be specified to save()
* Add profiling using airspeed velocity
* Various plotting enhancements and bugfixes
* Use hglib to update
* Various minor updates to halo_analysis toolkit
* Docker-based tests for install_script.sh
* Adding support for single and non-cosmological datasets to LightRay
* Adding the Pascal unit
* Add weight_field to PPVCube
* FITS reader: allow HDU in auxiliary
* Fixing electromagnetic units
* Specific Angular Momentum [xyz] computed relative to a normal vector
Bugfixes
^^^^^^^^
* Adding ability to create union fields from alias fields
* Small fix to allow enzo AP datasets to load in parallel when no APs present
* Use proper cell dimension in gradient function.
* Minor memory optimization for smoothed particle fields
* Fix thermal_energy for Enzo HydroMethod==6
* Make sure annotate_particles handles unitful widths properly
* Improvements for add_particle_filter and particle_filter
* Specify registry in off_axis_projection's image finalization
* Apply fix for particle momentum units to the boxlib frontend
* Avoid traceback in "yt version" when python-hglib is not installed
* Expose no_ghost from export_sketchfab down to _extract_isocontours_from_grid
* Fix broken magnetic_unit attribute
* Fixing an off-by-one error in the set x/y lim methods for profile plots
* Providing better error messages to PlotWindow callbacks
* Updating annotate_timestamp to avoid auto-override
* Updating callbacks to consistently define coordinate system
* Fixing species fields for OWLS and tipsy
* Fix extrapolation for vertex-centered data
* Fix periodicity check in FRBs
* Rewrote project_to_plane() in PerspectiveCamera for draw_domain()
* Fix intermittent failure in test_add_deposited_particle_field
* Improve minorticks for a symlog plot with one-sided data
* Fix smoothed covering grid cell computation
* Absorption spectrum generator now 3.0 compliant
* Fix off-by-one-or-more in particle smallest dx
* Fix dimensionality mismatch error in covering grid
* Fix curvature term in cosmology calculator
* Fix geographic axes and pixelization
* Ensure axes aspect ratios respect the user-selected plot aspect ratio
* Avoid clobbering field_map when calling profile.add_fields
* Fixing the arbitrary grid deposit code
* Fix spherical plotting centering
* Make the behavior of to_frb consistent with the docstring
* Ensure projected units are initialized when there are no chunks.
* Removing "field already exists" warnings from the Owls and Gadget frontends
* Various photon simulator bugs
* Fixed use of LaTeX math mode
* Fix upload_image
* Enforce plot width in CSS when displayed in a notebook
* Fix cStringIO.StringIO -> cStringIO in png_writer
* Add some input sanitizing and error checking to covering_grid initializer
* Fix for geographic plotting
* Use the correct filename template for single-file OWLS datasets.
* Fix Enzo IO performance for 32 bit datasets
* Adding a number density field for Enzo MultiSpecies=0 datasets.
* Fix RAMSES block ordering
* Updating ragged array tests for NumPy 1.9.1
* Force returning lists for HDF5FileHandler
Version 3.1
-----------
This is a scheduled feature release. Below are the itemized, aggregate changes
since version 3.0.
Major changes:
^^^^^^^^^^^^^^
* The RADMC-3D export analysis module has been updated. `PR 1358 <https://bitbucket.org/yt_analysis/yt/pull-requests/1358>`_, `PR 1332 <https://bitbucket.org/yt_analysis/yt/pull-requests/1332>`_.
* Performance improvements for grid frontends. `PR 1350 <https://bitbucket.org/yt_analysis/yt/pull-requests/1350>`_. `PR 1382 <https://bitbucket.org/yt_analysis/yt/pull-requests/1382>`_, `PR 1322 <https://bitbucket.org/yt_analysis/yt/pull-requests/1322>`_.
* Added a frontend for Dark Matter-only NMSU Art simulations. `PR 1258 <https://bitbucket.org/yt_analysis/yt/pull-requests/1258>`_.
* The absorption spectrum generator has been updated. `PR 1356 <https://bitbucket.org/yt_analysis/yt/pull-requests/1356>`_.
* The PerspectiveCamera has been updated and a new SphericalCamera has been
added. `PR 1346 <https://bitbucket.org/yt_analysis/yt/pull-requests/1346>`_, `PR 1299 <https://bitbucket.org/yt_analysis/yt/pull-requests/1299>`_.
* The unit system now supports unit equivalencies and has improved support for MKS units. See :ref:`unit_equivalencies`. `PR 1291 <https://bitbucket.org/yt_analysis/yt/pull-requests/1291>`_, `PR 1286 <https://bitbucket.org/yt_analysis/yt/pull-requests/1286>`_.
* Data object selection can now be chained, allowing selecting based on multiple constraints. `PR 1264 <https://bitbucket.org/yt_analysis/yt/pull-requests/1264>`_.
* Added the ability to manually override the simulation unit system. `PR 1236 <https://bitbucket.org/yt_analysis/yt/pull-requests/1236>`_.
* The documentation has been reorganized and has seen substantial improvements. `PR 1383 <https://bitbucket.org/yt_analysis/yt/pull-requests/1383>`_, `PR 1373 <https://bitbucket.org/yt_analysis/yt/pull-requests/1373>`_, `PR 1364 <https://bitbucket.org/yt_analysis/yt/pull-requests/1364>`_, `PR 1351 <https://bitbucket.org/yt_analysis/yt/pull-requests/1351>`_, `PR 1345 <https://bitbucket.org/yt_analysis/yt/pull-requests/1345>`_. `PR 1333 <https://bitbucket.org/yt_analysis/yt/pull-requests/1333>`_, `PR 1342 <https://bitbucket.org/yt_analysis/yt/pull-requests/1342>`_, `PR 1338 <https://bitbucket.org/yt_analysis/yt/pull-requests/1338>`_, `PR 1330 <https://bitbucket.org/yt_analysis/yt/pull-requests/1330>`_, `PR 1326 <https://bitbucket.org/yt_analysis/yt/pull-requests/1326>`_, `PR 1323 <https://bitbucket.org/yt_analysis/yt/pull-requests/1323>`_, `PR 1315 <https://bitbucket.org/yt_analysis/yt/pull-requests/1315>`_, `PR 1305 <https://bitbucket.org/yt_analysis/yt/pull-requests/1305>`_, `PR 1289 <https://bitbucket.org/yt_analysis/yt/pull-requests/1289>`_, `PR 1276 <https://bitbucket.org/yt_analysis/yt/pull-requests/1276>`_.
Minor or bugfix changes:
^^^^^^^^^^^^^^^^^^^^^^^^
* The Ampere unit now accepts SI prefixes. `PR 1393 <https://bitbucket.org/yt_analysis/yt/pull-requests/1393>`_.
* The Gadget InternalEnergy and StarFormationRate fields are now read in with the correct units. `PR 1392 <https://bitbucket.org/yt_analysis/yt/pull-requests/1392>`_, `PR 1379 <https://bitbucket.org/yt_analysis/yt/pull-requests/1379>`_.
* Substantial improvements for the PPVCube analysis module and support for FITS dataset. `PR 1390 <https://bitbucket.org/yt_analysis/yt/pull-requests/1390>`_, `PR 1367 <https://bitbucket.org/yt_analysis/yt/pull-requests/1367>`_, `PR 1347 <https://bitbucket.org/yt_analysis/yt/pull-requests/1347>`_, `PR 1326 <https://bitbucket.org/yt_analysis/yt/pull-requests/1326>`_, `PR 1280 <https://bitbucket.org/yt_analysis/yt/pull-requests/1280>`_, `PR 1336 <https://bitbucket.org/yt_analysis/yt/pull-requests/1336>`_.
* The center of a PlotWindow plot can now be set to the maximum or minimum of any field. `PR 1280 <https://bitbucket.org/yt_analysis/yt/pull-requests/1280>`_.
* Fixes for yt testing infrastructure. `PR 1388 <https://bitbucket.org/yt_analysis/yt/pull-requests/1388>`_, `PR 1348 <https://bitbucket.org/yt_analysis/yt/pull-requests/1348>`_.
* Projections are now performed using an explicit path length field for all
coordinate systems. `PR 1307 <https://bitbucket.org/yt_analysis/yt/pull-requests/1307>`_.
* An example notebook for simulations using the OWLS data format has been added
to the documentation. `PR 1386 <https://bitbucket.org/yt_analysis/yt/pull-requests/1386>`_.
* Fix for the camera.draw_line function. `PR 1380 <https://bitbucket.org/yt_analysis/yt/pull-requests/1380>`_.
* Minor fixes and improvements for yt plots. `PR 1376 <https://bitbucket.org/yt_analysis/yt/pull-requests/1376>`_, `PR 1374 <https://bitbucket.org/yt_analysis/yt/pull-requests/1374>`_, `PR 1288 <https://bitbucket.org/yt_analysis/yt/pull-requests/1288>`_, `PR 1290 <https://bitbucket.org/yt_analysis/yt/pull-requests/1290>`_.
* Significant documentation reorganization and improvement. `PR 1375 <https://bitbucket.org/yt_analysis/yt/pull-requests/1375>`_, `PR 1359 <https://bitbucket.org/yt_analysis/yt/pull-requests/1359>`_.
* Fixed a conflict in the CFITSIO library used by the x-ray analysis module. `PR 1365 <https://bitbucket.org/yt_analysis/yt/pull-requests/1365>`_.
* Miscellaneous code cleanup. `PR 1371 <https://bitbucket.org/yt_analysis/yt/pull-requests/1371>`_, `PR 1361 <https://bitbucket.org/yt_analysis/yt/pull-requests/1361>`_.
* yt now hooks up to the python logging infrastructure in a more standard
  fashion, avoiding issues with yt logging showing up when using other
libraries. `PR 1355 <https://bitbucket.org/yt_analysis/yt/pull-requests/1355>`_, `PR 1362 <https://bitbucket.org/yt_analysis/yt/pull-requests/1362>`_, `PR 1360 <https://bitbucket.org/yt_analysis/yt/pull-requests/1360>`_.
* The docstring for the projection data object has been corrected. `PR 1366 <https://bitbucket.org/yt_analysis/yt/pull-requests/1366>`_
* A bug in the calculation of the plot bounds for off-axis slice plots has been fixed. `PR 1357 <https://bitbucket.org/yt_analysis/yt/pull-requests/1357>`_.
* Improvements for the yt-rockstar interface. `PR 1352 <https://bitbucket.org/yt_analysis/yt/pull-requests/1352>`_, `PR 1317 <https://bitbucket.org/yt_analysis/yt/pull-requests/1317>`_.
* Fix issues with plot positioning with saving to postscript or encapsulated postscript. `PR 1353 <https://bitbucket.org/yt_analysis/yt/pull-requests/1353>`_.
* It is now possible to supply a default value for get_field_parameter. `PR 1343 <https://bitbucket.org/yt_analysis/yt/pull-requests/1343>`_.
* A bug in the interpretation of the units of RAMSES simulations has been fixed. `PR 1335 <https://bitbucket.org/yt_analysis/yt/pull-requests/1335>`_.
* Plot callbacks are now only executed once before the plot is saved. `PR 1328 <https://bitbucket.org/yt_analysis/yt/pull-requests/1328>`_.
* Performance improvements for smoothed covering grid alias fields. `PR 1331 <https://bitbucket.org/yt_analysis/yt/pull-requests/1331>`_.
* Improvements and bugfixes for the halo analysis framework. `PR 1349 <https://bitbucket.org/yt_analysis/yt/pull-requests/1349>`_, `PR 1325 <https://bitbucket.org/yt_analysis/yt/pull-requests/1325>`_.
* Fix issues with the default setting for the ``center`` field parameter. `PR 1327 <https://bitbucket.org/yt_analysis/yt/pull-requests/1327>`_.
* Avoid triggering warnings in numpy and matplotlib. `PR 1334 <https://bitbucket.org/yt_analysis/yt/pull-requests/1334>`_, `PR 1300 <https://bitbucket.org/yt_analysis/yt/pull-requests/1300>`_.
* Updates for the field list reference. `PR 1344 <https://bitbucket.org/yt_analysis/yt/pull-requests/1344>`_, `PR 1321 <https://bitbucket.org/yt_analysis/yt/pull-requests/1321>`_, `PR 1318 <https://bitbucket.org/yt_analysis/yt/pull-requests/1318>`_.
* yt can now be run in parallel on a subset of available processors using an MPI subcommunicator. `PR 1340 <https://bitbucket.org/yt_analysis/yt/pull-requests/1340>`_
* Fix for incorrect units when loading an Athena simulation as a time series. `PR 1341 <https://bitbucket.org/yt_analysis/yt/pull-requests/1341>`_.
* Improved support for Enzo 3.0 simulations that have not produced any active particles. `PR 1329 <https://bitbucket.org/yt_analysis/yt/pull-requests/1329>`_.
* Fix for parsing OWLS outputs with periods in the file path. `PR 1320 <https://bitbucket.org/yt_analysis/yt/pull-requests/1320>`_.
* Fix for periodic radius vector calculation. `PR 1311 <https://bitbucket.org/yt_analysis/yt/pull-requests/1311>`_.
* Improvements for the Maestro and Castro frontends. `PR 1319 <https://bitbucket.org/yt_analysis/yt/pull-requests/1319>`_.
* Clump finding is now supported for more generic types of data. `PR 1314 <https://bitbucket.org/yt_analysis/yt/pull-requests/1314>`_
* Fix unit consistency issue when mixing dimensionless unit symbols. `PR 1300 <https://bitbucket.org/yt_analysis/yt/pull-requests/1300>`_.
* Improved memory footprint in the photon_simulator. `PR 1304 <https://bitbucket.org/yt_analysis/yt/pull-requests/1304>`_.
* Large grids in Athena datasets produced by the join_vtk script can now be optionally split, improving parallel performance. `PR 1304 <https://bitbucket.org/yt_analysis/yt/pull-requests/1304>`_.
* Slice plots now accept a ``data_source`` keyword argument. `PR 1310 <https://bitbucket.org/yt_analysis/yt/pull-requests/1310>`_.
* Corrected inconsistent octrees in the RAMSES frontend. `PR 1302 <https://bitbucket.org/yt_analysis/yt/pull-requests/1302>`_
* Nearest neighbor distance field added. `PR 1138 <https://bitbucket.org/yt_analysis/yt/pull-requests/1138>`_.
* Improvements for the ORION2 frontend. `PR 1303 <https://bitbucket.org/yt_analysis/yt/pull-requests/1303>`_
* Enzo 3.0 frontend can now read active particle attributes that are arrays of any shape. `PR 1248 <https://bitbucket.org/yt_analysis/yt/pull-requests/1248>`_.
* Answer tests added for halo finders. `PR 1253 <https://bitbucket.org/yt_analysis/yt/pull-requests/1253>`_
* A ``setup_function`` has been added to the LightRay initializer. `PR 1295 <https://bitbucket.org/yt_analysis/yt/pull-requests/1295>`_.
* The SPH code frontends have been reorganized into separate frontend directories. `PR 1281 <https://bitbucket.org/yt_analysis/yt/pull-requests/1281>`_.
* Fixes for accessing deposit fields for FLASH data. `PR 1294 <https://bitbucket.org/yt_analysis/yt/pull-requests/1294>`_
* Added tests for ORION datasets containing sink and star particles. `PR 1252 <https://bitbucket.org/yt_analysis/yt/pull-requests/1252>`_
* Fix for field names in the particle generator. `PR 1278 <https://bitbucket.org/yt_analysis/yt/pull-requests/1278>`_.
* Added wrapper functions for numpy array manipulation functions. `PR 1287 <https://bitbucket.org/yt_analysis/yt/pull-requests/1287>`_.
* Added support for packed HDF5 Enzo datasets. `PR 1282 <https://bitbucket.org/yt_analysis/yt/pull-requests/1282>`_.
Version 3.0
-----------
This release of yt features an entirely rewritten infrastructure for
data ingestion, indexing, and representation. While past versions of
yt were focused on analysis and visualization of data structured as
regular grids, this release features full support for particle
(discrete point) data such as N-body and SPH data, irregular
hexahedral mesh data, and data organized via octrees. This
infrastructure will be extended in future versions for high-fidelity
representation of unstructured mesh datasets.
Highlighted changes in yt 3.0:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Units now permeate the code base, enabling self-consistent unit
transformations of all arrays and quantities returned by yt.
* Particle data is now supported using a lightweight octree. SPH
data can be smoothed onto an adaptively-defined mesh using standard
SPH smoothing
* Support for octree AMR codes
* Preliminary Support for non-Cartesian data, such as cylindrical,
spherical, and geographical
* Revamped analysis framework for halos and halo catalogs, including
direct ingestion and analysis of halo catalogs of several different
formats
* Support for multi-fluid datasets and datasets containing multiple
particle types
* Flexible support for dynamically defining new particle types using
filters on existing particle types or by combining different particle
types.
* Vastly improved support for loading generic grid, AMR, hexahedral
  mesh, and particle data without hand-coding a frontend for a particular
data format.
* New frontends for ART, ARTIO, Boxlib, Chombo, FITS, GDF, Subfind,
Rockstar, Pluto, RAMSES, SDF, Gadget, OWLS, PyNE, Tipsy, as well as
rewritten frontends for Enzo, FLASH, Athena, and generic data.
* First release to support installation of yt on Windows
* Extended capabilities for construction of simulated observations,
and new facilities for analyzing and visualizing FITS images and cube
data
* Many performance improvements
This release is the first of several; while most functionality from
the previous generation of yt has been updated to work with yt 3.0, it
does not yet have feature parity in all respects. While the core of
yt is stable, we suggest the support for analysis modules and volume
rendering be viewed as a late-stage beta, with a series of additional
releases (3.1, 3.2, etc) appearing over the course of the next year to
improve support in these areas.
For a description of how to bring your 2.x scripts up to date to 3.0,
and a summary of common gotchas in this transition, please see
:ref:`yt3differences`.
Version 2.6
-----------
This is a scheduled release, bringing to a close the development in the 2.x
series. Below are the itemized, aggregate changes since version 2.5.
Major changes:
^^^^^^^^^^^^^^
* yt is now licensed under the 3-clause BSD license.
* HEALPix has been removed for the time being, as a result of licensing
incompatibility.
* The addition of a frontend for the Pluto code
* The addition of an OBJ exporter to enable transparent and multi-surface
exports of surfaces to Blender and Sketchfab
* New absorption spectrum analysis module with documentation
* Adding ability to draw lines with Grey Opacity in volume rendering
* Updated physical constants to reflect 2010 CODATA data
* Dependency updates (including IPython 1.0)
* Better notebook support for yt plots
* Considerably (10x+) faster kD-tree building for volume rendering
* yt can now export to RADMC3D
* Athena frontend now supports Static Mesh Refinement and units (
http://hub.yt-project.org/nb/7l1zua )
* Fix long-standing bug for plotting arrays with range of zero
* Adding option to have interpolation based on non-uniform bins in
interpolator code
* Upgrades to most of the dependencies in the install script
* ProjectionPlot now accepts a data_source keyword argument
Minor or bugfix changes:
^^^^^^^^^^^^^^^^^^^^^^^^
* Fix for volume rendering on the command line
* map_to_colormap will no longer return out-of-bounds errors
* Fixes for dds in covering grid calculations
* Library searching for build process is now more reliable
* Unit fix for "VorticityGrowthTimescale" field
* Pyflakes stylistic fixes
* Number density added to FLASH
* Many fixes for Athena frontend
* Radius and ParticleRadius now work for reduced-dimensionality datasets
* Source distributions now work again!
* Athena data now 64 bits everywhere
* Grid displays on plots are now shaded to reflect the level of refinement
* show_colormaps() is a new function for displaying all known colormaps
* PhasePlotter by default now adds a colormap.
* System build fix for POSIX systems
* Fixing domain offsets for halo centers-of-mass
* Removing some Enzo-specific terminology in the Halo Mass Function
* Addition of coordinate vectors on volume render
* Pickling fix for extracted regions
* Addition of some tracer particle annotation functions
* Better error message for "yt" command
* Fix for radial vs poloidal fields
* Piernik 2D data handling fix
* Fixes for FLASH current redshift
* PlotWindows now have a set_font function and a new default font setting
* Colorbars less likely to extend off the edge of a PlotWindow
* Clumps overplotted on PlotWindows are now correctly contoured
* Many fixes to light ray and profiles for integrated cosmological analysis
* Improvements to OpenMP compilation
* Typo in value for km_per_pc (not used elsewhere in the code base) has been
fixed
* Enable parallel IPython notebook sessions (
http://hub.yt-project.org/nb/qgn19h )
* Change (~1e-6) to particle_density deposition, enabling it to be used by
FLASH and other frontends
* Addition of is_root function for convenience in parallel analysis sessions
* Additions to Orion particle reader
* Fixing TotalMass for case when particles not present
* Fixing the density threshold of HOP and pHOP to match the merger tree
* Reason can now plot with latest plot window
* Issues with VelocityMagnitude and aliases with velo have been corrected in
the FLASH frontend
* Halo radii are calculated correctly for domains that do not start at 0,0,0.
* Halo mass function now works for non-Enzo frontends.
* Bug fixes for directory creation, typos in docstrings
* Speed improvements to ellipsoidal particle detection
* Updates to FLASH fields
* CASTRO frontend bug fixes
* Fisheye camera bug fixes
* Answer testing now includes plot window answer testing
* Athena data serialization
* load_uniform_grid can now decompose dims >= 1024. (#537)
* Axis unit setting works correctly for unit names (#534)
* ThermalEnergy is now calculated correctly for Enzo MHD simulations (#535)
* Radius fields had an asymmetry in periodicity calculation (#531)
* Boolean regions can now be pickled (#517)
Version 2.5
-----------
Many below-the-surface changes happened in yt 2.5 to improve reliability,
improve the fidelity of the answers, and streamline the user interface. The major change in
this release has been the immense expansion in testing of yt. We now have over
2000 unit tests (run on every commit, thanks to both Kacper Kowalik and Shining
Panda) as well as answer testing for FLASH, Enzo, Chombo and Orion data.
The Stream frontend, which can construct datasets in memory, has been improved
considerably. It's now easier than ever to load data from disk. If you know
how to get volumetric data into Python, you can use either the
``load_uniform_grid`` function or the ``load_amr_grids`` function to create an
in-memory dataset that yt can analyze.
yt now supports the Athena code.
yt is now focusing on providing first class support for the IPython notebook.
In this release, plots can be displayed inline. The Reason HTML5 GUI will be
merged with the IPython notebook in a future release.
Install Script Changes:
^^^^^^^^^^^^^^^^^^^^^^^
* SciPy can now be installed
* Rockstar can now be installed
* Dependencies can be updated with "yt update --all"
* Cython has been upgraded to 0.17.1
* Python has been upgraded to 2.7.3
* h5py has been upgraded to 2.1.0
* hdf5 has been upgraded to 1.8.9
* matplotlib has been upgraded to 1.2.0
* IPython has been upgraded to 0.13.1
* Forthon has been upgraded to 0.8.10
* nose has been added
* sympy has been added
* python-hglib has been added
We've also improved support for installing on OSX, Ubuntu and OpenSUSE.
Most Visible Improvements
^^^^^^^^^^^^^^^^^^^^^^^^^
* Nearly 200 pull requests and over 1000 changesets have been merged since yt
  2.4 was released on August 2nd, 2012.
* numpy is now imported as np, not na. na will continue to work for the
foreseeable future.
* You can now get a `yt cheat sheet <http://yt-project.org/docs/2.5/cheatsheet.pdf>`_!
* yt can now load simulation data created by Athena.
* The Rockstar halo finder can now be installed by the install script
* SciPy can now be installed by the install script
* Data can now be written out in two ways:
* Sidecar files containing expensive derived fields can be written and
implicitly loaded from.
* GDF files, which are portable yt-specific representations of full
simulations, can be created from any dataset. Work is underway on
a pure C library that can be linked against to load these files into
simulations.
* The "Stream" frontend, for loading raw data in memory, has been greatly
expanded and now includes initial conditions generation functionality,
particle fields, and simple loading of AMR grids with ``load_amr_grids``.
* Spherical and Cylindrical fields have been sped up and made to have a
uniform interface. These fields can be the building blocks of more advanced
fields.
* Coordinate transformations have been sped up and streamlined. It is now
possible to convert any scalar or vector field to a new cartesian, spherical,
or cylindrical coordinate system with an arbitrary orientation. This makes it
possible to do novel analyses like profiling the toroidal and poloidal
velocity as a function of radius in an inclined disk.
* Many improvements to the EnzoSimulation class, which can now find many
different types of data.
* Image data is now encapsulated in an ImageArray class, which carries with it
provenance information about its trajectory through yt.
* Streamlines now query at every step along the streamline, not just at every
cell.
* Surfaces can now be extracted and examined, as well as uploaded to
Sketchfab.com for interactive visualization in a web browser.
* allsky_projection can now accept a datasource, making it easier to cut out
regions to examine.
* Many, many improvements to PlotWindow. If you're still using
PlotCollection, check out ``ProjectionPlot``, ``SlicePlot``,
``OffAxisProjectionPlot`` and ``OffAxisSlicePlot``.
* PlotWindow can now accept a timeseries instead of a dataset.
* Many fixes for 1D and 2D data, especially in FLASH datasets.
* Vast improvements to the particle file handling for FLASH datasets.
* Particles can now be created ex nihilo with CICSample_3.
* Rockstar halo finding is now a targeted goal. Support for using Rockstar
has improved dramatically.
* Increased support for tracking halos across time using the FOF halo finder.
* The command ``yt notebook`` has been added to spawn an IPython notebook
server, and the ``yt.imods`` module can replace ``yt.mods`` in the IPython
Notebook to enable better integration.
* Metallicity-dependent X-ray fields have now been added.
* Grid lines can now be added to volume renderings.
* Volume rendering backend has been updated to use an alpha channel, fixing
parallel opaque volume renderings. This also enables easier blending of
multiple images and annotations to the rendering. Users are encouraged
to look at the capabilities of the ``ImageArray`` for writing out renders,
as updated in the cookbook examples. Volume renders can now be saved with
an arbitrary background color.
* Periodicity, or alternately non-periodicity, is now a part of radius
calculations.
* The AMRKDTree has been rewritten. This allows parallelism with other than
power-of-2 MPI processes, arbitrary sets of grids, and splitting of
unigrids.
* Fixed Resolution Buffers and volume rendering images now utilize a new
ImageArray class that stores information such as data source, field names,
and other information in a .info dictionary. See the ``ImageArray``
docstrings for more information on how they can be used to save to a bitmap
or hdf5 file.
Version 2.4
-----------
The 2.4 release was particularly large, encompassing nearly a thousand
changesets and a number of new features.
To help you get up to speed, we've made an IPython notebook file demonstrating
a few of the changes to the scripting API. You can
`download it here <http://yt-project.org/files/yt24.ipynb>`_.
Most Visible Improvements
^^^^^^^^^^^^^^^^^^^^^^^^^
* Threaded volume renderer, completely refactored from the ground up for
speed and parallelism.
* The Plot Window (see :ref:`simple-inspection`) is now fully functional! No
more PlotCollections, and full, easy access to Matplotlib axes objects.
* Many improvements to Time Series analysis:
* EnzoSimulation now integrates with TimeSeries analysis!
* Auto-parallelization of analysis and parallel iteration
* Memory usage when iterating over datasets reduced substantially
* Many improvements to Reason, the yt GUI
* Addition of "yt reason" as a startup command
* Keyboard shortcuts in projection & slice mode: z, Z, x, X for zooms,
hjkl, HJKL for motion
* Drag to move in projection & slice mode
* Contours and vector fields in projection & slice mode
* Color map selection in projection & slice mode
* 3D Scene
* Integration with the all new yt Hub ( http://hub.yt-project.org/ ): upload
variable resolution projections, slices, project information, vertices and
plot collections right from the yt command line!
Other Changes
^^^^^^^^^^^^^
* :class:`~yt.visualization.plot_window.ProjectionPlot` and
:class:`~yt.visualization.plot_window.SlicePlot` supplant the functionality
of PlotCollection.
* Camera path creation from keyframes and splines
* Ellipsoidal data containers and ellipsoidal parameter calculation for halos
* PyX and ZeroMQ now available in the install script
* Consolidation of unit handling
* HDF5 updated to 1.8.7, Mercurial updated to 2.2, IPython updated to 0.12
* Preview of integration with Rockstar halo finder
* Improvements to merger tree speed and memory usage
* Sunrise exporter now compatible with Sunrise 4.0
* Particle trajectory calculator now available!
* Speed and parallel scalability improvements in projections, profiles and HOP
* New Vorticity-related fields
* Vast improvements to the ART frontend
* Many improvements to the FLASH frontend, including full parameter reads,
speedups, and support for more corner cases of FLASH 2, 2.5 and 3 data.
* Integration of the Grid Data Format frontend, and a converter for Athena
data to this format.
* Improvements to command line parsing
* Parallel import improvements on parallel filesystems
(``from yt.pmods import *``)
* proj_style keyword for projections, for Maximum Intensity Projections
(``proj_style = "mip"``)
* Fisheye rendering for planetarium rendering
* Profiles now provide \*_std fields for standard deviation of values
* Generalized Orientation class, providing 6DOF motion control
* parallel_objects iteration now more robust, provides optional barrier.
(Also now being used as underlying iteration mechanism in many internal
routines.)
* Dynamic load balancing in parallel_objects iteration.
* Parallel-aware objects can now be pickled.
* Many new colormaps included
* Numerous improvements to the PyX-based eps_writer module
* FixedResolutionBuffer to FITS export.
* Generic image to FITS export.
* Multi-level parallelism for extremely large cameras in volume rendering
* Light cone and light ray updates to fit with current best practices for
parallelism
Version 2.3
-----------
`(yt 2.3 docs) <http://yt-project.org/docs/2.3>`_
* Multi-level parallelism
* Real, extensive answer tests
* Boolean data regions (see :ref:`boolean_data_objects`)
* Isocontours / flux calculations (see :ref:`extracting-isocontour-information`)
* Field reorganization
* PHOP memory improvements
* Bug fixes for tests
* Parallel data loading for RAMSES, along with other speedups and improvements
there
* WebGL interface for isocontours and a pannable map widget added to Reason
* Performance improvements for volume rendering
* Adaptive HEALPix support
* Column density calculations
* Massive speedup for 1D profiles
* Lots more, bug fixes etc.
* Substantial improvements to the documentation, including
:ref:`manual-plotting` and a revamped orientation.
Version 2.2
-----------
`(yt 2.2 docs) <http://yt-project.org/docs/2.2>`_
* Command-line submission to the yt Hub (http://hub.yt-project.org/)
* Initial release of the web-based GUI Reason, designed for efficient remote
usage over SSH tunnels
* Absorption line spectrum generator for cosmological simulations (see
:ref:`absorption_spectrum`)
* Interoperability with ParaView for volume rendering, slicing, and so forth
* Support for the Nyx code
* An order of magnitude speed improvement in the RAMSES support
* Quad-tree projections, speeding up the process of projecting by up to an
order of magnitude and providing better load balancing
* "mapserver" for in-browser, Google Maps-style slice and projection
visualization (see :ref:`mapserver`)
* Many bug fixes and performance improvements
* Halo loader
Version 2.1
-----------
`(yt 2.1 docs) <http://yt-project.org/docs/2.1>`_
* HEALPix-based volume rendering for 4pi, allsky volume rendering
* libconfig is now included
* SQLite3 and Forthon now included by default in the install script
* Development guide has been lengthened substantially and a development
bootstrap script is now included.
* Installation script now installs Python 2.7 and HDF5 1.8.6
* iyt now tab-completes field names
* Halos can now be stored on-disk much more easily between HaloFinding runs.
* Halos found inline in Enzo can be loaded and merger trees calculated
* Support for CASTRO particles has been added
* Chombo support updated and fixed
* New code contributions
* Contour finder has been sped up by a factor of a few
* Constrained two-point functions are now possible, for LOS power spectra
* Time series analysis (:ref:`time-series-analysis`) now much easier
* Stream Lines now a supported 1D data type
* Stream Lines now able to be calculated and plotted (:ref:`streamlines`)
* In situ Enzo visualization now much faster
* "gui" source directory reorganized and cleaned up
* Cython now a compile-time dependency, reducing the size of source tree
updates substantially
* ``yt-supplemental`` repository now checked out by default, containing
cookbook, documentation, handy mercurial extensions, and advanced plotting
examples and helper scripts.
* Pasteboards now supported and available
* Parallel yt efficiency improved by removal of barriers and improvement of
collective operations
Version 2.0
-----------
* Major reorganization of the codebase for speed, ease of modification, and maintainability
* Re-organization of documentation and addition of Orientation Session
* Support for FLASH code
* Preliminary support for MAESTRO, CASTRO, ART, and RAMSES (contributions welcome!)
* Perspective projection for volume rendering
* Exporting to Sunrise
* Preliminary particle rendering in volume rendering visualization
* Drastically improved parallel volume rendering, via kD-tree decomposition
* Simple merger tree calculation for FOF catalogs
* New and greatly expanded documentation, with a "source" button
Version 1.7
-----------
* Direct writing of PNGs
* Multi-band image writing
* Parallel halo merger tree (see :ref:`merger_tree`)
* Parallel structure function generator (see :ref:`two_point_functions`)
* Image pan and zoom object and display widget.
* Parallel volume rendering (see :ref:`volume_rendering`)
* Multivariate volume rendering, allowing for multiple forms of emission and
absorption, including approximate scattering and Planck emissions. (see
:ref:`volume_rendering`)
* Added Camera interface to volume rendering (See :ref:`volume_rendering`)
* Off-axis projection (See :ref:`volume_rendering`)
* Stereo (toe-in) volume rendering (See :ref:`volume_rendering`)
* DualEPS extension for better EPS construction
* yt now uses Distribute instead of SetupTools
* Better ``iyt`` initialization for GUI support
* Rewritten, memory conservative and speed-improved contour finding algorithm
* Speed improvements to volume rendering
* Preliminary support for the Tiger code
* Default colormap is now ``algae``
* Lightweight projection loading with ``projload``
* Improvements to ``yt.data_objects.time_series``
* Improvements to :class:`yt.extensions.EnzoSimulation` (See
:ref:`analyzing-an-entire-simulation`)
* Removed ``direct_ray_cast``
* Fixed bug causing double data-read in projections
* Added Cylinder support to ParticleIO
* Fixes for 1- and 2-D Enzo datasets
* Preliminary, largely non-functional Gadget support
* Speed improvements to basic HOP
* Added physical constants module
* Beginning to standardize and enforce docstring requirements, changing to
``autosummary``-based API documentation.
Version 1.6.1
-------------
* Critical fixes to ParticleIO
* Halo mass function fixes for comoving coordinates
* Fixes to halo finding
* Fixes to the installation script
* "yt instinfo" command to report current installation information as well as
auto-update some types of installations
* Optimizations to the volume renderer (2x-26x reported speedups)
Version 1.6
-----------
Version 1.6 is a point release, primarily notable for the new parallel halo
finder (see :ref:`halo-analysis`)
* (New) Parallel HOP ( https://arxiv.org/abs/1001.3411 , :ref:`halo-analysis` )
* (Beta) Software ray casting and volume rendering
(see :ref:`volume_rendering`)
* Rewritten, faster and better contouring engine for clump identification
* Spectral Energy Distribution calculation for stellar populations
(see :ref:`synthetic_spectrum`)
* Optimized data structures such as the index
* Star particle analysis routines
(see :ref:`star_analysis`)
* Halo mass function routines
* Completely rewritten, massively faster and more memory efficient Particle IO
* Fixes for plots, including normalized phase plots
* Better collective communication in parallel routines
* Consolidation of optimized C routines into ``amr_utils``
* Many bug fixes and minor optimizations
Version 1.5
-----------
Version 1.5 features many new improvements, most prominently that of the
addition of parallel computing abilities (see :ref:`parallel-computation`) and
generalization for multiple AMR data formats, specifically both Enzo and Orion.
* Rewritten documentation
* Fully parallel slices, projections, cutting planes, profiles,
quantities
* Parallel HOP
* Friends-of-friends halo finder
* Object storage and serialization
* Major performance improvements to the clump finder (factor of five)
* Generalized domain sizes
* Generalized field info containers
* Dark Matter-only simulations
* 1D and 2D simulations
* Better IO for HDF5 sets
* Support for the Orion AMR code
* Spherical re-gridding
* Halo profiler
* Disk image stacker
* Light cone generator
* Callback interface improved
* Several new callbacks
* New data objects -- ortho and non-ortho rays, limited ray-tracing
* Fixed resolution buffers
* Spectral integrator for CLOUDY data
* Substantially better interactive interface
* Performance improvements *everywhere*
* Command-line interface to *many* common tasks
* Isolated plot handling, independent of PlotCollections
Version 1.0
-----------
* Initial release!
Customizing yt: The Configuration and Plugin Files
==================================================
yt can be customized to your personal preferences in several ways: how much
output it displays, loading custom fields, loading custom colormaps, accessing
test datasets regardless of where you are in the file system, and so on.
This customization is done through :ref:`configuration-file` and
:ref:`plugin-file`, both of which exist in your ``$HOME/.config/yt`` directory.
.. _configuration-file:
The Configuration
-----------------
The configuration is stored in simple text files (in the `toml <https://github.com/toml-lang/toml>`_ format).
These files allow you to set internal yt variables to custom default values that will be used in future sessions.
The configuration can either be stored :ref:`globally <global-conf>` or :ref:`locally <local-conf>`.
.. _global-conf:
Global Configuration
^^^^^^^^^^^^^^^^^^^^
If no local configuration file exists, yt will look for and recognize the file
``$HOME/.config/yt/yt.toml`` as a configuration file, containing several options
that can be modified and adjusted to control runtime behavior. For example, a sample
``$HOME/.config/yt/yt.toml`` file could look
like:
.. code-block:: toml
[yt]
log_level = 1
maximum_stored_datasets = 10000
This configuration file would set the logging threshold much lower, enabling
much more voluminous output from yt. Additionally, it increases the number of
datasets tracked between instantiations of yt. The configuration file can be
managed using the ``yt config --global`` helper. It can list, add, modify and remove
options from the configuration file, e.g.:
.. code-block:: none
$ yt config -h
$ yt config list
$ yt config set yt log_level 1
$ yt config rm yt maximum_stored_datasets
.. _local-conf:
Local Configuration
^^^^^^^^^^^^^^^^^^^
yt will look for a file named ``yt.toml`` in the current directory, and upwards
in the file tree until a match is found. If one is found, its options are loaded and any
global configuration is ignored. Local configuration files can contain the same
options as the global one.
Local configuration files can either be edited manually or managed using the
``yt config --local`` helper. It can list, add, modify and remove
options, and display the path to the local configuration file, e.g.:
.. code-block:: none
$ yt config -h
$ yt config list --local
$ yt config set --local yt log_level 1
$ yt config rm --local yt maximum_stored_datasets
$ yt config print-path --local
If no local configuration file is present, these commands will create an (empty) one
in the current working directory.
Configuration Options At Runtime
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In addition to setting parameters in the configuration file itself, you can set
them at runtime.
.. warning:: Several parameters are only accessed when yt starts up: therefore,
if you want to modify any configuration parameters at runtime, you should
execute the appropriate commands at the *very top* of your script!
This involves importing the configuration object and then setting a given
parameter to be equal to a specific string. Note that even for items that
accept integers, floating-point numbers, and other non-string types, you *must* set
them to be a string or else the configuration object will consider them broken.
Here is an example script, where we adjust the logging at startup:
.. code-block:: python
import yt
yt.set_log_level(1)
ds = yt.load("my_data0001")
ds.print_stats()
This has the same effect as setting ``log_level = 1`` in the configuration
file. Note that a log level of 1 means that all log messages are printed to
stdout. To disable logging, set the log level to 50.
.. _config-options:
Available Configuration Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following external parameters are available; a sample configuration file
combining several of them is shown after the list. A number of additional
parameters are used internally.
* ``colored_logs`` (default: ``False``): Should logs be colored?
* ``default_colormap`` (default: ``cmyt.arbre``): What colormap should be used by
default for yt-produced images?
* ``plugin_filename`` (default: ``my_plugins.py``): The name of your plugin file.
* ``log_level`` (default: ``20``): What is the threshold (0 to 50) for
  outputting log messages?
* ``test_data_dir`` (default: ``/does/not/exist``): The default path the
``load()`` function searches for datasets when it cannot find a dataset in the
current directory.
* ``reconstruct_index`` (default: ``True``): If true, grid edges for patch AMR
datasets will be adjusted such that they fall as close as possible to an
integer multiple of the local cell width. If you are working with a dataset
with a large number of grids, setting this to False can speed up loading
your dataset possibly at the cost of grid-aligned artifacts showing up in
slice visualizations.
* ``notebook_password`` (default: empty): If set, this will be fed to the
  IPython notebook created by ``yt notebook``. Note that this should be a
  sha512 hash, not a plaintext password. Starting ``yt notebook`` with no
setting will provide instructions for setting this.
* ``requires_ds_strict`` (default: ``True``): If true, answer tests wrapped
with :func:`~yt.utilities.answer_testing.framework.requires_ds` will raise
:class:`~yt.utilities.exceptions.YTUnidentifiedDataType` rather than consuming
  it if the required dataset is not present.
* ``serialize`` (default: ``False``): If true, perform automatic
:ref:`object serialization <object-serialization>`
* ``sketchfab_api_key`` (default: empty): API key for https://sketchfab.com/ for
uploading AMRSurface objects.
* ``suppress_stream_logging`` (default: ``False``): If true, execution mode will be
quiet.
* ``stdout_stream_logging`` (default: ``False``): If true, logging is directed
to stdout rather than stderr
* ``skip_dataset_cache`` (default: ``False``): If true, automatic caching of datasets
is turned off.
* ``supp_data_dir`` (default: ``/does/not/exist``): The default path certain
submodules of yt look in for supplemental data files.
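
For example, a global configuration file combining several of the options above
might look like the following. The values here are purely illustrative (in
particular, the test data directory is a placeholder path to adjust for your
own system):

.. code-block:: toml

    [yt]
    colored_logs = true
    log_level = 10
    default_colormap = "viridis"
    test_data_dir = "/data/yt_test_data"
    reconstruct_index = false
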
.. _per-field-plotconfig:
Available per-field Plot Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to customize the default behavior of plots using per-field configuration.
The default options for plotting a given field can be specified in the configuration file
in ``[plot.field_type.field_name]`` blocks. The available keys are:
* ``cmap`` (default: ``yt.default_colormap``, see :ref:`config-options`): the colormap to
use for the field.
* ``log`` (default: ``True``): use a log scale (or symlog if ``linthresh`` is also set).
* ``linthresh`` (default: ``None``): if set to a float (instead of the default ``None``) and
  ``log`` is ``True``, use a symlog normalization with the given linear threshold.
* ``units`` (defaults to the units of the field): the units to use to represent the field.
* ``path_length_units`` (default: ``cm``): the unit of the integration length when doing
e.g. projections. This always has the dimensions of a length. Note that this will only
be used if ``units`` is also set for the field. The final units will then be
``units*path_length_units``.
You can also set defaults for all fields of a given field type by omitting the field name,
as illustrated below in the deposit block.
.. code-block:: toml
[plot.gas.density]
cmap = "plasma"
log = true
units = "mp/cm**3"
[plot.gas.velocity_divergence]
cmap = "bwr" # use a diverging colormap
log = false # and a linear scale
[plot.deposit]
path_length_units = "kpc" # use kpc for deposition projections
.. _per-field-config:
Available per-Field Configuration Options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to set attributes for fields that would typically be set by the
frontend source code, such as a field's aliases, the units the field is
expected to be in, and its display name. This allows individuals to
customize what yt expects of a given dataset without modifying the yt source
code. For instance, if your dataset has an on-disk field called
"particle_extra_field_1", you could specify its units, display name, and
aliases with:
.. code-block:: toml
[fields.nbody.particle_extra_field_1]
aliases = ["particle_other_fancy_name", "particle_alternative_fancy_name"]
units = "code_time"
display_name = "Dinosaurs Density"
.. _plugin-file:
Plugin Files
------------
Plugin files are a means of creating custom fields, quantities, data objects,
colormaps, and other executable code (functions or classes) to be used in future
yt sessions without modifying the source code directly.
To enable a plugin file, call the function
:func:`~yt.funcs.enable_plugins` at the top of your script.
Global system plugin file
^^^^^^^^^^^^^^^^^^^^^^^^^
yt will look for and recognize the file ``$HOME/.config/yt/my_plugins.py`` as a
plugin file. It is possible to rename this file to ``$HOME/.config/yt/<plugin_filename>.py``
by defining ``plugin_filename`` in your ``yt.toml`` file, as mentioned above.
.. note::
You can tell that your system plugin file is being parsed by watching for a logging
message when you import yt. Note that the ``yt load`` command line entry point parses
the plugin file.
Local project plugin file
^^^^^^^^^^^^^^^^^^^^^^^^^
Optionally, :func:`~yt.funcs.enable_plugins` can be passed an argument to specify
a custom location for a plugin file. This can be useful to define project-wide customizations.
In that use case, any system-level plugin file will be ignored.
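
A minimal sketch of this usage (the path below is just an example location for
such a file):

.. code-block:: python

    import yt

    # Parse a project-specific plugin file instead of the global one
    yt.enable_plugins("/scratch/runs/my_project_plugins.py")
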
Plugin File Format
^^^^^^^^^^^^^^^^^^
Plugin files should contain pure Python code. When accessing yt functions and
classes in a plugin file, the ``yt.`` prefix is not required, because of how
these files are loaded.
For example, if one created a plugin file containing:
.. code-block:: python
    import numpy as np


    def _myfunc(field, data):
        # Fill the field with random values of the same shape as "density"
        return np.random.random(data["density"].shape)


    add_field(
        "random",
        function=_myfunc,
        sampling_type="cell",
        dimensions="dimensionless",
        units="auto",
    )
then all of my data objects would have access to the field ``random``.
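
As a short sketch of how this would be used (``my_data0001`` stands in for any
dataset you can load), a script could then query the new field like any other:

.. code-block:: python

    import yt

    yt.enable_plugins()  # parses the plugin file, registering the "random" field

    ds = yt.load("my_data0001")
    ad = ds.all_data()
    print(ad["random"])
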
You can also define other convenience functions in your plugin file. For
instance, you could define some variables or functions, and even import common
modules:
.. code-block:: python
    import os

    HOMEDIR = "/home/username/"
    RUNDIR = "/scratch/runs/"


    def load_run(fn):
        # Only load outputs that actually exist in the run directory
        if not os.path.exists(RUNDIR + fn):
            return None
        return load(RUNDIR + fn)
In this case, we've written ``load_run`` to look in a specific directory to see
if it can find an output with the given name. So now we can write scripts that
use this function:
.. code-block:: python
import yt
yt.enable_plugins()
my_run = yt.load_run("hotgasflow/DD0040/DD0040")
And because we have used ``yt.enable_plugins`` we have access to the
``load_run`` function defined in our plugin file.
.. note::
   If your convenience function's name collides with an existing object
   within yt's namespace, it will be ignored.
Note that using the plugins file implies that your script is no longer fully
reproducible. If you share your script with someone else and use some of the
functionality of your plugins file, you will also need to share your plugins
file for someone else to re-run your script properly.
Adding Custom Colormaps
^^^^^^^^^^^^^^^^^^^^^^^
To add custom :ref:`colormaps` to your plugin file, you must use the
:func:`~yt.visualization.color_maps.make_colormap` function to generate a
colormap of your choice and then add it to the plugin file. You can see
an example of this in :ref:`custom-colormaps`. Remember that you don't need
to prefix commands in your plugin file with ``yt.``, but you'll only be
able to access the colormaps when you load the ``yt.mods`` module, not simply
``yt``.
.. _demeshening:
How Particles are Indexed
=========================
With yt-4.0, the method by which particles are indexed changed considerably.
Whereas in previous versions, particles were indexed based on their position in
an octree (the structure of which was determined by particle number density),
in yt-4.0 this system was overhauled to utilize a `bitmap
index <https://en.wikipedia.org/wiki/Bitmap_index>`_ based on a space-filling
curve, using an `enhanced word-aligned
hybrid <https://github.com/lemire/ewahboolarray>`_ boolean array as its
backend.
.. note::
You may see scattered references to "the demeshening" (including in the
filename for this document!). This was a humorous name used in the yt
development process to refer to removing a global (octree) mesh for
particle codes.
By avoiding the use of octrees as a base mesh, yt is able to create *much* more
accurate SPH visualizations. We have a `gallery demonstrating
this <https://matthewturk.github.io/yt4-gallery/>`_ but even in this
side-by-side comparison the differences can be seen quite easily, with the left
image being from the old, octree-based approach and the right image the new,
meshless approach.
.. image:: _images/yt3_p0010_proj_density_None_x_z002.png
:width: 45 %
.. image:: _images/yt4_p0010_proj_density_None_x_z002.png
:width: 45 %
Effectively, what "the demeshening" does is allow yt to treat the particles as
discrete objects (or with an area of influence) and use their positions in a
multi-level index to optimize and minimize the disk operations necessary to
load only those particles it needs.
.. note::
The theory and implementation of yt's bitmap indexing system is described in
some detail in the `yt 4.0 paper <https://yt-project.github.io/yt-4.0-paper/>`_
in the section entitled `Indexing Discrete-Point Datasets <https://yt-project.github.io/yt-4.0-paper/#sec:point_indexing>`_.
In brief, however, what this relies on is two numbers, ``index_order1`` and
``index_order2``. These control the "coarse" and "refined" sets of indices,
and they are supplied to any particle dataset ``load()`` in the form of a tuple
as the argument ``index_order``. By default these are set to 5 and 7,
respectively, but it is entirely likely that a different set of values will
work better for your purposes.
For example, if you were to use the sample Gadget-3 dataset, you could override
the default values and use values of 5 and 5 by specifying this argument to the
``load_sample`` function; this works with ``load`` as well.
.. code-block:: python
ds = yt.load_sample("Gadget3-snap-format2", index_order=(5, 5))
So this is how you *change* the index order, but it doesn't explain precisely
what this "index order" actually is.
Indexing and Why yt Does it
---------------------------
yt is based on the idea that data should be selected and read only when it is
needed. So for instance, if you only want particles or grid cells from a small
region in the center of your dataset, yt wants to avoid any reading of the data
*outside* of that region. Now, in practice, this isn't entirely possible --
particularly with particles, you can't actually tell when something is inside
or outside of a region *until* you read it, because the particle locations are
*stored in the dataset*.
One way to avoid this is to have an index of the data, so that yt can know that
some of the data that is located *here* in space is located *there* in the file
or files on disk. So if you're able to say, I only care about data in "region
A", you can look for those files that contain data within "region A," read
those, and discard the parts of them that are *not* within "region A."
The finer-grained the index, the longer it takes to build that index -- and the
larger that index is, the longer it takes to query it. The cost of having too
*coarse* an index, on the other hand, is that the IO conducted to read a given
region is likely to be *too much*, and more particles will be discarded after
being read, before being "selected" by the data selector (sphere, region, etc).
An important note about all of this is that the index system is not meant to
*replace* the positions stored on disk, but instead to speed up queries of
those positions -- the index is meant to be lossy in representation, and only
provides means of generating IO information. Additionally, the atomic unit
that yt considers when conducting IO or selection queries is called a "chunk"
internally. For situations where the individual *files* are very, very large,
yt will "sub-chunk" these into smaller bits, which by default are set to $64^3$
particles. Whenever indexing is done, it is done at this granular level, with
offsets to individual particle collections stored. For instance, if you had a
(single) file with $1024^3$ particles in it, yt would instead regard this as a
series of $64^3$ particle files, and index each one individually.
Index Order
-----------
The bitmap index system is based on a two-level scheme for assigning positions
in three-dimensional space to integer values. What this means is that each
particle is assigned a "coarse" index, which is global to the full domain of
the collection of particles, and *if necessary* an additional "refined" index
is assigned to the particle, within that coarse index.
The index "order" values refer to the number of entries on a side that each
index system is allowed. For instance, if we allow the particles to be
subdivided into 8 "bins" in each direction, this would correspond to an index
order of 3 (as $2^3 = 8$); correspondingly, an index order of 5 would be 32
bins in each direction, and an index order of 7 would be 128 bins in each
direction. Each particle is then assigned a set of i, j, k values for the bin
value in each dimension, and these i, j, k values are combined into a single
(64-bit) integer according to a space-filling curve.
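
The sketch below illustrates the idea with a simple Morton (Z-order) encoding;
it is only an illustration of the bin-to-integer mapping, not yt's exact
implementation:

.. code-block:: python

    def morton_key(i, j, k, order):
        """Interleave the bits of the (i, j, k) bin indices into one integer key."""
        key = 0
        for bit in range(order):
            key |= ((i >> bit) & 1) << (3 * bit)      # x bit
            key |= ((j >> bit) & 1) << (3 * bit + 1)  # y bit
            key |= ((k >> bit) & 1) << (3 * bit + 2)  # z bit
        return key

    # With an index order of 5 there are 32 bins per dimension, so every particle
    # maps to one of 32**3 possible keys -- one bit each in the coarse bitmap index.
    print(morton_key(3, 12, 30, order=5))
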
The process by which this is done by yt is as follows:
1. For each "chunk" of data -- which may be a file, or a subset of a file in
which particles are contained -- assign each particle to an integer value
according to the space-filling curve and the coarse index order. Set the
"bit" in an array of boolean values that each of these integers correspond
to. Note that this is almost certainly *reductive* -- there will be fewer
bits set than there are particles, which is *by design*.
2. Once all chunks or files have been assigned an array of bits that
correspond to the places where, according to the coarse indexing scheme,
they have particles, identify all those "bits" that have been set by more
than one chunk. All of these bits correspond to locations where more than
one file contains particles -- so if you want to select something from
this region, you'd need to read more than one file.
3. For each "collision" location, apply a *second-order* index, to identify
which sub-regions are touched by more than one file.
At the end of this process, each file will be associated with a single "coarse"
index (which covers the entire domain of the data), as well as a set of
"collision" locations, and in each "collision" location a set of bitarrays that
correspond to that subregion.
When reading data, yt will identify which "coarse" index regions are necessary
to read. If any of those coarse index regions are covered by more than one
file, it will examine the "refined" index for those regions and see if it is able
to subset more efficiently. Because all of these operations can be done with
logical operations, this considerably reduces the amount of data that needs to
be read from disk before expensive selection operations are conducted.
For those situations that involve particles with regions of influence -- such
as smoothed particle hydrodynamics, where particles have associated smoothing
lengths -- these are taken into account when conducting the indexing system.
Efficiency of Index Orders
--------------------------
What this can lead to, however, is situations where (particularly at the edges
of regions populated by SPH particles) the indexing system identifies
collisions, but the relatively small number of particles and correspondingly
large "smoothing lengths" result in a large number of "refined" index values that
need to be set.
Counterintuitively, this actually means that occasionally the "refined" indexing
process can take an inordinately long amount of time for *small* datasets,
rather than large datasets.
In these situations, it is typically sufficient to set the "refined" index order
to be much lower than its default value. For instance, setting the
``index_order`` to (5, 3) means that the full domain will be subdivided into 32
bins in each dimension, and any "collision" zones will be further subdivided
into 8 bins in each dimension (corresponding to an effective 256 bins across
the full domain).
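
For example (the snapshot name below is just a placeholder), this can be passed
directly to ``load``:

.. code-block:: python

    import yt

    ds = yt.load("snapshot_033.hdf5", index_order=(5, 3))
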
If you are experiencing very long index times, this may be a productive
parameter to modify. For instance, if you are seeing very rapid "coarse"
indexing followed by very, very slow "refined" indexing, this likely plays a
part; often this will be most obvious in small-ish (i.e., $256^3$ or smaller)
datasets.
Index Caching
-------------
The index values are cached between instantiations, in a sidecar file named with
the name of the dataset file and the suffix ``.indexII_JJ.ewah``, where ``II``
and ``JJ`` are ``index_order1`` and ``index_order2``. So for instance, if
``index_order`` is set to (5, 7), and you are loading a dataset file named
"snapshot_200.hdf5", after indexing, you will have an index sidecar file named
``snapshot_200.hdf5.index5_7.ewah``. On subsequent loads, this index file will
be reused, rather than re-generated.
By *default* these sidecars are stored next to the dataset itself, in the same
directory. However, the filename scheme (and thus location) can be changed by
supplying an alternate filename to the ``load`` command with the argument
``index_filename``. For instance, if you are accessing data in a read-only
location, you can specify that the index will be cached in a location that is
write-accessible to you.
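
For example (both paths below are placeholders):

.. code-block:: python

    import yt

    ds = yt.load(
        "/read/only/archive/snapshot_200.hdf5",
        index_filename="/scratch/username/snapshot_200.index5_7.ewah",
    )
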
These files contain the *compressed* bitmap index values, along with some
metadata that describes the version of the indexing system they use and so
forth. If the version of the index that yt uses has changed, they will be
regenerated; in general this will not vary very often (and should be much less
frequent than, for instance, yt releases) and yt will provide a message to let
you know it is doing it.
The file size of these cached index files can be difficult to estimate; because
they are based on compressed bitmap arrays, it will depend on the spatial
organization of the particles being indexed, and how co-located they are
according to the space-filling curve. For very small datasets it will be
small, but we do not expect these index files to grow beyond a few hundred
megabytes even in the extreme case of large datasets that have little to no
coherence in their clustering.
.. _api-reference:
API Reference
=============
Plots and the Plotting Interface
--------------------------------
SlicePlot and ProjectionPlot
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
~yt.visualization.plot_window.SlicePlot
~yt.visualization.plot_window.AxisAlignedSlicePlot
~yt.visualization.plot_window.OffAxisSlicePlot
~yt.visualization.plot_window.ProjectionPlot
~yt.visualization.plot_window.AxisAlignedProjectionPlot
~yt.visualization.plot_window.OffAxisProjectionPlot
~yt.visualization.plot_window.WindowPlotMPL
~yt.visualization.plot_window.PlotWindow
~yt.visualization.plot_window.plot_2d
ProfilePlot and PhasePlot
^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
~yt.visualization.profile_plotter.ProfilePlot
~yt.visualization.profile_plotter.PhasePlot
~yt.visualization.profile_plotter.PhasePlotMPL
Particle Plots
^^^^^^^^^^^^^^
.. autosummary::
~yt.visualization.particle_plots.ParticleProjectionPlot
~yt.visualization.particle_plots.ParticlePhasePlot
~yt.visualization.particle_plots.ParticlePlot
Fixed Resolution Pixelization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
~yt.visualization.fixed_resolution.FixedResolutionBuffer
~yt.visualization.fixed_resolution.ParticleImageBuffer
~yt.visualization.fixed_resolution.CylindricalFixedResolutionBuffer
~yt.visualization.fixed_resolution.OffAxisProjectionFixedResolutionBuffer
Writing FITS images
^^^^^^^^^^^^^^^^^^^
.. autosummary::
~yt.visualization.fits_image.FITSImageData
~yt.visualization.fits_image.FITSSlice
~yt.visualization.fits_image.FITSProjection
~yt.visualization.fits_image.FITSOffAxisSlice
~yt.visualization.fits_image.FITSOffAxisProjection
~yt.visualization.fits_image.FITSParticleProjection
Data Sources
------------
.. _physical-object-api:
Physical Objects
^^^^^^^^^^^^^^^^
These are the objects that act as physical selections of data, describing a
region in space. These are not typically addressed directly; see
:ref:`available-objects` for more information.
Base Classes
++++++++++++
These will almost never need to be instantiated on their own.
.. autosummary::
~yt.data_objects.data_containers.YTDataContainer
~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer
~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer0D
~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer1D
~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer2D
~yt.data_objects.selection_objects.data_selection_objects.YTSelectionContainer3D
Selection Objects
+++++++++++++++++
These objects are defined by some selection method or mechanism. Most are
geometric.
.. autosummary::
~yt.data_objects.selection_objects.point.YTPoint
~yt.data_objects.selection_objects.ray.YTOrthoRay
~yt.data_objects.selection_objects.ray.YTRay
~yt.data_objects.selection_objects.slices.YTSlice
~yt.data_objects.selection_objects.slices.YTCuttingPlane
~yt.data_objects.selection_objects.disk.YTDisk
~yt.data_objects.selection_objects.region.YTRegion
~yt.data_objects.selection_objects.object_collection.YTDataCollection
~yt.data_objects.selection_objects.spheroids.YTSphere
~yt.data_objects.selection_objects.spheroids.YTEllipsoid
~yt.data_objects.selection_objects.cut_region.YTCutRegion
~yt.data_objects.index_subobjects.grid_patch.AMRGridPatch
~yt.data_objects.index_subobjects.octree_subset.OctreeSubset
~yt.data_objects.index_subobjects.particle_container.ParticleContainer
~yt.data_objects.index_subobjects.unstructured_mesh.UnstructuredMesh
~yt.data_objects.index_subobjects.unstructured_mesh.SemiStructuredMesh
Construction Objects
++++++++++++++++++++
These objects typically require some effort to build. Often this means
integrating through the simulation in some way, or creating some large or
expensive set of intermediate data.
.. autosummary::
~yt.data_objects.construction_data_containers.YTStreamline
~yt.data_objects.construction_data_containers.YTQuadTreeProj
~yt.data_objects.construction_data_containers.YTCoveringGrid
~yt.data_objects.construction_data_containers.YTArbitraryGrid
~yt.data_objects.construction_data_containers.YTSmoothedCoveringGrid
~yt.data_objects.construction_data_containers.YTSurface
Time Series Objects
^^^^^^^^^^^^^^^^^^^
These are objects that either contain and represent or operate on series of
datasets.
.. autosummary::
~yt.data_objects.time_series.DatasetSeries
~yt.data_objects.time_series.DatasetSeriesObject
~yt.data_objects.time_series.SimulationTimeSeries
~yt.data_objects.time_series.TimeSeriesQuantitiesContainer
~yt.data_objects.time_series.AnalysisTaskProxy
~yt.data_objects.particle_trajectories.ParticleTrajectories
Geometry Handlers
-----------------
These objects generate an "index" into multiresolution data.
.. autosummary::
~yt.geometry.geometry_handler.Index
~yt.geometry.grid_geometry_handler.GridIndex
~yt.geometry.oct_geometry_handler.OctreeIndex
~yt.geometry.particle_geometry_handler.ParticleIndex
~yt.geometry.unstructured_mesh_handler.UnstructuredIndex
Units
-----
yt's symbolic unit handling system is now based on the external library unyt. As a
complement, Dataset objects support the following methods for building arrays and
scalars with physical dimensions; a brief example follows the list below.
.. autosummary::
yt.data_objects.static_output.Dataset.arr
yt.data_objects.static_output.Dataset.quan
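
For example, with any loaded dataset ``ds`` (``load_sample`` is used here only
as a convenient way to get one):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    positions = ds.arr([1.0, 2.0, 3.0], "code_length")  # array in this dataset's code units
    mass = ds.quan(5.0, "Msun")                          # scalar quantity
    print(positions.to("kpc"), mass.to("g"))
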
Frontends
---------
.. autosummary::
AMRVAC
^^^^^^
.. autosummary::
~yt.frontends.amrvac.data_structures.AMRVACGrid
~yt.frontends.amrvac.data_structures.AMRVACHierarchy
~yt.frontends.amrvac.data_structures.AMRVACDataset
~yt.frontends.amrvac.fields.AMRVACFieldInfo
~yt.frontends.amrvac.io.AMRVACIOHandler
~yt.frontends.amrvac.io.read_amrvac_namelist
ARTIO
^^^^^
.. autosummary::
~yt.frontends.artio.data_structures.ARTIOIndex
~yt.frontends.artio.data_structures.ARTIOOctreeSubset
~yt.frontends.artio.data_structures.ARTIORootMeshSubset
~yt.frontends.artio.data_structures.ARTIODataset
~yt.frontends.artio.definitions.ARTIOconstants
~yt.frontends.artio.fields.ARTIOFieldInfo
~yt.frontends.artio.io.IOHandlerARTIO
Athena
^^^^^^
.. autosummary::
~yt.frontends.athena.data_structures.AthenaGrid
~yt.frontends.athena.data_structures.AthenaHierarchy
~yt.frontends.athena.data_structures.AthenaDataset
~yt.frontends.athena.fields.AthenaFieldInfo
~yt.frontends.athena.io.IOHandlerAthena
AMReX/Boxlib
^^^^^^^^^^^^
.. autosummary::
~yt.frontends.boxlib.data_structures.BoxlibGrid
~yt.frontends.boxlib.data_structures.BoxlibHierarchy
~yt.frontends.boxlib.data_structures.BoxlibDataset
~yt.frontends.boxlib.data_structures.CastroDataset
~yt.frontends.boxlib.data_structures.MaestroDataset
~yt.frontends.boxlib.data_structures.NyxHierarchy
~yt.frontends.boxlib.data_structures.NyxDataset
~yt.frontends.boxlib.data_structures.OrionHierarchy
~yt.frontends.boxlib.data_structures.OrionDataset
~yt.frontends.boxlib.fields.BoxlibFieldInfo
~yt.frontends.boxlib.io.IOHandlerBoxlib
~yt.frontends.boxlib.io.IOHandlerOrion
CfRadial
^^^^^^^^
.. autosummary::
~yt.frontends.cf_radial.data_structures.CFRadialGrid
~yt.frontends.cf_radial.data_structures.CFRadialHierarchy
~yt.frontends.cf_radial.data_structures.CFRadialDataset
~yt.frontends.cf_radial.fields.CFRadialFieldInfo
~yt.frontends.cf_radial.io.CFRadialIOHandler
Chombo
^^^^^^
.. autosummary::
~yt.frontends.chombo.data_structures.ChomboGrid
~yt.frontends.chombo.data_structures.ChomboHierarchy
~yt.frontends.chombo.data_structures.ChomboDataset
~yt.frontends.chombo.data_structures.Orion2Hierarchy
~yt.frontends.chombo.data_structures.Orion2Dataset
~yt.frontends.chombo.io.IOHandlerChomboHDF5
~yt.frontends.chombo.io.IOHandlerOrion2HDF5
Enzo
^^^^
.. autosummary::
~yt.frontends.enzo.answer_testing_support.ShockTubeTest
~yt.frontends.enzo.data_structures.EnzoGrid
~yt.frontends.enzo.data_structures.EnzoGridGZ
~yt.frontends.enzo.data_structures.EnzoGridInMemory
~yt.frontends.enzo.data_structures.EnzoHierarchy1D
~yt.frontends.enzo.data_structures.EnzoHierarchy2D
~yt.frontends.enzo.data_structures.EnzoHierarchy
~yt.frontends.enzo.data_structures.EnzoHierarchyInMemory
~yt.frontends.enzo.data_structures.EnzoDatasetInMemory
~yt.frontends.enzo.data_structures.EnzoDataset
~yt.frontends.enzo.fields.EnzoFieldInfo
~yt.frontends.enzo.io.IOHandlerInMemory
~yt.frontends.enzo.io.IOHandlerPacked1D
~yt.frontends.enzo.io.IOHandlerPacked2D
~yt.frontends.enzo.io.IOHandlerPackedHDF5
~yt.frontends.enzo.io.IOHandlerPackedHDF5GhostZones
~yt.frontends.enzo.simulation_handling.EnzoCosmology
~yt.frontends.enzo.simulation_handling.EnzoSimulation
FITS
^^^^
.. autosummary::
~yt.frontends.fits.data_structures.FITSGrid
~yt.frontends.fits.data_structures.FITSHierarchy
~yt.frontends.fits.data_structures.FITSDataset
~yt.frontends.fits.fields.FITSFieldInfo
~yt.frontends.fits.io.IOHandlerFITS
FLASH
^^^^^
.. autosummary::
~yt.frontends.flash.data_structures.FLASHGrid
~yt.frontends.flash.data_structures.FLASHHierarchy
~yt.frontends.flash.data_structures.FLASHDataset
~yt.frontends.flash.fields.FLASHFieldInfo
~yt.frontends.flash.io.IOHandlerFLASH
GDF
^^^
.. autosummary::
~yt.frontends.gdf.data_structures.GDFGrid
~yt.frontends.gdf.data_structures.GDFHierarchy
~yt.frontends.gdf.data_structures.GDFDataset
~yt.frontends.gdf.io.IOHandlerGDFHDF5
Halo Catalogs
^^^^^^^^^^^^^
.. autosummary::
~yt.frontends.ahf.data_structures.AHFHalosDataset
~yt.frontends.ahf.fields.AHFHalosFieldInfo
~yt.frontends.ahf.io.IOHandlerAHFHalos
~yt.frontends.gadget_fof.data_structures.GadgetFOFDataset
~yt.frontends.gadget_fof.data_structures.GadgetFOFHDF5File
~yt.frontends.gadget_fof.data_structures.GadgetFOFHaloDataset
~yt.frontends.gadget_fof.io.IOHandlerGadgetFOFHDF5
~yt.frontends.gadget_fof.io.IOHandlerGadgetFOFHaloHDF5
~yt.frontends.gadget_fof.fields.GadgetFOFFieldInfo
~yt.frontends.gadget_fof.fields.GadgetFOFHaloFieldInfo
~yt.frontends.halo_catalog.data_structures.YTHaloCatalogFile
~yt.frontends.halo_catalog.data_structures.YTHaloCatalogDataset
~yt.frontends.halo_catalog.fields.YTHaloCatalogFieldInfo
~yt.frontends.halo_catalog.io.IOHandlerYTHaloCatalog
~yt.frontends.owls_subfind.data_structures.OWLSSubfindParticleIndex
~yt.frontends.owls_subfind.data_structures.OWLSSubfindHDF5File
~yt.frontends.owls_subfind.data_structures.OWLSSubfindDataset
~yt.frontends.owls_subfind.fields.OWLSSubfindFieldInfo
~yt.frontends.owls_subfind.io.IOHandlerOWLSSubfindHDF5
~yt.frontends.rockstar.data_structures.RockstarBinaryFile
~yt.frontends.rockstar.data_structures.RockstarDataset
~yt.frontends.rockstar.fields.RockstarFieldInfo
~yt.frontends.rockstar.io.IOHandlerRockstarBinary
MOAB
^^^^
.. autosummary::
~yt.frontends.moab.data_structures.MoabHex8Hierarchy
~yt.frontends.moab.data_structures.MoabHex8Mesh
~yt.frontends.moab.data_structures.MoabHex8Dataset
~yt.frontends.moab.data_structures.PyneHex8Mesh
~yt.frontends.moab.data_structures.PyneMeshHex8Hierarchy
~yt.frontends.moab.data_structures.PyneMoabHex8Dataset
~yt.frontends.moab.io.IOHandlerMoabH5MHex8
~yt.frontends.moab.io.IOHandlerMoabPyneHex8
OpenPMD
^^^^^^^
.. autosummary::
~yt.frontends.open_pmd.data_structures.OpenPMDGrid
~yt.frontends.open_pmd.data_structures.OpenPMDHierarchy
~yt.frontends.open_pmd.data_structures.OpenPMDDataset
~yt.frontends.open_pmd.fields.OpenPMDFieldInfo
~yt.frontends.open_pmd.io.IOHandlerOpenPMDHDF5
~yt.frontends.open_pmd.misc.parse_unit_dimension
~yt.frontends.open_pmd.misc.is_const_component
~yt.frontends.open_pmd.misc.get_component
RAMSES
^^^^^^
.. autosummary::
~yt.frontends.ramses.data_structures.RAMSESDomainFile
~yt.frontends.ramses.data_structures.RAMSESDomainSubset
~yt.frontends.ramses.data_structures.RAMSESIndex
~yt.frontends.ramses.data_structures.RAMSESDataset
~yt.frontends.ramses.fields.RAMSESFieldInfo
~yt.frontends.ramses.io.IOHandlerRAMSES
SPH and Particle Codes
^^^^^^^^^^^^^^^^^^^^^^
.. autosummary::
~yt.frontends.gadget.data_structures.GadgetBinaryFile
~yt.frontends.gadget.data_structures.GadgetHDF5Dataset
~yt.frontends.gadget.data_structures.GadgetDataset
~yt.frontends.http_stream.data_structures.HTTPParticleFile
~yt.frontends.http_stream.data_structures.HTTPStreamDataset
~yt.frontends.owls.data_structures.OWLSDataset
~yt.frontends.sph.data_structures.ParticleDataset
~yt.frontends.tipsy.data_structures.TipsyFile
~yt.frontends.tipsy.data_structures.TipsyDataset
~yt.frontends.sph.fields.SPHFieldInfo
~yt.frontends.gadget.io.IOHandlerGadgetBinary
~yt.frontends.gadget.io.IOHandlerGadgetHDF5
~yt.frontends.http_stream.io.IOHandlerHTTPStream
~yt.frontends.owls.io.IOHandlerOWLS
~yt.frontends.tipsy.io.IOHandlerTipsyBinary
Stream
^^^^^^
.. autosummary::
~yt.frontends.stream.data_structures.StreamDictFieldHandler
~yt.frontends.stream.data_structures.StreamGrid
~yt.frontends.stream.data_structures.StreamHandler
~yt.frontends.stream.data_structures.StreamHexahedralHierarchy
~yt.frontends.stream.data_structures.StreamHexahedralMesh
~yt.frontends.stream.data_structures.StreamHexahedralDataset
~yt.frontends.stream.data_structures.StreamHierarchy
~yt.frontends.stream.data_structures.StreamOctreeHandler
~yt.frontends.stream.data_structures.StreamOctreeDataset
~yt.frontends.stream.data_structures.StreamOctreeSubset
~yt.frontends.stream.data_structures.StreamParticleFile
~yt.frontends.stream.data_structures.StreamParticleIndex
~yt.frontends.stream.data_structures.StreamParticlesDataset
~yt.frontends.stream.data_structures.StreamDataset
~yt.frontends.stream.fields.StreamFieldInfo
~yt.frontends.stream.io.IOHandlerStream
~yt.frontends.stream.io.IOHandlerStreamHexahedral
~yt.frontends.stream.io.IOHandlerStreamOctree
~yt.frontends.stream.io.StreamParticleIOHandler
ytdata
^^^^^^
.. autosummary::
~yt.frontends.ytdata.data_structures.YTDataContainerDataset
~yt.frontends.ytdata.data_structures.YTSpatialPlotDataset
~yt.frontends.ytdata.data_structures.YTGridDataset
~yt.frontends.ytdata.data_structures.YTGridHierarchy
~yt.frontends.ytdata.data_structures.YTGrid
~yt.frontends.ytdata.data_structures.YTNonspatialDataset
~yt.frontends.ytdata.data_structures.YTNonspatialHierarchy
~yt.frontends.ytdata.data_structures.YTNonspatialGrid
~yt.frontends.ytdata.data_structures.YTProfileDataset
~yt.frontends.ytdata.data_structures.YTClumpTreeDataset
~yt.frontends.ytdata.data_structures.YTClumpContainer
~yt.frontends.ytdata.fields.YTDataContainerFieldInfo
~yt.frontends.ytdata.fields.YTGridFieldInfo
~yt.frontends.ytdata.io.IOHandlerYTDataContainerHDF5
~yt.frontends.ytdata.io.IOHandlerYTGridHDF5
~yt.frontends.ytdata.io.IOHandlerYTSpatialPlotHDF5
~yt.frontends.ytdata.io.IOHandlerYTNonspatialhdf5
Loading Data
------------
.. autosummary::
~yt.loaders.load
~yt.loaders.load_uniform_grid
~yt.loaders.load_amr_grids
~yt.loaders.load_particles
~yt.loaders.load_octree
~yt.loaders.load_hexahedral_mesh
~yt.loaders.load_unstructured_mesh
~yt.loaders.load_sample
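
For example, in-memory data can be loaded as a uniform grid; this is a minimal
sketch in which the field values are just random numbers:

.. code-block:: python

    import numpy as np
    import yt

    data = {"density": (np.random.random((64, 64, 64)), "g/cm**3")}
    ds = yt.load_uniform_grid(data, (64, 64, 64), length_unit="Mpc")
    print(ds.domain_width.to("Mpc"))
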
Derived Datatypes
-----------------
Profiles and Histograms
^^^^^^^^^^^^^^^^^^^^^^^
These types are used to sum data up and either return that sum or return an
average. Typically they are more easily used through the ``ProfilePlot`` and
``PhasePlot`` interfaces. We also provide the ``create_profile`` function
to create these objects in a uniform manner.
.. autosummary::
~yt.data_objects.profiles.ProfileND
~yt.data_objects.profiles.Profile1D
~yt.data_objects.profiles.Profile2D
~yt.data_objects.profiles.Profile3D
~yt.data_objects.profiles.ParticleProfile
~yt.data_objects.profiles.create_profile
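
For example, a 1D mass-weighted temperature profile binned by density might be
built as follows (``load_sample`` is used here only as a convenient way to get
a dataset):

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    ad = ds.all_data()
    profile = yt.create_profile(
        ad,
        bin_fields=[("gas", "density")],
        fields=[("gas", "temperature")],
        weight_field=("gas", "mass"),
    )
    print(profile[("gas", "temperature")])
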
.. _clump_finding_ref:
Clump Finding
^^^^^^^^^^^^^
The ``Clump`` object and associated functions can be used for identification
of topologically disconnected structures, i.e., clump finding.
.. autosummary::
~yt.data_objects.level_sets.clump_handling.Clump
~yt.data_objects.level_sets.clump_handling.Clump.add_info_item
~yt.data_objects.level_sets.clump_handling.Clump.add_validator
~yt.data_objects.level_sets.clump_handling.Clump.save_as_dataset
~yt.data_objects.level_sets.clump_handling.find_clumps
~yt.data_objects.level_sets.clump_info_items.add_clump_info
~yt.data_objects.level_sets.clump_validators.add_validator
X-ray Emission Fields
^^^^^^^^^^^^^^^^^^^^^
This can be used to create derived fields of X-ray emission in
different energy bands.
.. autosummary::
~yt.fields.xray_emission_fields.XrayEmissivityIntegrator
~yt.fields.xray_emission_fields.add_xray_emissivity_field
Field Types
-----------
.. autosummary::
~yt.fields.field_info_container.FieldInfoContainer
~yt.fields.derived_field.DerivedField
~yt.fields.derived_field.ValidateDataField
~yt.fields.derived_field.ValidateGridType
~yt.fields.derived_field.ValidateParameter
~yt.fields.derived_field.ValidateProperty
~yt.fields.derived_field.ValidateSpatial
Field Functions
---------------
.. autosummary::
~yt.fields.field_info_container.FieldInfoContainer.add_field
~yt.data_objects.static_output.Dataset.add_field
~yt.data_objects.static_output.Dataset.add_deposited_particle_field
~yt.data_objects.static_output.Dataset.add_mesh_sampling_particle_field
~yt.data_objects.static_output.Dataset.add_gradient_fields
~yt.frontends.stream.data_structures.StreamParticlesDataset.add_sph_fields
Particle Filters
----------------
.. autosummary::
~yt.data_objects.particle_filters.add_particle_filter
~yt.data_objects.particle_filters.particle_filter
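
For example, a filter selecting star particles might look like the sketch below
(``particle_type == 2`` is how Enzo-style datasets flag star particles; adapt
the test to your own data):

.. code-block:: python

    import yt
    from yt.data_objects.particle_filters import add_particle_filter


    def stars(pfilter, data):
        # Keep only particles whose type flag marks them as stars.
        return data[(pfilter.filtered_type, "particle_type")] == 2


    add_particle_filter(
        "stars", function=stars, filtered_type="all", requires=["particle_type"]
    )

    ds = yt.load_sample("IsolatedGalaxy")
    ds.add_particle_filter("stars")
    print(ds.all_data()[("stars", "particle_mass")].size)
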
Image Handling
--------------
For volume renderings and fixed resolution buffers the image object returned is
an ``ImageArray`` object, which has useful functions for image saving and
writing to bitmaps.
.. autosummary::
~yt.data_objects.image_array.ImageArray
Volume Rendering
^^^^^^^^^^^^^^^^
See also :ref:`volume_rendering`.
Here are the primary entry points and the main classes involved in the
Scene infrastructure:
.. autosummary::
~yt.visualization.volume_rendering.volume_rendering.volume_render
~yt.visualization.volume_rendering.volume_rendering.create_scene
~yt.visualization.volume_rendering.off_axis_projection.off_axis_projection
~yt.visualization.volume_rendering.scene.Scene
~yt.visualization.volume_rendering.camera.Camera
~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree
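
For instance, a minimal rendering can be produced with ``create_scene``; the
field, zoom factor, and output filename below are arbitrary choices:

.. code-block:: python

    import yt

    ds = yt.load_sample("IsolatedGalaxy")
    sc = yt.create_scene(ds, field=("gas", "density"))
    sc.camera.zoom(1.5)
    sc.save("rendering.png")
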
The different kinds of sources:
.. autosummary::
~yt.visualization.volume_rendering.render_source.RenderSource
~yt.visualization.volume_rendering.render_source.VolumeSource
~yt.visualization.volume_rendering.render_source.PointSource
~yt.visualization.volume_rendering.render_source.LineSource
~yt.visualization.volume_rendering.render_source.BoxSource
~yt.visualization.volume_rendering.render_source.GridSource
~yt.visualization.volume_rendering.render_source.CoordinateVectorSource
~yt.visualization.volume_rendering.render_source.MeshSource
The different kinds of transfer functions:
.. autosummary::
~yt.visualization.volume_rendering.transfer_functions.TransferFunction
~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction
~yt.visualization.volume_rendering.transfer_functions.ProjectionTransferFunction
~yt.visualization.volume_rendering.transfer_functions.PlanckTransferFunction
~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction
~yt.visualization.volume_rendering.transfer_function_helper.TransferFunctionHelper
The different kinds of lenses:
.. autosummary::
~yt.visualization.volume_rendering.lens.Lens
~yt.visualization.volume_rendering.lens.PlaneParallelLens
~yt.visualization.volume_rendering.lens.PerspectiveLens
~yt.visualization.volume_rendering.lens.StereoPerspectiveLens
~yt.visualization.volume_rendering.lens.FisheyeLens
~yt.visualization.volume_rendering.lens.SphericalLens
~yt.visualization.volume_rendering.lens.StereoSphericalLens
Streamlining
^^^^^^^^^^^^
See also :ref:`streamlines`.
.. autosummary::
~yt.visualization.streamlines.Streamlines
Image Writing
^^^^^^^^^^^^^
These functions are all used for fast writing of images directly to disk,
without calling matplotlib. This can be very useful for high-cadence outputs
where colorbars are unnecessary or for volume rendering.
.. autosummary::
~yt.visualization.image_writer.multi_image_composite
~yt.visualization.image_writer.write_bitmap
~yt.visualization.image_writer.write_projection
~yt.visualization.image_writer.write_image
~yt.visualization.image_writer.map_to_colors
~yt.visualization.image_writer.strip_colormap_data
~yt.visualization.image_writer.splat_points
~yt.visualization.image_writer.scale_image
We also provide a module that is very good for generating EPS figures,
particularly with complicated layouts.
.. autosummary::
~yt.visualization.eps_writer.DualEPS
~yt.visualization.eps_writer.single_plot
~yt.visualization.eps_writer.multiplot
~yt.visualization.eps_writer.multiplot_yt
~yt.visualization.eps_writer.return_colormap
.. _derived-quantities-api:
Derived Quantities
------------------
See :ref:`derived-quantities`.
.. autosummary::
~yt.data_objects.derived_quantities.DerivedQuantity
~yt.data_objects.derived_quantities.DerivedQuantityCollection
~yt.data_objects.derived_quantities.WeightedAverageQuantity
~yt.data_objects.derived_quantities.AngularMomentumVector
~yt.data_objects.derived_quantities.BulkVelocity
~yt.data_objects.derived_quantities.CenterOfMass
~yt.data_objects.derived_quantities.Extrema
~yt.data_objects.derived_quantities.MaxLocation
~yt.data_objects.derived_quantities.MinLocation
~yt.data_objects.derived_quantities.SpinParameter
~yt.data_objects.derived_quantities.TotalMass
~yt.data_objects.derived_quantities.TotalQuantity
~yt.data_objects.derived_quantities.WeightedAverageQuantity
.. _callback-api:
Callback List
-------------
See also :ref:`callbacks`.
.. autosummary::
~yt.visualization.plot_window.PWViewerMPL.clear_annotations
~yt.visualization.plot_modifications.ArrowCallback
~yt.visualization.plot_modifications.CellEdgesCallback
~yt.visualization.plot_modifications.ClumpContourCallback
~yt.visualization.plot_modifications.ContourCallback
~yt.visualization.plot_modifications.CuttingQuiverCallback
~yt.visualization.plot_modifications.GridBoundaryCallback
~yt.visualization.plot_modifications.ImageLineCallback
~yt.visualization.plot_modifications.LinePlotCallback
~yt.visualization.plot_modifications.MagFieldCallback
~yt.visualization.plot_modifications.MarkerAnnotateCallback
~yt.visualization.plot_modifications.ParticleCallback
~yt.visualization.plot_modifications.PointAnnotateCallback
~yt.visualization.plot_modifications.QuiverCallback
~yt.visualization.plot_modifications.RayCallback
~yt.visualization.plot_modifications.ScaleCallback
~yt.visualization.plot_modifications.SphereCallback
~yt.visualization.plot_modifications.StreamlineCallback
~yt.visualization.plot_modifications.TextLabelCallback
~yt.visualization.plot_modifications.TimestampCallback
~yt.visualization.plot_modifications.TitleCallback
~yt.visualization.plot_modifications.TriangleFacetsCallback
~yt.visualization.plot_modifications.VelocityCallback
Colormap Functions
------------------
See also :ref:`colormaps`.
.. autosummary::
~yt.visualization.color_maps.add_colormap
~yt.visualization.color_maps.make_colormap
~yt.visualization.color_maps.show_colormaps
Function List
-------------
.. autosummary::
~yt.frontends.ytdata.utilities.save_as_dataset
~yt.data_objects.data_containers.YTDataContainer.save_as_dataset
~yt.data_objects.static_output.Dataset.all_data
~yt.data_objects.static_output.Dataset.box
~yt.funcs.enable_plugins
~yt.funcs.get_pbar
~yt.funcs.humanize_time
~yt.funcs.insert_ipython
~yt.funcs.is_root
~yt.funcs.is_sequence
~yt.funcs.iter_fields
~yt.funcs.just_one
~yt.funcs.only_on_root
~yt.funcs.paste_traceback
~yt.funcs.pdb_run
~yt.funcs.print_tb
~yt.funcs.rootonly
~yt.funcs.time_execution
~yt.data_objects.level_sets.contour_finder.identify_contours
~yt.utilities.parallel_tools.parallel_analysis_interface.enable_parallelism
~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_blocking_call
~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_objects
~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_passthrough
~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_root_only
~yt.utilities.parallel_tools.parallel_analysis_interface.parallel_simple_proxy
~yt.data_objects.data_containers.YTDataContainer.get_field_parameter
~yt.data_objects.data_containers.YTDataContainer.set_field_parameter
Math Utilities
--------------
.. autosummary::
~yt.utilities.math_utils.periodic_position
~yt.utilities.math_utils.periodic_dist
~yt.utilities.math_utils.euclidean_dist
~yt.utilities.math_utils.rotate_vector_3D
~yt.utilities.math_utils.modify_reference_frame
~yt.utilities.math_utils.compute_rotational_velocity
~yt.utilities.math_utils.compute_parallel_velocity
~yt.utilities.math_utils.compute_radial_velocity
~yt.utilities.math_utils.compute_cylindrical_radius
~yt.utilities.math_utils.ortho_find
~yt.utilities.math_utils.quartiles
~yt.utilities.math_utils.get_rotation_matrix
~yt.utilities.math_utils.get_sph_r
~yt.utilities.math_utils.resize_vector
~yt.utilities.math_utils.get_sph_theta
~yt.utilities.math_utils.get_sph_phi
~yt.utilities.math_utils.get_cyl_r
~yt.utilities.math_utils.get_cyl_z
~yt.utilities.math_utils.get_cyl_theta
~yt.utilities.math_utils.get_cyl_r_component
~yt.utilities.math_utils.get_cyl_theta_component
~yt.utilities.math_utils.get_cyl_z_component
~yt.utilities.math_utils.get_sph_r_component
~yt.utilities.math_utils.get_sph_phi_component
~yt.utilities.math_utils.get_sph_theta_component
Miscellaneous Types
-------------------
.. autosummary::
~yt.config.YTConfig
~yt.utilities.parameter_file_storage.ParameterFileStore
~yt.utilities.parallel_tools.parallel_analysis_interface.ObjectIterator
~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelAnalysisInterface
~yt.utilities.parallel_tools.parallel_analysis_interface.ParallelObjectIterator
.. _cosmology-calculator-ref:
Cosmology Calculator
--------------------
.. autosummary::
~yt.utilities.cosmology.Cosmology
~yt.utilities.cosmology.Cosmology.hubble_distance
~yt.utilities.cosmology.Cosmology.comoving_radial_distance
~yt.utilities.cosmology.Cosmology.comoving_transverse_distance
~yt.utilities.cosmology.Cosmology.comoving_volume
~yt.utilities.cosmology.Cosmology.angular_diameter_distance
~yt.utilities.cosmology.Cosmology.angular_scale
~yt.utilities.cosmology.Cosmology.luminosity_distance
~yt.utilities.cosmology.Cosmology.lookback_time
~yt.utilities.cosmology.Cosmology.critical_density
~yt.utilities.cosmology.Cosmology.hubble_parameter
~yt.utilities.cosmology.Cosmology.expansion_factor
~yt.utilities.cosmology.Cosmology.z_from_t
~yt.utilities.cosmology.Cosmology.t_from_z
~yt.utilities.cosmology.Cosmology.get_dark_factor
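
For example (the cosmological parameters below are arbitrary round numbers):

.. code-block:: python

    from yt.utilities.cosmology import Cosmology

    co = Cosmology(hubble_constant=0.7, omega_matter=0.3, omega_lambda=0.7)
    print(co.hubble_distance().to("Mpc"))
    print(co.comoving_radial_distance(0.0, 1.0).to("Mpc"))
    print(co.t_from_z(0.0).to("Gyr"))
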
Testing Infrastructure
----------------------
The core set of testing functions is re-exported from NumPy;
these re-exports are deprecated (prefer using
`numpy.testing <https://numpy.org/doc/stable/reference/routines.testing.html>`_
directly).
.. autosummary::
~yt.testing.assert_array_equal
~yt.testing.assert_almost_equal
~yt.testing.assert_approx_equal
~yt.testing.assert_array_almost_equal
~yt.testing.assert_equal
~yt.testing.assert_array_less
~yt.testing.assert_string_equal
~yt.testing.assert_array_almost_equal_nulp
~yt.testing.assert_allclose
~yt.testing.assert_raises
`unyt.testing <https://unyt.readthedocs.io/en/stable/modules/unyt.testing.html>`_
also provides some specialized functions for comparing arrays in a units-aware
fashion.
Finally, yt provides the following functions:
.. autosummary::
~yt.testing.assert_rel_equal
~yt.testing.amrspace
~yt.testing.expand_keywords
~yt.testing.fake_random_ds
~yt.testing.fake_amr_ds
~yt.testing.fake_particle_ds
~yt.testing.fake_tetrahedral_ds
~yt.testing.fake_hexahedral_ds
~yt.testing.small_fake_hexahedral_ds
~yt.testing.fake_stretched_ds
~yt.testing.fake_vr_orientation_test_ds
~yt.testing.fake_sph_orientation_ds
~yt.testing.fake_sph_grid_ds
~yt.testing.fake_octree_ds
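
For example, a small random dataset can stand in for real data in a unit test:

.. code-block:: python

    from yt.testing import fake_random_ds

    ds = fake_random_ds(16, fields=("density",), units=("g/cm**3",))
    ad = ds.all_data()
    assert ad[("gas", "density")].size == 16**3
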
These are for the pytest infrastructure:
.. autosummary::
~conftest.hashing
~yt.utilities.answer_testing.answer_tests.grid_hierarchy
~yt.utilities.answer_testing.answer_tests.parentage_relationships
~yt.utilities.answer_testing.answer_tests.grid_values
~yt.utilities.answer_testing.answer_tests.projection_values
~yt.utilities.answer_testing.answer_tests.field_values
~yt.utilities.answer_testing.answer_tests.pixelized_projection_values
~yt.utilities.answer_testing.answer_tests.small_patch_amr
~yt.utilities.answer_testing.answer_tests.big_patch_amr
~yt.utilities.answer_testing.answer_tests.generic_array
~yt.utilities.answer_testing.answer_tests.sph_answer
~yt.utilities.answer_testing.answer_tests.get_field_size_and_mean
~yt.utilities.answer_testing.answer_tests.plot_window_attribute
~yt.utilities.answer_testing.answer_tests.phase_plot_attribute
~yt.utilities.answer_testing.answer_tests.generic_image
~yt.utilities.answer_testing.answer_tests.axial_pixelization
~yt.utilities.answer_testing.answer_tests.extract_connected_sets
~yt.utilities.answer_testing.answer_tests.VR_image_comparison
# YouTube to MP3
<p align="right">
<!-- CI Status -->
<a href="https://travis-ci.org/tterb/yt2mp3"><img src="https://travis-ci.org/tterb/yt2mp3.svg?branch=master" alt="Build Status"/></a>
<!-- Docs Status -->
<a href='https://yt2mp3.readthedocs.io/en/latest/?badge=latest'><img src='https://readthedocs.org/projects/yt2mp3/badge/?version=latest' alt='Documentation Status'/></a>
<!-- CodeCov -->
<a href="https://codecov.io/gh/tterb/yt2mp3"><img src="https://codecov.io/gh/tterb/yt2mp3/branch/master/graph/badge.svg"/></a>
<!--Project version-->
<a href="https://pypi.python.org/pypi/yt2mp3/"><img src="https://badge.fury.io/py/yt2mp3.svg" alt="PyPi Version"/></a>
<!-- Python version -->
<a href="https://pypi.python.org/pypi/yt2mp3/"><img src="https://img.shields.io/pypi/pyversions/yt2mp3.svg" alt="PyPI Python Versions"/></a>
<!--License-->
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License"/></a>
</p>
<br>
<p align="center">
<img src="https://cdn.rawgit.com/tterb/yt2mp3/d96b8c70/docs/images/terminal.svg" width="700"/>
</p>
## Description
This program simplifies the process of searching, downloading and converting Youtube videos to MP3 files from the command-line. All you need is the video URL or the name of the artist/track you're looking for.
If a URL is not provided, the program will attempt to retrieve data for a song matching the provided input by querying the iTunes API and use that data to find a corresponding YouTube video. The video will then be downloaded and converted, and the gathered data will be used to populate the metadata of the MP3.
Once finished, the resulting MP3 file will be saved to your *Downloads* directory, with the following file-structure `Music/{artist}/{track}.mp3`.
***Note:*** If a URL is provided and no match is found for the song data, the program will prompt the user for the track/artist and the YouTube thumbnail will be used as the album artwork.
## Getting Started
### Prerequisites
The program only requires that you have Python 3.4+ and [ffmpeg](https://www.ffmpeg.org/) or [libav](https://www.libav.org/) installed. For more information, check out the [additional setup](https://yt2mp3.readthedocs.io/en/latest/additional_setup.html).
### Install
You can install the program with the following command:
```sh
$ pip install yt2mp3
```
## Usage
The program can be executed as follows:
```sh
$ yt2mp3 [-options]
```
#### Options:
| Arguments | |
|-------------------|-------------------------------------------------------|
| `-t, --track` | Specify the track name query |
| `-a, --artist` | Specify the artist name query |
| `-c, --collection`| Specify the album name query                          |
| `-u, --url` | Specify a Youtube URL or ID |
| `-p, --playlist` | Specify a Youtube playlist URL or ID |
| `-o, --overwrite` | Overwrite the file if one exists in output directory |
| `-r, --resolution`| Specify the resolution for the cover-art |
| `-q, --quiet` | Suppress program command-line output |
| `-v, --verbose` | Display a command-line progress bar |
| `--version` | Show the version number and exit |
| `-h, --help` | Display information on usage and functionality |
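
For example, to search for and download a track by name (the artist and track shown here are just placeholders):

```sh
$ yt2mp3 -a "Artist Name" -t "Track Title"
```
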
## Documentation
Further documentation is available on [Read The Docs](https://yt2mp3.readthedocs.io/en/latest/)
## Contributing
If you'd like to contribute to the project, feel free to suggest a [feature request](https://github.com/tterb/yt2mp3/issues/new?template=feature_request.md) and/or submit a [pull request](https://github.com/tterb/yt2mp3/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc).
# yt2mp4
Download YouTube videos and playlists as MP4 files (and other formats)
## Get geckodriver
### Linux (Debian)
```sh
sudo apt install wget ffmpeg firefox-esr -y
wget https://github.com/mozilla/geckodriver/releases/download/v0.30.0/geckodriver-v0.30.0-linux64.tar.gz
sudo tar xvzf geckodriver-v0.30.0-linux64.tar.gz -C /usr/bin/
chmod +x /usr/bin/geckodriver
rm geckodriver-v0.30.0-linux64.tar.gz
```
### Other
Figure it out yourself
## Installation
### From PyPI
```sh
pip3 install yt2mp4
```
### From GitHub
```sh
pip3 install git+https://github.com/donno2048/yt2mp4
```
## Usage
### In Python
```py
from yt2mp4 import download
download("dQw4w9WgXcQ", outname='output.mp4') # download the video from https://www.youtube.com/watch?v=dQw4w9WgXcQ and name it output.mp4
# will also work:
# download("dQw4w9WgXcQ", outname='output.mp4', binary_path=path) # use a different binary path
# download("youtube.com/watch?v=dQw4w9WgXcQ", output="output.mov")
# download("youtu.be/dQw4w9WgXcQ")
# download("www.youtube.com/watch?v=dQw4w9WgXcQ", output="output.mov")
# download("music.youtube.com/watch?v=dQw4w9WgXcQ", output="output.mov")
# download("https://www.youtube.com/watch?v=dQw4w9WgXcQ", output="output.mov")
# download("https://music.youtube.com/watch?v=dQw4w9WgXcQ", output="output.mov")
# download("https://youtu.be/dQw4w9WgXcQ")
```
### In cmd
```sh
# each of those will convert to another format
yt2mp4 # or python3 -m yt2mp4
yt2webm
yt2mkv
yt2flv
yt2wmv
yt2avi
yt2mov
yt2m4v
yt2mp3
```
### Download playlist
For this you will have to configure a YouTube API key
#### Get API key
1. Go to the [Developer console dashboard](https://console.cloud.google.com/home/dashboard) and click on _CREATE PROJECT_, you can name the project and then press _CREATE_
1. Now go to the [Credentials tab](https://console.cloud.google.com/apis/credentials) and click on _CREATE CREDENTIALS_ and choose _API key_, copy the API key you see and save it somewhere safe, then you can click on _CLOSE_
1. Now go to the [YouTube API tab](https://console.cloud.google.com/apis/api/youtube.googleapis.com) and click on _ENABLE_
#### In Python
```py
from yt2mp4 import download_playlist
download_playlist(id, api_key)
'''
- the first argument is the id of the playlist, you can pass either of the following forms
- https://www.youtube.com/watch?v=***********&list=PLAYLIST_ID
- https://www.youtube.com/playlist?list=PLAYLIST_ID
- PLAYLIST_ID
- the second argument is the API key
- the third is the extension, the default value is 'mp4'
- the fourth one is the fps, the default value is 60
- the last one is the binary path to the geckodriver
'''
```
#### In cmd
```sh
# each of those will convert to other formats
ytp2mp4
ytp2webm
ytp2mkv
ytp2flv
ytp2wmv
ytp2avi
ytp2mov
ytp2m4v
ytp2mp3
```
## Supported formats
- mp4
- webm
- mkv
- flv
- wmv
- avi
- mov
- m4v
- mp3 (auto format as audio)
Project Introduction
==========================

This extension package reads interface (API) request parameters from a YAML
configuration file, sends GET and POST requests to obtain the interface
responses, and asserts on the returned values.
Installation and Usage
======================

| The installation command is as follows:

::

    pip install ytApiTest

| Supported features:

- GET requests
- POST requests
- Using JSONPATH syntax inside the .yaml file to pull values from interface responses into request parameters or assertion data
- Asserting that part of the response, or a specific key-value pair, exists in the interface response
- Asserting that every field of the interface response equals the assertion value
- Sending the corresponding interface error information to a DingTalk group when an assertion fails
- Test case setup and teardown operations
- Sending a test case's requests as a specified user

| Usage:

- To use the extension package, a .yaml file managing test case data must be created in the project root directory
- The .yaml file must follow the format below and contain the following keywords
.. code:: python

    Keywords supported in YAML test case files:

    DING_TALK_URL: DingTalk group robot URL; this item must be configured
    OBJECT_HOST: project host; several different hosts can be configured, e.g.:
                     OBJECT_HOST:
                         host_key: host
    interface_name (your interface name, choose it yourself):
        url (this keyword is required): the interface path; the value may be empty
        assert_key (assertion key, choose it yourself):
            des (this keyword is required): test case description; may be empty; shown when an assertion fails
            req_data (this keyword is required): interface request parameters; may be empty
            ast_data (this keyword is required): interface assertion values; may be empty
            json_expr (this keyword is required): lookup path into the response; may be empty
            setup: test case setup; a list of the corresponding interface request parameters; supports interface_name, assert_key, host_key
            teardown: test case teardown; a list of the corresponding interface request parameters; supports interface_name, assert_key, host_key

- Example of using JSONPATH syntax inside the .yaml file
.. code:: python
interface_name:
url:
assert_key:
des:
req_data:
key: $.interface_name.data.XXX
ast_data:
key: $.interface_name.data.XXX
json_expr: $.interface_name.data.XXX
setup: [{interface_name:interface_name,assert_key:assert_key,host_key:host:key},{...}]
teardown: [{interface_name:interface_name,assert_key:assert_key,host_key:host:key},{...}]
Method Descriptions and Usage Examples
======================================

.. code:: python

    # POST request
    import ytApiTest

    # Reads the values for the given interface from the .yaml file, sends a POST
    # request to the backend, and returns a response object.
    response = ytApiTest.post(interface_name, assert_key)
    # Parameters: interface_name (your interface name in the .yaml file),
    # assert_key (the assert_key value for that interface in the .yaml file)

.. code:: python

    # GET request
    import ytApiTest

    # Reads the values for the given interface from the .yaml file, sends a GET
    # request to the backend, and returns a response object.
    response = ytApiTest.get(interface_name, assert_key)
    # Parameters: interface_name (your interface name in the .yaml file),
    # assert_key (the assert_key value for that interface in the .yaml file)

.. code:: python

    # Get interface assertion data
    import ytApiTest

    # Reads the assertion data (ast_data) for the given interface from the .yaml file.
    response = ytApiTest.get_interface_case_assert_data(interface_name, assert_key)
    # Parameters: interface_name (your interface name in the .yaml file),
    # assert_key (the assert_key value for that interface in the .yaml file)

.. code:: python

    # Get interface request data
    import ytApiTest

    # Reads the request data (req_data) for the given interface from the .yaml file.
    response = ytApiTest.get_interface_request_data(interface_name, assert_key)
    # Parameters: interface_name (your interface name in the .yaml file),
    # assert_key (the assert_key value for that interface in the .yaml file)

.. code:: python

    # Get the full interface URL
    import ytApiTest

    # Builds the full URL for the given interface from the .yaml file.
    response = ytApiTest.get_interface_url(interface_name, assert_key)
    # Parameters: interface_name (your interface name in the .yaml file),
    # assert_key (the assert_key value for that interface in the .yaml file)

.. code:: python

    # Perform an equality assertion
    import ytApiTest

    # Asserts that the response body matches the assertion value.
    ytApiTest.assert_body_eq_assert_value(response, assert_value, json_expr)
    # Parameters: response (the response object returned by the interface),
    # assert_value (the assertion value from the .yaml file),
    # json_expr (the json_expr value from the .yaml file)

.. code:: python

    # Assert that every URL found in the response returns status 200
    import ytApiTest

    ytApiTest.assert_response_url_status(response)
    # Parameters: response (the response object returned by the interface)

.. code:: python

    # Modify request parameters
    import ytApiTest

    ytApiTest.update_case_req_data(interface_key=None, assert_key=None, new_request_data=None)
    # Parameters: interface_key = interface name, assert_key = assertion key,
    # new_request_data = request dictionary
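
A minimal illustrative test is shown below; ``login`` and ``success`` are
hypothetical ``interface_name`` / ``assert_key`` entries that would have to
exist in your .yaml file, and the ``json_expr`` value is likewise only an
example.

.. code:: python

    import ytApiTest

    def test_login_success():
        # "login" / "success" are placeholder interface_name / assert_key values.
        response = ytApiTest.post("login", "success")
        expected = ytApiTest.get_interface_case_assert_data("login", "success")
        ytApiTest.assert_body_eq_assert_value(response, expected, "$.data")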
# YouTube Automation
Using YouTube's API, this repository will work toward automating tedious tasks for YouTube channels. **The main objective is to update YouTube Descriptions with the number of likes, dislikes, likes/(likes+dislikes), timestamp** up to the YouTube API limit. Check out my [YouTube Like, Dislike Counter Playlist](https://youtube.com/playlist?list=PLHT3ZrWZ1pcSFjYuMPwa0m0pjB4fUP5c_)!
# Requirements:
- Python >= 3.7
- [PyPi distribution](https://pypi.org/project/ytad/)
````
pip install ytad
````
# Setup Requirements
- Need YouTube account with at least one public video
- Register your account with [Google console developers](https://console.developers.google.com)
- Need to enable API (YouTube Data API V3)
- **You can choose which OAuth client type to use; in production, I used Web App**
- Create OAuth Client ID > Web App > name the Web App > Create > download the OAuth client (JSON)
- This will be your web app secret file. **(Rename downloaded OAuth Client to client_secret_web_app.json)**
- Setup up OAuth Consent Screen
- Make sure to add your testing email as a test user to access your YouTube account (need to manually do this)
- Ensure to verify your application! Run the following:
````
# creates a token.pickle file for authentication
from ytad.authentication import Authenticate
auth = Authenticate()
youtube = auth.check_token_web_app_data_api()
````
# Command Line Interface (CLI) capability:
- In the base environment, you need the following files to run **U**pdate **V**ideo **D**escription (**uvd**) successfully:
- client_secret_web_app.json
- token.pickle
````
uvd -h # input your arguments for ease of use.
# example (Using my YouTube Channel ID as an example...)
uvd --id=UCoCToADdJRd3u-ACz4e_iCw
````
# [Deprecated] [Command Line Interface (CLI) explained](https://youtu.be/yrzP762gV1I)
````
"""This has been deprecated; see above for more integrated CLI capabilities"""
python update_notifications.py --help
python .\update_notifications.py --update_df=Yes --verify_each_update=Yes
````
# Cloud
I used AWS services to run the cron job in production. This is a low-cost solution, utilizing Lambda functions and minimal time on EC2 instances.
- The cloud setup and infrastructure can be found [here](https://youtu.be/Q3mIrtMw_3E)
# Goals of Library:
- <strike> Scrape personal Likes/Dislikes from backend (YouTube Studio) </strike>
- <strike> Automating the update(s) of descriptions in videos </strike>
- <strike> Executable with parameters and pip installable </strike>
- <strike> cron job </strike>
- <strike> Minimization of YouTube Requests </strike>
- <strike> Keygen for API calls </strike>
- <strike> argparse: CLI enabled. </strike>
- <strike> setup.py (on pypi) for installation </strike>
# Remaining TODOs:
- [x] Verify setup with @SpencerPao
- [x] Overhaul CLI (maybe this should be another issue and version though)
- [x] Test API and authentication
- [x] Create PyPI and TestPyPI accounts
- [x] Build package wheel
- [x] Test package with `twine`
- [x] Upload package with `twine`
- [x] Try installing with `pip`
# pylint:disable=invalid-name,import-outside-toplevel,missing-function-docstring
# pylint:disable=missing-class-docstring,too-many-branches,too-many-statements
# pylint:disable=raise-missing-from,too-many-lines,too-many-locals,import-error
# pylint:disable=too-few-public-methods,redefined-outer-name,consider-using-with
# pylint:disable=attribute-defined-outside-init,too-many-arguments
import configparser
import errno
import json
import os
import re
import subprocess
import sys
from typing import Callable, Dict
import functools
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_root():
"""Get the project root directory.
We require that all commands are run from the project root, i.e. the
directory that contains setup.py, setup.cfg, and versioneer.py .
"""
root = os.path.realpath(os.path.abspath(os.getcwd()))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
# allow 'python path/to/setup.py COMMAND'
root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0])))
setup_py = os.path.join(root, "setup.py")
versioneer_py = os.path.join(root, "versioneer.py")
if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)):
err = ("Versioneer was unable to run the project root directory. "
"Versioneer requires setup.py to be executed from "
"its immediate directory (like 'python setup.py COMMAND'), "
"or in a way that lets it use sys.argv[0] to find the root "
"(like 'python path/to/setup.py COMMAND').")
raise VersioneerBadRootError(err)
try:
# Certain runtime workflows (setup.py install/develop in a setuptools
# tree) execute all dependencies in a single python process, so
# "versioneer" may be imported multiple times, and python's shared
# module-import table will cache the first one. So we can't use
# os.path.dirname(__file__), as that will find whichever
# versioneer.py was first imported, even in later projects.
my_path = os.path.realpath(os.path.abspath(__file__))
me_dir = os.path.normcase(os.path.splitext(my_path)[0])
vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0])
if me_dir != vsr_dir:
print("Warning: build in %s is using versioneer.py from %s"
% (os.path.dirname(my_path), versioneer_py))
except NameError:
pass
return root
def get_config_from_root(root):
"""Read the project setup.cfg file to determine Versioneer config."""
# This might raise OSError (if setup.cfg is missing), or
# configparser.NoSectionError (if it lacks a [versioneer] section), or
# configparser.NoOptionError (if it lacks "VCS="). See the docstring at
# the top of versioneer.py for instructions on writing your setup.cfg .
setup_cfg = os.path.join(root, "setup.cfg")
parser = configparser.ConfigParser()
with open(setup_cfg, "r") as cfg_file:
parser.read_file(cfg_file)
VCS = parser.get("versioneer", "VCS") # mandatory
# Dict-like interface for non-mandatory entries
section = parser["versioneer"]
cfg = VersioneerConfig()
cfg.VCS = VCS
cfg.style = section.get("style", "")
cfg.versionfile_source = section.get("versionfile_source")
cfg.versionfile_build = section.get("versionfile_build")
cfg.tag_prefix = section.get("tag_prefix")
if cfg.tag_prefix in ("''", '""'):
cfg.tag_prefix = ""
cfg.parentdir_prefix = section.get("parentdir_prefix")
cfg.verbose = section.get("verbose")
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
# these dictionaries contain VCS-specific tools
LONG_VERSION_PY: Dict[str, str] = {}
HANDLERS: Dict[str, Dict[str, Callable]] = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
HANDLERS.setdefault(vcs, {})[method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
process = None
popen_kwargs = {}
if sys.platform == "win32":
# This hides the console window if pythonw.exe is used
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
popen_kwargs["startupinfo"] = startupinfo
for command in commands:
try:
dispcmd = str([command] + args)
# remember shell=False, so use git.cmd on windows, not just git
process = subprocess.Popen([command] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None), **popen_kwargs)
break
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None, None
stdout = process.communicate()[0].strip().decode()
if process.returncode != 0:
if verbose:
print("unable to run %s (error)" % dispcmd)
print("stdout was %s" % stdout)
return None, process.returncode
return stdout, process.returncode
LONG_VERSION_PY['git'] = r'''
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by github's download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.22 (https://github.com/python-versioneer/python-versioneer)
"""Git implementation of _version.py."""
import errno
import os
import re
import subprocess
import sys
from typing import Callable, Dict
import functools
def get_keywords():
"""Get the keywords needed to look up the version information."""
# these strings will be replaced by git during git-archive.
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
class VersioneerConfig:
"""Container for Versioneer configuration parameters."""
def get_config():
"""Create, populate and return the VersioneerConfig() object."""
# these strings are filled in when 'setup.py versioneer' creates
# _version.py
cfg = VersioneerConfig()
cfg.VCS = "git"
cfg.style = "%(STYLE)s"
cfg.tag_prefix = "%(TAG_PREFIX)s"
cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s"
cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s"
cfg.verbose = False
return cfg
class NotThisMethod(Exception):
"""Exception raised if a method is not valid for the current scenario."""
LONG_VERSION_PY: Dict[str, str] = {}
HANDLERS: Dict[str, Dict[str, Callable]] = {}
def register_vcs_handler(vcs, method): # decorator
"""Create decorator to mark a method as the handler of a VCS."""
def decorate(f):
"""Store f in HANDLERS[vcs][method]."""
if vcs not in HANDLERS:
HANDLERS[vcs] = {}
HANDLERS[vcs][method] = f
return f
return decorate
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False,
env=None):
"""Call the given command(s)."""
assert isinstance(commands, list)
process = None
popen_kwargs = {}
if sys.platform == "win32":
# This hides the console window if pythonw.exe is used
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
popen_kwargs["startupinfo"] = startupinfo
for command in commands:
try:
dispcmd = str([command] + args)
# remember shell=False, so use git.cmd on windows, not just git
process = subprocess.Popen([command] + args, cwd=cwd, env=env,
stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None), **popen_kwargs)
break
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %%s" %% dispcmd)
print(e)
return None, None
else:
if verbose:
print("unable to find command, tried %%s" %% (commands,))
return None, None
stdout = process.communicate()[0].strip().decode()
if process.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% dispcmd)
print("stdout was %%s" %% stdout)
return None, process.returncode
return stdout, process.returncode
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for _ in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %%s but none started with prefix %%s" %%
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
with open(versionfile_abs, "r") as fobj:
for line in fobj:
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
except OSError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if "refnames" not in keywords:
raise NotThisMethod("Short version file found")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = {r.strip() for r in refnames.strip("()").split(",")}
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = {r[len(TAG):] for r in refs if r.startswith(TAG)}
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %%d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = {r for r in refs if re.search(r'\d', r)}
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
if verbose:
print("likely tags: %%s" %% ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
# Filter out refs that exactly match prefix or that don't start
# with a number once the prefix is stripped (mostly a concern
# when prefix is '')
if not re.match(r'\d', r):
continue
if verbose:
print("picking %%s" %% r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, runner=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
# GIT_DIR can interfere with correct operation of Versioneer.
# It may be intended to be passed to the Versioneer-versioned project,
# but that should not change where we get our version from.
env = os.environ.copy()
env.pop("GIT_DIR", None)
runner = functools.partial(runner, env=env)
_, rc = runner(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %%s not under git control" %% root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
MATCH_ARGS = ["--match", "%%s*" %% tag_prefix] if tag_prefix else []
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = runner(GITS, ["describe", "--tags", "--dirty",
"--always", "--long", *MATCH_ARGS],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = runner(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
branch_name, rc = runner(GITS, ["rev-parse", "--abbrev-ref", "HEAD"],
cwd=root)
# --abbrev-ref was added in git-1.6.3
if rc != 0 or branch_name is None:
raise NotThisMethod("'git rev-parse --abbrev-ref' returned error")
branch_name = branch_name.strip()
if branch_name == "HEAD":
# If we aren't exactly on a branch, pick a branch which represents
# the current commit. If all else fails, we are on a branchless
# commit.
branches, rc = runner(GITS, ["branch", "--contains"], cwd=root)
# --contains was added in git-1.5.4
if rc != 0 or branches is None:
raise NotThisMethod("'git branch --contains' returned error")
branches = branches.split("\n")
# Remove the first line if we're running detached
if "(" in branches[0]:
branches.pop(0)
# Strip off the leading "* " from the list of branches.
branches = [branch[2:] for branch in branches]
if "master" in branches:
branch_name = "master"
elif not branches:
branch_name = None
else:
# Pick the first branch that is returned. Good or bad.
branch_name = branches[0]
pieces["branch"] = branch_name
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%%s'"
%% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%%s' doesn't start with prefix '%%s'"
print(fmt %% (full_tag, tag_prefix))
pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'"
%% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = runner(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = runner(GITS, ["show", "-s", "--format=%%ci", "HEAD"], cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_branch(pieces):
"""TAG[[.dev0]+DISTANCE.gHEX[.dirty]] .
The ".dev0" means not master branch. Note that .dev0 sorts backwards
(a feature branch will appear "older" than the master branch).
Exceptions:
1: no tags. 0[.dev0]+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0"
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += "+untagged.%%d.g%%s" %% (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def pep440_split_post(ver):
"""Split pep440 version string at the post-release segment.
Returns the release segments before the post-release and the
post-release version number (or None if no post-release segment is present).
"""
vc = str.split(ver, ".post")
return vc[0], int(vc[1] or 0) if len(vc) == 2 else None
def render_pep440_pre(pieces):
"""TAG[.postN.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
if pieces["distance"]:
# update the post release segment
tag_version, post_version = pep440_split_post(pieces["closest-tag"])
rendered = tag_version
if post_version is not None:
rendered += ".post%%d.dev%%d" %% (post_version+1, pieces["distance"])
else:
rendered += ".post0.dev%%d" %% (pieces["distance"])
else:
# no commits, use the tag as the version
rendered = pieces["closest-tag"]
else:
# exception #1
rendered = "0.post0.dev%%d" %% pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
return rendered
def render_pep440_post_branch(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] .
The ".dev0" means not master branch.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%%s" %% pieces["short"]
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += "+g%%s" %% pieces["short"]
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%%d" %% pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-branch":
rendered = render_pep440_branch(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-post-branch":
rendered = render_pep440_post_branch(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%%s'" %% style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
def get_versions():
"""Get version information or return default if unable to do so."""
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
cfg = get_config()
verbose = cfg.verbose
try:
return git_versions_from_keywords(get_keywords(), cfg.tag_prefix,
verbose)
except NotThisMethod:
pass
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for _ in cfg.versionfile_source.split('/'):
root = os.path.dirname(root)
except NameError:
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to find root of source tree",
"date": None}
try:
pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)
return render(pieces, cfg.style)
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
return versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
except NotThisMethod:
pass
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None,
"error": "unable to compute version", "date": None}
'''
@register_vcs_handler("git", "get_keywords")
def git_get_keywords(versionfile_abs):
"""Extract version information from the given file."""
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
with open(versionfile_abs, "r") as fobj:
for line in fobj:
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
if line.strip().startswith("git_date ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["date"] = mo.group(1)
except OSError:
pass
return keywords
@register_vcs_handler("git", "keywords")
def git_versions_from_keywords(keywords, tag_prefix, verbose):
"""Get version information from git keywords."""
if "refnames" not in keywords:
raise NotThisMethod("Short version file found")
date = keywords.get("date")
if date is not None:
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
# git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant
# datestamp. However we prefer "%ci" (which expands to an "ISO-8601
# -like" string, which we must then edit to make compliant), because
# it's been around since git-1.5.3, and it's too difficult to
# discover which version we're using, or to work around using an
# older one.
date = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
raise NotThisMethod("unexpanded keywords, not a git-archive tarball")
refs = {r.strip() for r in refnames.strip("()").split(",")}
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = {r[len(TAG):] for r in refs if r.startswith(TAG)}
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = {r for r in refs if re.search(r'\d', r)}
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
# Filter out refs that exactly match prefix or that don't start
# with a number once the prefix is stripped (mostly a concern
# when prefix is '')
if not re.match(r'\d', r):
continue
if verbose:
print("picking %s" % r)
return {"version": r,
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": None,
"date": date}
# no suitable tags, so version is "0+unknown", but full hex is still there
if verbose:
print("no suitable tags, using unknown + full revision id")
return {"version": "0+unknown",
"full-revisionid": keywords["full"].strip(),
"dirty": False, "error": "no suitable tags", "date": None}
@register_vcs_handler("git", "pieces_from_vcs")
def git_pieces_from_vcs(tag_prefix, root, verbose, runner=run_command):
"""Get version from 'git describe' in the root of the source tree.
This only gets called if the git-archive 'subst' keywords were *not*
expanded, and _version.py hasn't already been rewritten with a short
version string, meaning we're inside a checked out source tree.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
# GIT_DIR can interfere with correct operation of Versioneer.
# It may be intended to be passed to the Versioneer-versioned project,
# but that should not change where we get our version from.
env = os.environ.copy()
env.pop("GIT_DIR", None)
runner = functools.partial(runner, env=env)
_, rc = runner(GITS, ["rev-parse", "--git-dir"], cwd=root,
hide_stderr=True)
if rc != 0:
if verbose:
print("Directory %s not under git control" % root)
raise NotThisMethod("'git rev-parse --git-dir' returned error")
MATCH_ARGS = ["--match", "%s*" % tag_prefix] if tag_prefix else []
# if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty]
# if there isn't one, this yields HEX[-dirty] (no NUM)
describe_out, rc = runner(GITS, ["describe", "--tags", "--dirty",
"--always", "--long", *MATCH_ARGS],
cwd=root)
# --long was added in git-1.5.5
if describe_out is None:
raise NotThisMethod("'git describe' failed")
describe_out = describe_out.strip()
full_out, rc = runner(GITS, ["rev-parse", "HEAD"], cwd=root)
if full_out is None:
raise NotThisMethod("'git rev-parse' failed")
full_out = full_out.strip()
pieces = {}
pieces["long"] = full_out
pieces["short"] = full_out[:7] # maybe improved later
pieces["error"] = None
branch_name, rc = runner(GITS, ["rev-parse", "--abbrev-ref", "HEAD"],
cwd=root)
# --abbrev-ref was added in git-1.6.3
if rc != 0 or branch_name is None:
raise NotThisMethod("'git rev-parse --abbrev-ref' returned error")
branch_name = branch_name.strip()
if branch_name == "HEAD":
# If we aren't exactly on a branch, pick a branch which represents
# the current commit. If all else fails, we are on a branchless
# commit.
branches, rc = runner(GITS, ["branch", "--contains"], cwd=root)
# --contains was added in git-1.5.4
if rc != 0 or branches is None:
raise NotThisMethod("'git branch --contains' returned error")
branches = branches.split("\n")
# Remove the first line if we're running detached
if "(" in branches[0]:
branches.pop(0)
# Strip off the leading "* " from the list of branches.
branches = [branch[2:] for branch in branches]
if "master" in branches:
branch_name = "master"
elif not branches:
branch_name = None
else:
# Pick the first branch that is returned. Good or bad.
branch_name = branches[0]
pieces["branch"] = branch_name
# parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty]
# TAG might have hyphens.
git_describe = describe_out
# look for -dirty suffix
dirty = git_describe.endswith("-dirty")
pieces["dirty"] = dirty
if dirty:
git_describe = git_describe[:git_describe.rindex("-dirty")]
# now we have TAG-NUM-gHEX or HEX
if "-" in git_describe:
# TAG-NUM-gHEX
mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe)
if not mo:
# unparsable. Maybe git-describe is misbehaving?
pieces["error"] = ("unable to parse git-describe output: '%s'"
% describe_out)
return pieces
# tag
full_tag = mo.group(1)
if not full_tag.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (full_tag, tag_prefix))
pieces["error"] = ("tag '%s' doesn't start with prefix '%s'"
% (full_tag, tag_prefix))
return pieces
pieces["closest-tag"] = full_tag[len(tag_prefix):]
# distance: number of commits since tag
pieces["distance"] = int(mo.group(2))
# commit: short hex revision ID
pieces["short"] = mo.group(3)
else:
# HEX: no tags
pieces["closest-tag"] = None
count_out, rc = runner(GITS, ["rev-list", "HEAD", "--count"], cwd=root)
pieces["distance"] = int(count_out) # total number of commits
# commit date: see ISO-8601 comment in git_versions_from_keywords()
date = runner(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip()
# Use only the last line. Previous lines may contain GPG signature
# information.
date = date.splitlines()[-1]
pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1)
return pieces
def do_vcs_install(manifest_in, versionfile_source, ipy):
"""Git-specific installation logic for Versioneer.
For Git, this means creating/changing .gitattributes to mark _version.py
for export-subst keyword substitution.
"""
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
files = [manifest_in, versionfile_source]
if ipy:
files.append(ipy)
try:
my_path = __file__
if my_path.endswith(".pyc") or my_path.endswith(".pyo"):
my_path = os.path.splitext(my_path)[0] + ".py"
versioneer_file = os.path.relpath(my_path)
except NameError:
versioneer_file = "versioneer.py"
files.append(versioneer_file)
present = False
try:
with open(".gitattributes", "r") as fobj:
for line in fobj:
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
break
except OSError:
pass
if not present:
with open(".gitattributes", "a+") as fobj:
fobj.write(f"{versionfile_source} export-subst\n")
files.append(".gitattributes")
run_command(GITS, ["add", "--"] + files)
def versions_from_parentdir(parentdir_prefix, root, verbose):
"""Try to determine the version from the parent directory name.
Source tarballs conventionally unpack into a directory that includes both
the project name and a version string. We will also support searching up
two directory levels for an appropriately named parent directory
"""
rootdirs = []
for _ in range(3):
dirname = os.path.basename(root)
if dirname.startswith(parentdir_prefix):
return {"version": dirname[len(parentdir_prefix):],
"full-revisionid": None,
"dirty": False, "error": None, "date": None}
rootdirs.append(root)
root = os.path.dirname(root) # up a level
if verbose:
print("Tried directories %s but none started with prefix %s" %
(str(rootdirs), parentdir_prefix))
raise NotThisMethod("rootdir doesn't start with parentdir_prefix")
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.22) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
%s
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
"""
def versions_from_file(filename):
"""Try to determine the version from _version.py if present."""
try:
with open(filename) as f:
contents = f.read()
except OSError:
raise NotThisMethod("unable to read _version.py")
mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
mo = re.search(r"version_json = '''\r\n(.*)''' # END VERSION_JSON",
contents, re.M | re.S)
if not mo:
raise NotThisMethod("no version_json in _version.py")
return json.loads(mo.group(1))
def write_to_version_file(filename, versions):
"""Write the given version number to the given _version.py file."""
os.unlink(filename)
contents = json.dumps(versions, sort_keys=True,
indent=1, separators=(",", ": "))
with open(filename, "w") as f:
f.write(SHORT_VERSION_PY % contents)
print("set %s to '%s'" % (filename, versions["version"]))
def plus_or_dot(pieces):
"""Return a + if we don't already have one, else return a ."""
if "+" in pieces.get("closest-tag", ""):
return "."
return "+"
def render_pep440(pieces):
"""Build up version string, with post-release "local version identifier".
Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty
Exceptions:
1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_branch(pieces):
"""TAG[[.dev0]+DISTANCE.gHEX[.dirty]] .
The ".dev0" means not master branch. Note that .dev0 sorts backwards
(a feature branch will appear "older" than the master branch).
Exceptions:
1: no tags. 0[.dev0]+untagged.DISTANCE.gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0"
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += "+untagged.%d.g%s" % (pieces["distance"],
pieces["short"])
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def pep440_split_post(ver):
"""Split pep440 version string at the post-release segment.
Returns the release segments before the post-release and the
post-release version number (or None if no post-release segment is present).
"""
vc = str.split(ver, ".post")
return vc[0], int(vc[1] or 0) if len(vc) == 2 else None
def render_pep440_pre(pieces):
"""TAG[.postN.devDISTANCE] -- No -dirty.
Exceptions:
1: no tags. 0.post0.devDISTANCE
"""
if pieces["closest-tag"]:
if pieces["distance"]:
# update the post release segment
tag_version, post_version = pep440_split_post(pieces["closest-tag"])
rendered = tag_version
if post_version is not None:
rendered += ".post%d.dev%d" % (post_version+1, pieces["distance"])
else:
rendered += ".post0.dev%d" % (pieces["distance"])
else:
# no commits, use the tag as the version
rendered = pieces["closest-tag"]
else:
# exception #1
rendered = "0.post0.dev%d" % pieces["distance"]
return rendered
def render_pep440_post(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX] .
The ".dev0" means dirty. Note that .dev0 sorts backwards
(a dirty tree will appear "older" than the corresponding clean one),
but you shouldn't be releasing software with -dirty anyways.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
return rendered
def render_pep440_post_branch(pieces):
"""TAG[.postDISTANCE[.dev0]+gHEX[.dirty]] .
The ".dev0" means not master branch.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]+gHEX[.dirty]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += plus_or_dot(pieces)
rendered += "g%s" % pieces["short"]
if pieces["dirty"]:
rendered += ".dirty"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["branch"] != "master":
rendered += ".dev0"
rendered += "+g%s" % pieces["short"]
if pieces["dirty"]:
rendered += ".dirty"
return rendered
def render_pep440_old(pieces):
"""TAG[.postDISTANCE[.dev0]] .
The ".dev0" means dirty.
Exceptions:
1: no tags. 0.postDISTANCE[.dev0]
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"] or pieces["dirty"]:
rendered += ".post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
else:
# exception #1
rendered = "0.post%d" % pieces["distance"]
if pieces["dirty"]:
rendered += ".dev0"
return rendered
def render_git_describe(pieces):
"""TAG[-DISTANCE-gHEX][-dirty].
Like 'git describe --tags --dirty --always'.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
if pieces["distance"]:
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render_git_describe_long(pieces):
"""TAG-DISTANCE-gHEX[-dirty].
Like 'git describe --tags --dirty --always --long'.
The distance/hash is unconditional.
Exceptions:
1: no tags. HEX[-dirty] (note: no 'g' prefix)
"""
if pieces["closest-tag"]:
rendered = pieces["closest-tag"]
rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
else:
# exception #1
rendered = pieces["short"]
if pieces["dirty"]:
rendered += "-dirty"
return rendered
def render(pieces, style):
"""Render the given version pieces into the requested style."""
if pieces["error"]:
return {"version": "unknown",
"full-revisionid": pieces.get("long"),
"dirty": None,
"error": pieces["error"],
"date": None}
if not style or style == "default":
style = "pep440" # the default
if style == "pep440":
rendered = render_pep440(pieces)
elif style == "pep440-branch":
rendered = render_pep440_branch(pieces)
elif style == "pep440-pre":
rendered = render_pep440_pre(pieces)
elif style == "pep440-post":
rendered = render_pep440_post(pieces)
elif style == "pep440-post-branch":
rendered = render_pep440_post_branch(pieces)
elif style == "pep440-old":
rendered = render_pep440_old(pieces)
elif style == "git-describe":
rendered = render_git_describe(pieces)
elif style == "git-describe-long":
rendered = render_git_describe_long(pieces)
else:
raise ValueError("unknown style '%s'" % style)
return {"version": rendered, "full-revisionid": pieces["long"],
"dirty": pieces["dirty"], "error": None,
"date": pieces.get("date")}
class VersioneerBadRootError(Exception):
"""The project root directory is unknown or missing key files."""
def get_versions(verbose=False):
"""Get the project version from whatever source is available.
Returns dict with two keys: 'version' and 'full'.
"""
if "versioneer" in sys.modules:
# see the discussion in cmdclass.py:get_cmdclass()
del sys.modules["versioneer"]
root = get_root()
cfg = get_config_from_root(root)
assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
handlers = HANDLERS.get(cfg.VCS)
assert handlers, "unrecognized VCS '%s'" % cfg.VCS
verbose = verbose or cfg.verbose
assert cfg.versionfile_source is not None, \
"please set versioneer.versionfile_source"
assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"
versionfile_abs = os.path.join(root, cfg.versionfile_source)
# extract version from first of: _version.py, VCS command (e.g. 'git
# describe'), parentdir. This is meant to work for developers using a
# source checkout, for users of a tarball created by 'setup.py sdist',
# and for users of a tarball/zipball created by 'git archive' or github's
# download-from-tag feature or the equivalent in other VCSes.
get_keywords_f = handlers.get("get_keywords")
from_keywords_f = handlers.get("keywords")
if get_keywords_f and from_keywords_f:
try:
keywords = get_keywords_f(versionfile_abs)
ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
if verbose:
print("got version from expanded keyword %s" % ver)
return ver
except NotThisMethod:
pass
try:
ver = versions_from_file(versionfile_abs)
if verbose:
print("got version from file %s %s" % (versionfile_abs, ver))
return ver
except NotThisMethod:
pass
from_vcs_f = handlers.get("pieces_from_vcs")
if from_vcs_f:
try:
pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
ver = render(pieces, cfg.style)
if verbose:
print("got version from VCS %s" % ver)
return ver
except NotThisMethod:
pass
try:
if cfg.parentdir_prefix:
ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
if verbose:
print("got version from parentdir %s" % ver)
return ver
except NotThisMethod:
pass
if verbose:
print("unable to compute version")
return {"version": "0+unknown", "full-revisionid": None,
"dirty": None, "error": "unable to compute version",
"date": None}
def get_version():
"""Get the short version string for this project."""
return get_versions()["version"]
def get_cmdclass(cmdclass=None):
"""Get the custom setuptools/distutils subclasses used by Versioneer.
If the package uses a different cmdclass (e.g. one from numpy), it
should be provided as an argument.
"""
if "versioneer" in sys.modules:
del sys.modules["versioneer"]
# this fixes the "python setup.py develop" case (also 'install' and
# 'easy_install .'), in which subdependencies of the main project are
# built (using setup.py bdist_egg) in the same python process. Assume
# a main project A and a dependency B, which use different versions
# of Versioneer. A's setup.py imports A's Versioneer, leaving it in
# sys.modules by the time B's setup.py is executed, causing B to run
# with the wrong versioneer. Setuptools wraps the sub-dep builds in a
# sandbox that restores sys.modules to its pre-build state, so the
# parent is protected against the child's "import versioneer". By
# removing ourselves from sys.modules here, before the child build
# happens, we protect the child from the parent's versioneer too.
# Also see https://github.com/python-versioneer/python-versioneer/issues/52
cmds = {} if cmdclass is None else cmdclass.copy()
# we add "version" to both distutils and setuptools
try:
from setuptools import Command
except ImportError:
from distutils.core import Command
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
vers = get_versions(verbose=True)
print("Version: %s" % vers["version"])
print(" full-revisionid: %s" % vers.get("full-revisionid"))
print(" dirty: %s" % vers.get("dirty"))
print(" date: %s" % vers.get("date"))
if vers["error"]:
print(" error: %s" % vers["error"])
cmds["version"] = cmd_version
# we override "build_py" in both distutils and setuptools
#
# most invocation pathways end up running build_py:
# distutils/build -> build_py
# distutils/install -> distutils/build ->..
# setuptools/bdist_wheel -> distutils/install ->..
# setuptools/bdist_egg -> distutils/install_lib -> build_py
# setuptools/install -> bdist_egg ->..
# setuptools/develop -> ?
# pip install:
# copies source tree to a tempdir before running egg_info/etc
# if .git isn't copied too, 'git describe' will fail
# then does setup.py bdist_wheel, or sometimes setup.py install
# setup.py egg_info -> ?
# we override different "build_py" commands for both environments
if 'build_py' in cmds:
_build_py = cmds['build_py']
elif "setuptools" in sys.modules:
from setuptools.command.build_py import build_py as _build_py
else:
from distutils.command.build_py import build_py as _build_py
class cmd_build_py(_build_py):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace
# it with an updated value
if cfg.versionfile_build:
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_py"] = cmd_build_py
if 'build_ext' in cmds:
_build_ext = cmds['build_ext']
elif "setuptools" in sys.modules:
from setuptools.command.build_ext import build_ext as _build_ext
else:
from distutils.command.build_ext import build_ext as _build_ext
class cmd_build_ext(_build_ext):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
_build_ext.run(self)
if self.inplace:
# build_ext --inplace will only build extensions in
# build/lib<..> dir with no _version.py to write to.
# As in place builds will already have a _version.py
# in the module dir, we do not need to write one.
return
# now locate _version.py in the new build/ directory and replace
# it with an updated value
target_versionfile = os.path.join(self.build_lib,
cfg.versionfile_build)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
cmds["build_ext"] = cmd_build_ext
if "cx_Freeze" in sys.modules: # cx_freeze enabled?
from cx_Freeze.dist import build_exe as _build_exe
# nczeczulin reports that py2exe won't like the pep440-style string
# as FILEVERSION, but it can be used for PRODUCTVERSION, e.g.
# setup(console=[{
# "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION
# "product_version": versioneer.get_version(),
# ...
class cmd_build_exe(_build_exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_build_exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["build_exe"] = cmd_build_exe
del cmds["build_py"]
if 'py2exe' in sys.modules: # py2exe enabled?
from py2exe.distutils_buildexe import py2exe as _py2exe
class cmd_py2exe(_py2exe):
def run(self):
root = get_root()
cfg = get_config_from_root(root)
versions = get_versions()
target_versionfile = cfg.versionfile_source
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile, versions)
_py2exe.run(self)
os.unlink(target_versionfile)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG %
{"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
cmds["py2exe"] = cmd_py2exe
# we override different "sdist" commands for both environments
if 'sdist' in cmds:
_sdist = cmds['sdist']
elif "setuptools" in sys.modules:
from setuptools.command.sdist import sdist as _sdist
else:
from distutils.command.sdist import sdist as _sdist
class cmd_sdist(_sdist):
def run(self):
versions = get_versions()
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old
# version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
root = get_root()
cfg = get_config_from_root(root)
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory
# (remembering that it may be a hardlink) and replace it with an
# updated value
target_versionfile = os.path.join(base_dir, cfg.versionfile_source)
print("UPDATING %s" % target_versionfile)
write_to_version_file(target_versionfile,
self._versioneer_generated_versions)
cmds["sdist"] = cmd_sdist
return cmds
CONFIG_ERROR = """
setup.cfg is missing the necessary Versioneer configuration. You need
a section like:
[versioneer]
VCS = git
style = pep440
versionfile_source = src/myproject/_version.py
versionfile_build = myproject/_version.py
tag_prefix =
parentdir_prefix = myproject-
You will also need to edit your setup.py to use the results:
import versioneer
setup(version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(), ...)
Please read the docstring in ./versioneer.py for configuration instructions,
edit setup.cfg, and re-run the installer or 'python versioneer.py setup'.
"""
SAMPLE_CONFIG = """
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
[versioneer]
#VCS = git
#style = pep440
#versionfile_source =
#versionfile_build =
#tag_prefix =
#parentdir_prefix =
"""
OLD_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
INIT_PY_SNIPPET = """
from . import {0}
__version__ = {0}.get_versions()['version']
"""
def do_setup():
"""Do main VCS-independent setup function for installing Versioneer."""
root = get_root()
try:
cfg = get_config_from_root(root)
except (OSError, configparser.NoSectionError,
configparser.NoOptionError) as e:
if isinstance(e, (OSError, configparser.NoSectionError)):
print("Adding sample versioneer config to setup.cfg",
file=sys.stderr)
with open(os.path.join(root, "setup.cfg"), "a") as f:
f.write(SAMPLE_CONFIG)
print(CONFIG_ERROR, file=sys.stderr)
return 1
print(" creating %s" % cfg.versionfile_source)
with open(cfg.versionfile_source, "w") as f:
LONG = LONG_VERSION_PY[cfg.VCS]
f.write(LONG % {"DOLLAR": "$",
"STYLE": cfg.style,
"TAG_PREFIX": cfg.tag_prefix,
"PARENTDIR_PREFIX": cfg.parentdir_prefix,
"VERSIONFILE_SOURCE": cfg.versionfile_source,
})
ipy = os.path.join(os.path.dirname(cfg.versionfile_source),
"__init__.py")
if os.path.exists(ipy):
try:
with open(ipy, "r") as f:
old = f.read()
except OSError:
old = ""
module = os.path.splitext(os.path.basename(cfg.versionfile_source))[0]
snippet = INIT_PY_SNIPPET.format(module)
if OLD_SNIPPET in old:
print(" replacing boilerplate in %s" % ipy)
with open(ipy, "w") as f:
f.write(old.replace(OLD_SNIPPET, snippet))
elif snippet not in old:
print(" appending to %s" % ipy)
with open(ipy, "a") as f:
f.write(snippet)
else:
print(" %s unmodified" % ipy)
else:
print(" %s doesn't exist, ok" % ipy)
ipy = None
# Make sure both the top-level "versioneer.py" and versionfile_source
# (PKG/_version.py, used by runtime code) are in MANIFEST.in, so
# they'll be copied into source distributions. Pip won't be able to
# install the package without this.
manifest_in = os.path.join(root, "MANIFEST.in")
simple_includes = set()
try:
with open(manifest_in, "r") as f:
for line in f:
if line.startswith("include "):
for include in line.split()[1:]:
simple_includes.add(include)
except OSError:
pass
# That doesn't cover everything MANIFEST.in can do
# (http://docs.python.org/2/distutils/sourcedist.html#commands), so
# it might give some false negatives. Appending redundant 'include'
# lines is safe, though.
if "versioneer.py" not in simple_includes:
print(" appending 'versioneer.py' to MANIFEST.in")
with open(manifest_in, "a") as f:
f.write("include versioneer.py\n")
else:
print(" 'versioneer.py' already in MANIFEST.in")
if cfg.versionfile_source not in simple_includes:
print(" appending versionfile_source ('%s') to MANIFEST.in" %
cfg.versionfile_source)
with open(manifest_in, "a") as f:
f.write("include %s\n" % cfg.versionfile_source)
else:
print(" versionfile_source already in MANIFEST.in")
# Make VCS-specific changes. For git, this means creating/changing
# .gitattributes to mark _version.py for export-subst keyword
# substitution.
do_vcs_install(manifest_in, cfg.versionfile_source, ipy)
return 0
def scan_setup_py():
"""Validate the contents of setup.py against Versioneer's expectations."""
found = set()
setters = False
errors = 0
with open("setup.py", "r") as f:
for line in f.readlines():
if "import versioneer" in line:
found.add("import")
if "versioneer.get_cmdclass()" in line:
found.add("cmdclass")
if "versioneer.get_version()" in line:
found.add("get_version")
if "versioneer.VCS" in line:
setters = True
if "versioneer.versionfile_source" in line:
setters = True
if len(found) != 3:
print("")
print("Your setup.py appears to be missing some important items")
print("(but I might be wrong). Please make sure it has something")
print("roughly like the following:")
print("")
print(" import versioneer")
print(" setup( version=versioneer.get_version(),")
print(" cmdclass=versioneer.get_cmdclass(), ...)")
print("")
errors += 1
if setters:
print("You should remove lines like 'versioneer.VCS = ' and")
print("'versioneer.versionfile_source = ' . This configuration")
print("now lives in setup.cfg, and should be removed from setup.py")
print("")
errors += 1
return errors
if __name__ == "__main__":
cmd = sys.argv[1]
if cmd == "setup":
errors = do_setup()
errors += scan_setup_py()
if errors:
sys.exit(1) | ytad | /ytad-0.0.8.tar.gz/ytad-0.0.8/versioneer.py | versioneer.py |
# ytam - YouTube Album Maker
A command-line utility that enables the creation of albums from YouTube playlists.
## Getting Started
<!--These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. -->
### Prerequisites
To be able to use the mp4 to mp3 conversion feature, ffmpeg must be installed.
#### Debian:
```
sudo apt-get install ffmpeg
```
#### Windows:
- Download ffmpeg binaries from [here](https://www.gyan.dev/ffmpeg/builds)
- Add the bin\\ directory to Windows PATH
### Installing
```
pip install ytam
```
Usage:
```
usage: ytam [-h] [-t TITLES] [-d DIRECTORY] [-s START] [-e END] [-A ALBUM]
[-a ARTIST] [-i IMAGE] [-p PROXY] [-3 [MP3]] [-k [CHECK]]
URL
positional arguments:
URL the target URL of the playlist to download
optional arguments:
-h, --help show this help message and exit
-t TITLES, --titles TITLES
a plain text file containing the desired titles and
artists of the songs in the playlist, each on a new
line. Format: title<@>artist
-d DIRECTORY, --directory DIRECTORY
the download directory (defaults to 'music' - a
subdirectory of the current directory)
-s START, --start START
from which position in the playlist to start
downloading (defaults to 1)
-e END, --end END position in the playlist of the last song to be
downloaded (defaults to last position in the playlist)
-A ALBUM, --album ALBUM
the name of the album that the songs in the playlist
belongs to (defaults to playlist title)
-a ARTIST, --artist ARTIST
the name of the artist that performed the songs in the
playlist (defaults to Unknown)
-i IMAGE, --image IMAGE
the path to the image to be used as the album cover.
Only works when -A flag is set
-p PROXY, --proxy PROXY
list of proxies to use. Must be enclosed in string
quotes with a space separating each proxy. Proxy
format: <protocol>-<proxy>
-3 [MP3], --mp3 [MP3]
converts downloaded files to mp3 format and deletes
original mp4 file. Requires ffmpeg to be installed on
your machine
-k [CHECK], --check [CHECK]
checks whether ytam is working as it should by trying
to download a pre-defined playlist and setting pre-
defined metadata. Setting this argument causes ytam to
ignore ALL others
```
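For example, a typical invocation might look like this (the playlist URL, album name, and artist below are placeholders, not real values):
```
ytam "https://www.youtube.com/playlist?list=PLxxxxxxxxxxxx" -d music -A "My Album" -a "Some Artist" -3
```
This would download the playlist into the `music` subdirectory, tag each track with the given album and artist, and convert the downloaded files to mp3.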
## Tests
TODO
<!-- ## Running the tests
Explain how to run the automated tests for this system
### Break down into end to end tests
Explain what these tests test and why
```
Give an example
```
### And coding style tests
Explain what these tests test and why
```
Give an example
```
## Deployment
Add additional notes about how to deploy this on a live system -->
## Built With
* [pytube](http://github.com/nficano/pytube.git) - Lightweight Python library for downloading videos
* [mutagen](https://mutagen.readthedocs.io/en/latest/api/mp4.html) - For MP4 metadata tagging
* [argparse](https://docs.python.org/3/library/argparse.html) - For parsing commandline arguments
* [ffmpeg](https://ffmpeg.org/) - For mp4 to mp3 conversion
<!-- ## Contributing
Please read [CONTRIBUTING.md](https://gist.github.com/PurpleBooth/b24679402957c63ec426) for details on our code of conduct, and the process for submitting pull requests to us.
## Versioning
We use [SemVer](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/your/project/tags).
-->
## Authors
* **jayathungek** - *Initial work* - [jayathungek](https://github.com/jayathungek)
<!-- See also the list of [contributors](https://github.com/your/project/contributors) who participated in this project. -->
<!-- ## License
This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details -->
<!-- ## Acknowledgments
* Hat tip to anyone whose code was used
* Inspiration
* etc
--> | ytam | /ytam-0.4.4.tar.gz/ytam-0.4.4/README.md | README.md |
import logging
import numpy as np
import pandas as pd # type: ignore
from typing import Sequence, Union, Tuple, List, Dict
from .config import settings
logger = logging.getLogger(settings.PROJECT_SLUG)
DOWNLOADING_FUNC_PARAM = "url"
COL_TYPE_DICT = {"url": str, "format": str, "time_start": "int64",
"time_end": str, "fps": int, "bitrate": str}
def columns_validation(columns: Sequence[str]) -> Union[List[str], None]:
"""Validate columns of dataframe read from csv file.
Ensure that the requisite column "url" exists, and determine which optional
columns are present, such as "format", "time_start", "time_end", "bitrate".
Args:
columns (:obj:`list` of :obj:`str`): column names of dataframe read
from csv file.
Returns:
:obj:`None` or :obj:`list` of :obj:`str`: return None if "url" is not
included in the columns; otherwise return the list of optional columns
that are present.
"""
converting_func_params = set(COL_TYPE_DICT.keys())
converting_func_params.remove(DOWNLOADING_FUNC_PARAM)
if DOWNLOADING_FUNC_PARAM not in columns:
return None
return list(converting_func_params.intersection(columns))
def load_file(file_path: str) -> Tuple[List[str], List[Dict]]:
"""Load information from csv file for downloading video function and converting
video function.
Args:
file_path (:obj:`str`): path to the csv file containing information for
the video downloading and converting functions.
Returns:
:obj:`tuple` (:obj:`list` of :obj:`str` and :obj:`list` of :obj:`dict`):
urls for downloading videos as a list and converting function params as
a list of dictionaries.
"""
df = pd.read_csv(file_path, sep=",", dtype=COL_TYPE_DICT)
converting_func_params = columns_validation(df.columns)
converting_info_dicts: List[Dict] = []
if converting_func_params is None:
logger.error("The requisite column [url] is not in the csv file.")
raise ValueError("Missing column [url] in csv file.")
elif not converting_func_params:
pass
elif "time_end" in converting_func_params:
df["time_end"] = df["time_end"].replace(np.nan, None)
for info_dict in df[converting_func_params].to_dict('records'):
if info_dict["time_end"]:
info_dict["time_end"] = int(info_dict["time_end"])
converting_info_dicts.append(info_dict)
else:
converting_info_dicts = df[converting_func_params].to_dict('records')
return df[DOWNLOADING_FUNC_PARAM].tolist(), converting_info_dicts | ytb-downloader | /ytb_downloader-0.3.1-py3-none-any.whl/ytb_downloader/file_loader.py | file_loader.py |
"""Convert the downloaded youtube video file to the given format."""
import logging
import moviepy.editor as mp # type: ignore
import os
from typing import Optional, Union
from .config import settings
from .constants import Media
logger = logging.getLogger(settings.PROJECT_SLUG)
def convert_to(downloaded_file: str,
media_format: Media = Media.AUDIO,
conversion_format: str = 'mp3',
t_start: int = 0, t_end: Optional[int] = None,
fps: int = 44100, bitrate: str = '3000k') -> Union[str, None]:
"""Convert the downloaded youtube video file to the given format.
Args:
downloaded_file (:obj:`str`): path to the download youtube video file.
media_format (:obj:`Media`): the original file's format.
conversion_format (:obj:`str`): format of the output file, mp3, avi etc.
t_start (:obj:`int`): starting point for cutting the video.
t_end (:obj:`int`, optional): ending point for cutting the video, if
not provided, the whole video will be converted.
fps (:obj:`int`): Frames per second. It will default to 44100.
bitrate (:obj:`str`): Audio bitrate, given as a string like '50k', '500k',
'3000k'. Will determine the size and quality of the output file.
            Note that it is mainly an indicative goal; the bitrate won't
            necessarily be exactly this in the output file.
Returns:
        :obj:`None` or :obj:`str`: the converted file path if the conversion
            succeeds, otherwise None.
"""
if not os.path.exists(downloaded_file):
logger.error("The given downloaded file path doesn't exist.")
return None
filename, _ = os.path.splitext(downloaded_file)
output_file = "{}.{}".format(filename, conversion_format)
if media_format == Media.VIDEO:
clip = mp.VideoFileClip(downloaded_file).subclip(t_start, t_end).audio
else:
clip = mp.AudioFileClip(downloaded_file).subclip(t_start, t_end)
clip.write_audiofile(output_file, fps=fps, bitrate=bitrate)
return output_file | ytb-downloader | /ytb_downloader-0.3.1-py3-none-any.whl/ytb_downloader/converter.py | converter.py |
import click
import logging
import os
from typing import Optional
from .config import settings
from .converter import convert_to
from .downloader import download, YDL_AUDIO_OPTS, YDL_VIDEO_OPTS
from .file_loader import load_file
from .constants import Media
logger = logging.getLogger(settings.PROJECT_SLUG)
@click.command()
@click.option('--video-only', is_flag=True)
@click.option('--format', '-f', default="mp3", required=False, type=str,
show_default=True, help='Output file format, like mp3, avi etc.')
@click.option('--time-start', '-ts', default=0, required=False, type=int,
show_default=True, help='Start time for converting the video.')
@click.option('--time-end', '-te', required=False, type=int, show_default=True,
help='End time for converting the video. If not provided, the '
'whole video will be converted')
@click.option('--fps', '-fs', default=44100, required=False,
type=int, show_default=True,
help="Frames per second. It will default to 44100.")
@click.option('--bitrate', '-br', default='3000k', required=False,
type=str, show_default=True,
help="Audio bitrate, like '50k', '500k', '3000k'.")
@click.argument("url")
def download_single(video_only: bool, format: str, time_start: int,
time_end: Optional[int], fps: int, bitrate: str, url: str):
"""CMD function for downloading a video from given URL and convert it to the
audio with given format and other converting params.
Args:
video_only (:obj:`bool`): flag, to indicate whether to convert the video
or not.
format (:obj:`str`): audio format for the conversion.
time_start (:obj:`int`): starting time for cutting the video.
time_end (:obj:`int`, optional): end time for cutting the video.
bitrate (:obj:`str`): bitrate of the audio.
url (:obj:`str`): url of the video to download and convert.
"""
ytb_opts = YDL_VIDEO_OPTS if video_only else YDL_AUDIO_OPTS
tmp_output_file = download([url], ytb_opts)[0]
if video_only:
msg = "The video from [{}] is downloaded " \
"in file [{}].".format(url, tmp_output_file)
logger.info(msg)
return
output_file = convert_to(tmp_output_file, Media.AUDIO, format,
time_start, time_end, fps, bitrate)
os.remove(tmp_output_file)
msg = "The video from [{}] is downloaded and converted to [{}] format " \
"in file [{}].".format(url, format, output_file)
logger.info(msg)
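# Illustrative invocation, assuming this command is exposed through a console
# entry point (the script name "ytb-downloader" below is an assumption):
#   ytb-downloader -f mp3 -ts 30 -te 90 "https://www.youtube.com/watch?v=<id>"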
@click.command()
@click.option('--video-only', is_flag=True)
@click.argument("file")
def download_bulk(video_only: bool, file: str):
"""Download videos in bulk and convert them to audios.
Args:
video_only (:obj:`bool`): flag, to indicate whether to convert the video or
not.
file (:obj:`str`): path to the csv file containing all the information for
downloading and converting.
"""
urls, info_dicts = load_file(file)
ytb_opts = YDL_VIDEO_OPTS if video_only else YDL_AUDIO_OPTS
tmp_output_files = download(urls, ytb_opts)
if not video_only:
output_files = []
for i, info_dict in enumerate(info_dicts):
output_file = convert_to(tmp_output_files[i], **info_dict)
os.remove(tmp_output_files[i])
output_files.append(output_file)
msg = "The videos are downloaded and converted to files {}.".format(str(output_files))
else:
msg = "The videos are downloaded in files {}.".format(str(tmp_output_files))
logger.info(msg) | ytb-downloader | /ytb_downloader-0.3.1-py3-none-any.whl/ytb_downloader/main.py | main.py |
""" Login module """
import json
from typing import Dict, List
def domain_to_url(domain: str) -> str:
""" Converts a (partial) domain to valid URL """
if domain.startswith("."):
domain = "www" + domain
return "http://" + domain
async def format_cookie_file(cookie_file: str):
"""Restore auth cookies from a file. Does not guarantee that the user is logged in afterwards.
Visits the domains specified in the cookies to set them, the previous page is not restored."""
domain_cookies: Dict[str, List[object]] = {}
# cookie_file=r'D:\Download\audio-visual\make-reddit-video\auddit\assets\cookies\aww.json'
with open(cookie_file) as file:
cookies: List = json.load(file)
# Sort cookies by domain, because we need to visit to domain to add cookies
for cookie in cookies:
if cookie['sameSite']=='no_restriction' or cookie['sameSite'].lower()=='no_restriction':
cookie.update(sameSite='None')
try:
domain_cookies[cookie["domain"]].append(cookie)
except KeyError:
domain_cookies[cookie["domain"]] = [cookie]
# print(str(domain_cookies).replace(",", ",\n"))
# cookie.pop("sameSite", None) # Attribute should be available in Selenium >4
# cookie.pop("storeId", None) # Firefox container attribute
print('add cookies',domain_cookies[cookie["domain"]])
# await self.context.add_cookies(cookies)
return domain_cookies[cookie["domain"]]
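# Illustrative cookie-file shape expected above (a browser "export cookies as
# JSON" dump; field names other than "domain" and "sameSite" are assumptions):
#   [{"domain": ".youtube.com", "name": "SID", "value": "...",
#     "sameSite": "no_restriction", "path": "/", "secure": true}]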
def confirm_logged_in(page) -> bool:
""" Confirm that the user is logged in. The browser needs to be navigated to a YouTube page. """
try:
print(page.locator("yt-img-shadow.ytd-topbar-menu-button-renderer > img:nth-child(1)"))
page.locator("yt-img-shadow.ytd-topbar-menu-button-renderer > img:nth-child(1)")
# WebDriverWait(page, 10).until(EC.element_to_be_clickable("avatar-btn")))
return True
except TimeoutError:
return False
def confirm_logged_in_douyin(page) -> bool:
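    """ Confirm that the user is logged in to Douyin. The browser needs to be navigated to a Douyin creator page. """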
try:
page.locator('.avatar--1lU_a')
return True
except:
        return False | ytb-up | /ytb_up-0.1.15-py3-none-any.whl/ytb_up/login.py | login.py
TIKTOK_URL= f"https://www.tiktok.com/upload?lang=en"
DOUYIN_URL = "https://creator.douyin.com/creator-micro/home"
DOUYIN_STUDIO_URL = "https://creator.douyin.com/creator-micro/home"
DOUYIN_UPLOAD_URL = "https://creator.douyin.com/creator-micro/content/upload"
DOUYIN_INPUT_FILE_VIDEO='.upload-btn--9eZLd'
DOUYIN_TEXTBOX ='.notranslate'
DOUYIN_TITLE_COUNTER=500
DOUYIN_INPUT_FILE_THUMBNAIL_EDIT='.mycard-info-text-span--1vcFz'
DOUYIN_INPUT_FILE_THUMBNAIL_OPTION_UPLOAD='.header--3-YMH > div:nth-child(1) > div:nth-child(1) > div:nth-child(2)'
DOUYIN_INPUT_FILE_THUMBNAIL='.upload-btn--9eZLd'
DOUYIN_INPUT_FILE_THUMBNAIL_UPLOAD_TRIM_CONFIRM='button.primary--1AMXd:nth-child(2)'
DOUYIN_INPUT_FILE_THUMBNAIL_UPLOAD_CONFIRM='.submit--3Qt1n'
DOUYIN_LOCATION='.select--148Qe > div:nth-child(1) > div:nth-child(1)'
DOUYIN_LOCATION_RESULT='div.semi-select-option:nth-child(1) > div:nth-child(2) > div:nth-child(1)'
DOUYIN_MINI_SELECT='.select--2uNK1'
DOUYIN_MINI_SELECT_OPTION='.select--2uNK1 > div:nth-child(1) > div:nth-child(1) > span:nth-child(1)'
DOUYIN_MINI='.semi-input'
DOUYIN_MINI_RESULT=''
DOUYIN_HOT_TOPIC='div.semi-select-filterable:nth-child(2) > div:nth-child(1) > div:nth-child(1) > span:nth-child(1)'
DOUYIN_HOT_TOPIC_RESULT='//html/body/div[7]/div/div/div/div/div/div/div[1]/div[2]/div[1]/div'
DOUYIN_HEJI_SELECT_OPTION='.sel-area--2hBSM > div:nth-child(2) > div:nth-child(1) > div:nth-child(1)'
DOUYIN_HEJI_SELECT_OPTION_VALUE='.sel-area--2hBSM > div:nth-child(2) > div:nth-child(1) > div:nth-child(1) > span:nth-child(1) > div:nth-child(1)'
DOUYIN_UP2='.semi-switch-native-control'
WEIXIN_URL="https://channels.weixin.qq.com/post/create"
YOUTUBE_URL = "https://www.youtube.com"
YOUTUBE_STUDIO_URL = "https://studio.youtube.com"
YOUTUBE_UPLOAD_URL = "https://www.youtube.com/upload"
USER_WAITING_TIME = 1
# CONTAINERS
CONFIRM_CONTAINER='#confirmation-dialog'
TAGS_CONTAINER = '//*[@id="tags-container"]'
ERROR_CONTAINER = '//*[@id="error-message"]'
STATUS_CONTAINER = "//html/body/ytcp-uploads-dialog/tp-yt-paper-dialog/div/ytcp-animatable[2]/div/div[1]/ytcp-video-upload-progress/span"
VIDEO_URL_CONTAINER = "//span[@class='video-url-fadeable style-scope ytcp-video-info']"
DESCRIPTION_CONTAINER = "//*[@id='description-container']"
MORE_OPTIONS_CONTAINER = "#toggle-button > div:nth-child(2)"
TIME_BETWEEN_POSTS = 3600
# COUNTERS
TAGS_COUNTER = 500
TITLE_COUNTER = 100
DESCRIPTION_COUNTER = 5000
# OTHER
HREF = "href"
TEXTBOX = "#title-textarea"
UPLOADED = "Uploading"
TEXT_INPUT = "#text-input"
NOT_MADE_FOR_KIDS_RADIO_LABEL = "tp-yt-paper-radio-button.ytkc-made-for-kids-select:nth-child(2) > div:nth-child(2) > ytcp-ve:nth-child(1)"
DONE_BUTTON = "//*[@id='done-button']"
NEXT_BUTTON = "//*[@id='next-button']"
PUBLIC_RADIO_LABEL ="tp-yt-paper-radio-button.style-scope:nth-child(20)"
PRIVATE_RADIO_LABEL ="#private-radio-button > div:nth-child(1)"
PUBLIC_BUTTON = "PUBLIC"
PRIVATE_BUTTON = "PRIVATE"
RADIO_CONTAINER = "//*[@id='radioContainer']"
PUBLISH_DATE="//html/body/ytcp-uploads-dialog/tp-yt-paper-dialog/div/ytcp-animatable[1]/ytcp-uploads-review/div[2]/div[1]/ytcp-video-visibility-select/div[2]/ytcp-visibility-scheduler/div[1]/ytcp-datetime-picker/div/ytcp-text-dropdown-trigger[1]/ytcp-dropdown-trigger/div/div[2]/span"
INPUT_FILE_VIDEO = "//input[@type='file']"
VIDEO_URL_ELEMENT = "//a[@class='style-scope ytcp-video-info']"
UPLOAD_DIALOG_MODAL = "#dialog.ytcp-uploads-dialog"
INPUT_FILE_THUMBNAIL = "//input[@accept='image/jpeg,image/png']"
VIDEO_NOT_FOUND_ERROR = "Could not find video_id"
NOT_MADE_FOR_KIDS_LABEL = ".made-for-kids-rating-container"
ERROR_SHORT_SELECTOR = '#dialog > div > ytcp-animatable.button-area.metadata-fade-in-section.style-scope.ytcp-uploads-dialog > div > div.left-button-area.style-scope.ytcp-uploads-dialog > div > div.error-short.style-scope.ytcp-uploads-dialog'
ERROR_SHORT_XPATH = '//*[@id="dialog"]/div/ytcp-animatable[2]/div/div[1]/div/div[1]'
UPLOADING_PROGRESS_SELECTOR = '#dialog > div > ytcp-animatable.button-area.metadata-fade-in-section.style-scope.ytcp-uploads-dialog > div > div.left-button-area.style-scope.ytcp-uploads-dialog > ytcp-video-upload-progress > span' | ytb-up | /ytb_up-0.1.15-py3-none-any.whl/ytb_up/constants.py | constants.py |