Switch docs to jupyter-execute sphinx extension for HTML reprs #10383


Merged: 23 commits, Jun 9, 2025
Changes from 1 commit

23 commits:
cf78cf8
switch user-guide from ipython sphinx extension to jupyter-execute
scottyhq Jun 1, 2025
b889477
switch internals to jupyter-execute
scottyhq Jun 1, 2025
beacea8
switch remain doc files to jupyter-execute
scottyhq Jun 1, 2025
3b111d9
manual review of data model section
scottyhq Jun 2, 2025
58a5406
manual review core-operations
scottyhq Jun 2, 2025
8765673
manual review of IO
scottyhq Jun 2, 2025
0478087
manual review of plotting
scottyhq Jun 2, 2025
7ec6b56
manual review of interoperability
scottyhq Jun 2, 2025
00aad0b
review domain-specific and testing
scottyhq Jun 2, 2025
e380019
review outputs in internals section
scottyhq Jun 3, 2025
84a7976
fully remove ipython directive
scottyhq Jun 3, 2025
cd35a8d
handle execution warnings in time-coding
scottyhq Jun 3, 2025
420e7fb
use zarr v2 and consolidated=False to silence execution warnings
scottyhq Jun 3, 2025
0f36da2
cleanup, handle more warnings for RTD build
scottyhq Jun 3, 2025
07886df
catch cartopy coastline download warning
scottyhq Jun 3, 2025
6e56985
catch downloading 50m coastline warning too
scottyhq Jun 3, 2025
3de4b5e
silence xmode minimal printouts, more compact numpy printout
scottyhq Jun 3, 2025
22edd95
dont execute code in whatsnew
scottyhq Jun 3, 2025
aeaeff6
fix dark mode for datatrees
scottyhq Jun 4, 2025
673ce44
fix mermaid diagram, kerchunk, ncdata, and zarr sections
scottyhq Jun 4, 2025
fe4ed54
use tree command to check local zarrs
scottyhq Jun 4, 2025
8ca7819
address time-coding review
scottyhq Jun 4, 2025
dea0c23
Merge branch 'main' into jupyter-sphinx
dcherian Jun 9, 2025
switch user-guide from ipython sphinx extension to jupyter-execute
scottyhq committed Jun 1, 2025
commit cf78cf8d862f8ee0aba8aabfe3b2dba0aeb6ed82
46 changes: 23 additions & 23 deletions doc/user-guide/combining.rst
@@ -3,8 +3,8 @@
Combining data
--------------

.. ipython:: python
:suppress:
.. jupyter-execute::
:hide-code:

import numpy as np
import pandas as pd
@@ -27,7 +27,7 @@ into a larger object, you can use :py:func:`~xarray.concat`. ``concat``
takes an iterable of ``DataArray`` or ``Dataset`` objects, as well as a
dimension name, and concatenates along that dimension:

.. ipython:: python
.. jupyter-execute::

da = xr.DataArray(
np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
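The diff fold cuts this first snippet short. A self-contained sketch of what it demonstrates, splitting a ``DataArray`` and concatenating the pieces back along an existing dimension (assumes xarray and numpy are installed):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

# Split along the existing "y" dimension, then concatenate the pieces back.
pieces = [da.isel(y=slice(0, 1)), da.isel(y=slice(1, None))]
combined = xr.concat(pieces, dim="y")
print(combined.shape)             # (2, 3)
print(bool(combined.equals(da)))  # True: the round trip is lossless
```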
@@ -41,7 +41,7 @@ dimension name, and concatenates along that dimension:
In addition to combining along an existing dimension, ``concat`` can create a
new dimension by stacking lower dimensional arrays together:

.. ipython:: python
.. jupyter-execute::

da.sel(x="a")
xr.concat([da.isel(x=0), da.isel(x=1)], "x")
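A runnable version of the stacking example above, rebuilding the page's ``da`` locally (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

# Each isel(x=i) slice is 1-D along "y"; concat stacks them back into a
# 2-D array, inserting "x" as the first dimension.
restacked = xr.concat([da.isel(x=0), da.isel(x=1)], "x")
print(restacked.dims)              # ('x', 'y')
print(bool(restacked.equals(da)))  # True
```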
@@ -50,21 +50,21 @@ If the second argument to ``concat`` is a new dimension name, the arrays will
be concatenated along that new dimension, which is always inserted as the first
dimension:

.. ipython:: python
.. jupyter-execute::

xr.concat([da.isel(x=0), da.isel(x=1)], "new_dim")

The second argument to ``concat`` can also be an :py:class:`~pandas.Index` or
:py:class:`~xarray.DataArray` object as well as a string, in which case it is
used to label the values along the new dimension:

.. ipython:: python
.. jupyter-execute::

xr.concat([da.isel(x=0), da.isel(x=1)], pd.Index([-90, -100], name="new_dim"))

Of course, ``concat`` also works on ``Dataset`` objects:

.. ipython:: python
.. jupyter-execute::

ds = da.to_dataset(name="foo")
xr.concat([ds.sel(x="a"), ds.sel(x="b")], "x")
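A consolidated, runnable sketch of the three behaviours in this hunk: a new dimension name, a ``pandas.Index`` of labels, and ``Dataset`` inputs (variable names follow the page; assumes xarray, numpy, and pandas):

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

# A new dimension name stacks the inputs along a brand-new first axis...
stacked = xr.concat([da.isel(x=0), da.isel(x=1)], "new_dim")
print(stacked.dims)  # ('new_dim', 'y')

# ...and a pandas Index labels the values along that new dimension.
labeled = xr.concat(
    [da.isel(x=0), da.isel(x=1)], pd.Index([-90, -100], name="new_dim")
)
print(labeled["new_dim"].values)  # [ -90 -100]

# Datasets concatenate the same way.
ds = da.to_dataset(name="foo")
roundtrip = xr.concat([ds.sel(x="a"), ds.sel(x="b")], "x")
print(roundtrip["foo"].shape)  # (2, 3)
```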
@@ -85,7 +85,7 @@ To combine variables and coordinates between multiple ``DataArray`` and/or
``Dataset``, ``DataArray`` or dictionaries of objects convertible to
``DataArray`` objects:

.. ipython:: python
.. jupyter-execute::

xr.merge([ds, ds.rename({"foo": "bar"})])
xr.merge([xr.DataArray(n, name="var%d" % n) for n in range(5)])
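The same two ``merge`` calls as above, made self-contained with a minimal stand-in for the ``ds`` built earlier on this page (assumes xarray):

```python
import xarray as xr

# Minimal stand-in for the `ds` constructed earlier in combining.rst.
ds = xr.Dataset(
    {"foo": (("x", "y"), [[0, 1, 2], [3, 4, 5]])},
    coords={"x": ["a", "b"], "y": [10, 20, 30]},
)

# Merging a dataset with a renamed copy of itself yields both variables.
merged = xr.merge([ds, ds.rename({"foo": "bar"})])
print(sorted(merged.data_vars))  # ['bar', 'foo']

# merge also accepts bare DataArrays, keyed by their names.
scalars = xr.merge([xr.DataArray(n, name="var%d" % n) for n in range(5)])
print(len(scalars.data_vars))  # 5
```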
@@ -94,7 +94,7 @@ If you merge another dataset (or a dictionary including data array objects), by
default the resulting dataset will be aligned on the **union** of all index
coordinates:

.. ipython:: python
.. jupyter-execute::

other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})
xr.merge([ds, other])
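A runnable illustration of the union alignment described above, using a simplified one-dimensional ``ds`` (an assumption for brevity; the page's ``ds`` is 2-D):

```python
import xarray as xr

ds = xr.Dataset({"foo": ("x", [0, 1, 2])}, coords={"x": ["a", "b", "c"]})
other = xr.Dataset({"bar": ("x", [1, 2, 3, 4])}, {"x": list("abcd")})

# The merged result is aligned on the union of the "x" labels, so "foo"
# picks up a NaN at the label it lacks ("d").
merged = xr.merge([ds, other])
print(merged.sizes["x"])                        # 4
print(bool(merged["foo"].isnull().sel(x="d")))  # True
```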
@@ -117,7 +117,7 @@ if you attempt to merge two variables with the same name but different values:
The same non-destructive merging between ``DataArray`` index coordinates is
used in the :py:class:`~xarray.Dataset` constructor:

.. ipython:: python
.. jupyter-execute::

xr.Dataset({"a": da.isel(x=slice(0, 1)), "b": da.isel(x=slice(1, 2))})

@@ -132,7 +132,7 @@ using values from the called object to fill holes. The resulting coordinates
are the union of coordinate labels. Vacant cells as a result of the outer-join
are filled with ``NaN``. For example:

.. ipython:: python
.. jupyter-execute::

ar0 = xr.DataArray([[0, 0], [0, 0]], [("x", ["a", "b"]), ("y", [-1, 0])])
ar1 = xr.DataArray([[1, 1], [1, 1]], [("x", ["b", "c"]), ("y", [0, 1])])
@@ -153,7 +153,7 @@ In contrast to ``merge``, :py:meth:`~xarray.Dataset.update` modifies a dataset
in-place without checking for conflicts, and will overwrite any existing
variables with new values:

.. ipython:: python
.. jupyter-execute::

ds.update({"space": ("space", [10.2, 9.4, 3.9])})
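The same ``update`` call, made runnable with a minimal stand-in dataset (an assumption; the page's ``ds`` carries more variables):

```python
import xarray as xr

ds = xr.Dataset({"foo": ("x", [1.0, 2.0, 3.0])}, coords={"x": ["a", "b", "c"]})

# update mutates ds in place, adding (or silently overwriting) variables;
# "space" arrives with its own new dimension.
ds.update({"space": ("space", [10.2, 9.4, 3.9])})
print("space" in ds.dims)  # True
print(ds.sizes["space"])   # 3
```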

@@ -164,14 +164,14 @@ replace all dataset variables that use it.
``update`` also performs automatic alignment if necessary. Unlike ``merge``, it
maintains the alignment of the original array instead of merging indexes:

.. ipython:: python
.. jupyter-execute::

ds.update(other)

The exact same alignment logic applies when setting a variable with ``__setitem__``
syntax:

.. ipython:: python
.. jupyter-execute::

ds["baz"] = xr.DataArray([9, 9, 9, 9, 9], coords=[("x", list("abcde"))])
ds.baz
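A runnable sketch of that ``__setitem__`` alignment, again with a simplified stand-in ``ds`` (assumes xarray):

```python
import xarray as xr

ds = xr.Dataset({"foo": ("x", [1.0, 2.0, 3.0])}, coords={"x": ["a", "b", "c"]})

# Assignment aligns the right-hand side to ds's existing "x" index:
# the extra labels "d" and "e" are simply dropped.
ds["baz"] = xr.DataArray([9, 9, 9, 9, 9], coords=[("x", list("abcde"))])
print(ds["baz"].values)  # [9 9 9]
```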
@@ -187,22 +187,22 @@ the optional ``compat`` argument on ``concat`` and ``merge``.
:py:attr:`~xarray.Dataset.equals` checks dimension names, indexes and array
values:

.. ipython:: python
.. jupyter-execute::

da.equals(da.copy())

:py:attr:`~xarray.Dataset.identical` also checks attributes, and the name of each
object:

.. ipython:: python
.. jupyter-execute::

da.identical(da.rename("bar"))

:py:attr:`~xarray.Dataset.broadcast_equals` does a more relaxed form of equality
check that allows variables to have different dimensions, as long as values
are constant along those new dimensions:

.. ipython:: python
.. jupyter-execute::

left = xr.Dataset(coords={"x": 0})
right = xr.Dataset({"x": [0, 0, 0]})
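The three equality checks from this hunk, collected into one runnable sketch (the ``broadcast_equals`` call is truncated by the fold above; assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

print(da.equals(da.copy()))            # True: same dims, indexes, values
print(da.identical(da.rename("bar")))  # False: identical also compares names

# broadcast_equals tolerates extra dimensions whose values are constant.
left = xr.Dataset(coords={"x": 0})
right = xr.Dataset({"x": [0, 0, 0]})
print(left.broadcast_equals(right))    # True
```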
@@ -214,7 +214,7 @@ missing values marked by ``NaN`` in the same locations.
In contrast, the ``==`` operation performs element-wise comparison (like
numpy):

.. ipython:: python
.. jupyter-execute::

da == da.copy()
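To make the contrast concrete, a small runnable sketch showing that ``==`` returns an element-wise boolean array rather than a single bool (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

# == compares element-wise, like numpy, yielding a boolean DataArray.
eq = da == da.copy()
print(eq.shape)        # (2, 3)
print(bool(eq.all()))  # True
```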

@@ -232,7 +232,7 @@ methods it allows the merging of xarray objects with locations where *either*
have ``NaN`` values. This can be used to combine data with overlapping
coordinates as long as any non-missing values agree or are disjoint:

.. ipython:: python
.. jupyter-execute::

ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})
ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})
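The ``combine_first`` call itself is hidden by the fold; the complete, runnable example behaves as follows (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})
ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})

# Values from ds1 win wherever they are non-missing; ds2 fills the holes
# and extends the index to the union of the "x" coordinates.
combined = ds1.combine_first(ds2)
print(combined["a"].values)  # [10. 20. 30. 40. 50.]
```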
@@ -264,7 +264,7 @@ each processor wrote out data to a separate file. A domain which was decomposed
into 4 parts, 2 each along both the x and y axes, requires organising the
datasets into a doubly-nested list, e.g:

.. ipython:: python
.. jupyter-execute::

arr = xr.DataArray(
name="temperature", data=np.random.randint(5, size=(2, 2)), dims=["x", "y"]
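The nested-list call is cut off by the fold; a complete, runnable version of the 2x2 decomposition (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

# Four quadrants of a 2x2 domain decomposition, arranged as a nested list:
# the outer list runs along "x", the inner lists along "y".
arr = xr.DataArray(
    name="temperature", data=np.random.randint(5, size=(2, 2)), dims=["x", "y"]
)
ds_grid = [[arr, arr], [arr, arr]]
combined = xr.combine_nested(ds_grid, concat_dim=["x", "y"])
print(combined.shape)  # (4, 4)
```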
@@ -279,7 +279,7 @@ along two times, and contain two different variables, we can pass ``None``
to ``'concat_dim'`` to specify the dimension of the nested list over which
we wish to use ``merge`` instead of ``concat``:

.. ipython:: python
.. jupyter-execute::

temp = xr.DataArray(name="temperature", data=np.random.randn(2), dims=["t"])
precip = xr.DataArray(name="precipitation", data=np.random.randn(2), dims=["t"])
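A runnable sketch of the ``concat_dim=None`` behaviour the paragraph describes, completing the truncated snippet (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

temp = xr.DataArray(name="temperature", data=np.random.randn(2), dims=["t"])
precip = xr.DataArray(name="precipitation", data=np.random.randn(2), dims=["t"])

# None at the inner nesting level means "merge these instead of concatenating".
ds_grid = [[temp, precip]]
combined = xr.combine_nested(ds_grid, concat_dim=["t", None])
print(sorted(combined.data_vars))  # ['precipitation', 'temperature']
print(combined.sizes["t"])         # 2
```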
@@ -294,8 +294,8 @@ Here we combine two datasets using their common dimension coordinates. Notice
they are concatenated in order based on the values in their dimension
coordinates, not on their position in the list passed to ``combine_by_coords``.

.. ipython:: python
:okwarning:
.. jupyter-execute::


x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])
x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])
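The ``combine_by_coords`` call is hidden by the fold below; a complete sketch showing the ordering-by-coordinate behaviour (assumes xarray and numpy):

```python
import numpy as np
import xarray as xr

x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])
x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])

# Input order does not matter: the result is sorted by the "x" coordinate
# values, not by position in the list.
combined = xr.combine_by_coords([x2, x1])
print(combined["x"].values)  # [0 1 2 3 4 5]
```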