lsst.pipe.base  19.0.0-24-g878c510+3
lsst.pipe.base.graphBuilder._PipelineScaffolding Class Reference

Public Member Functions

def __init__ (self, pipeline, *, registry)
 
def __repr__ (self)
 
def connectDataIds (self, registry, collections, userQuery)
 
def resolveDatasetRefs (self, registry, collections, run, *, skipExisting=True)
 
def makeQuantumGraph (self)
 

Public Attributes

 tasks
 
 dimensions
 

Detailed Description

A helper data structure that organizes the information involved in
constructing a `QuantumGraph` for a `Pipeline`.

Parameters
----------
pipeline : `Pipeline`
    Sequence of tasks from which a graph is to be constructed.  Must
    have nested task classes already imported.
universe : `DimensionUniverse`
    Universe of all possible dimensions.

Notes
-----
The scaffolding data structure contains nested data structures for both
tasks (`_TaskScaffolding`) and datasets (`_DatasetDict`).  The dataset
data structures are shared between the pipeline-level structure (which
aggregates all datasets and categorizes them from the perspective of the
complete pipeline) and the individual tasks that use them as inputs and
outputs.
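
Because the same dictionary objects are held by both the pipeline-level
structure and the per-task structures, an update made while processing the
pipeline as a whole is immediately visible from each task's point of view.
The following simplified sketch uses a plain `dict` as a hypothetical
stand-in for `_DatasetDict` (not the real class) to illustrate the sharing::

    # Hypothetical stand-ins only; not the real _DatasetDict/_TaskScaffolding.
    pipelineDatasets = {
        "inputs": {},
        "intermediates": {},
        "outputs": {},
    }
    # A task's view holds references to the *same* dictionaries, so nothing
    # is copied: resolving a ref at the pipeline level updates the task view.
    taskView = {
        "inputs": pipelineDatasets["intermediates"],
        "outputs": pipelineDatasets["outputs"],
    }
    pipelineDatasets["intermediates"]["calexp"] = ["<resolved ref>"]
    assert taskView["inputs"]["calexp"] == ["<resolved ref>"]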

`QuantumGraph` construction proceeds in four steps, with each corresponding
to a different `_PipelineScaffolding` method:

1. When `_PipelineScaffolding` is constructed, we extract and categorize
   the DatasetTypes used by the pipeline (delegating to
   `PipelineDatasetTypes.fromPipeline`), then use these to construct the
   nested `_TaskScaffolding` and `_DatasetDict` objects.

2. In `connectDataIds`, we construct and run the "Big Join Query", which
   returns related tuples of all dimensions used to identify any regular
   input, output, and intermediate datasets (not prerequisites).  We then
   iterate over these tuples of related dimensions, identifying the subsets
   that correspond to distinct data IDs for each task and dataset type,
   and then create `_QuantumScaffolding` objects.

3. In `resolveDatasetRefs`, we run follow-up queries against all of the
   dataset data IDs previously identified, transforming unresolved
   DatasetRefs into resolved DatasetRefs where appropriate.  We then look
   up prerequisite datasets for all quanta.

4. In `makeQuantumGraph`, we construct a `QuantumGraph` from the lists of
   per-task `_QuantumScaffolding` objects.
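
A caller (in practice the graph builder) drives these four phases in order.
The following is a minimal sketch of that sequence, assuming `pipeline`,
`registry`, `collections`, `run`, and `userQuery` are supplied by the caller;
the actual orchestration inside `GraphBuilder.makeGraph` may differ in
detail::

    from lsst.pipe.base.graphBuilder import _PipelineScaffolding

    def buildQuantumGraph(pipeline, registry, collections, run, userQuery,
                          skipExisting=True):
        # 1. Categorize dataset types and build the nested scaffolding.
        scaffolding = _PipelineScaffolding(pipeline, registry=registry)
        # 2. Run the "Big Join Query" and record the data IDs for all tasks
        #    and non-prerequisite datasets.
        scaffolding.connectDataIds(registry, collections, userQuery)
        # 3. Resolve dataset references and look up prerequisites.
        scaffolding.resolveDatasetRefs(registry, collections, run,
                                       skipExisting=skipExisting)
        # 4. Assemble the per-task quanta into a QuantumGraph.
        return scaffolding.makeQuantumGraph()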

Definition at line 363 of file graphBuilder.py.

Constructor & Destructor Documentation

◆ __init__()

def lsst.pipe.base.graphBuilder._PipelineScaffolding.__init__ (   self,
  pipeline,
  registry 
)

Definition at line 407 of file graphBuilder.py.

Member Function Documentation

◆ __repr__()

def lsst.pipe.base.graphBuilder._PipelineScaffolding.__repr__ (   self)

Definition at line 432 of file graphBuilder.py.

◆ connectDataIds()

def lsst.pipe.base.graphBuilder._PipelineScaffolding.connectDataIds (   self,
  registry,
  collections,
  userQuery 
)
Query for the data IDs that connect nodes in the `QuantumGraph`.

This method populates `_TaskScaffolding.dataIds` and
`_DatasetScaffolding.dataIds` (except for those in `prerequisites`).

Parameters
----------
registry : `lsst.daf.butler.Registry`
    Registry for the data repository; used for all data ID queries.
collections : `lsst.daf.butler.CollectionSearch`
    Object representing the collections to search for input datasets.
userQuery : `str`, optional
    User-provided expression to limit the data IDs processed.
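
For example, a caller might restrict graph generation to a single visit with
a user expression such as the one below.  The collection names, instrument,
and visit number are illustrative only, and `CollectionSearch.fromExpression`
is assumed to be available in the matching `daf_butler` version::

    from lsst.daf.butler import CollectionSearch

    collections = CollectionSearch.fromExpression(["HSC/raw/all", "HSC/calib"])
    userQuery = "instrument = 'HSC' AND visit = 903334"
    scaffolding.connectDataIds(registry, collections, userQuery)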

Definition at line 485 of file graphBuilder.py.

◆ makeQuantumGraph()

def lsst.pipe.base.graphBuilder._PipelineScaffolding.makeQuantumGraph (   self)
Create a `QuantumGraph` from the quanta already present in
the scaffolding data structure.

Returns
-------
graph : `QuantumGraph`
    The full `QuantumGraph`.

Definition at line 700 of file graphBuilder.py.

◆ resolveDatasetRefs()

def lsst.pipe.base.graphBuilder._PipelineScaffolding.resolveDatasetRefs (   self,
  registry,
  collections,
  run,
  skipExisting = True 
)
Perform follow-up queries for each dataset data ID produced in
`connectDataIds`.

This method populates `_DatasetScaffolding.refs` (except for those in
`prerequisites`).

Parameters
----------
registry : `lsst.daf.butler.Registry`
    Registry for the data repository; used for all data ID queries.
collections : `lsst.daf.butler.CollectionSearch`
    Object representing the collections to search for input datasets.
run : `str`, optional
    Name of the `~lsst.daf.butler.CollectionType.RUN` collection for
    output datasets, if it already exists.
skipExisting : `bool`, optional
    If `True` (default), a Quantum is not created if all its outputs
    already exist in ``run``.  Ignored if ``run`` is `None`.

Raises
------
OutputExistsError
    Raised if an output dataset already exists in the output run
    and ``skipExisting`` is `False`.  The case where some but not all
    of a quantum's outputs are present and ``skipExisting`` is `True`
    cannot be identified at this stage, and is handled by `fillQuanta`
    instead.
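
As an illustration of the ``skipExisting`` behaviour, a caller that wants
graph generation to fail when outputs are already present might call the
method as sketched below.  The run collection name is made up, and
`OutputExistsError` is assumed to be importable from this module, as the
Raises section suggests::

    from lsst.pipe.base.graphBuilder import OutputExistsError

    try:
        scaffolding.resolveDatasetRefs(registry, collections, "u/someone/run1",
                                       skipExisting=False)
    except OutputExistsError as err:
        # An output dataset already exists in the run and skipExisting=False.
        print(f"refusing to clobber existing outputs: {err}")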

Definition at line 558 of file graphBuilder.py.

Member Data Documentation

◆ dimensions

lsst.pipe.base.graphBuilder._PipelineScaffolding.dimensions

Definition at line 420 of file graphBuilder.py.

◆ tasks

lsst.pipe.base.graphBuilder._PipelineScaffolding.tasks

Definition at line 409 of file graphBuilder.py.


The documentation for this class was generated from the following file:

graphBuilder.py