lsst.pipe.base  16.0-16-ge6a35c8+4
lsst.pipe.base.pipelineTask.PipelineTask Class Reference
Inheritance diagram for lsst.pipe.base.pipelineTask.PipelineTask:
lsst.pipe.base.task.Task

Public Member Functions

def __init__ (self, config=None, log=None, initInputs=None, kwargs)
 
def getInitOutputDatasets (self)
 
def getInputDatasetTypes (cls, config)
 
def getOutputDatasetTypes (cls, config)
 
def getInitInputDatasetTypes (cls, config)
 
def getInitOutputDatasetTypes (cls, config)
 
def getDatasetTypes (cls, config, configClass)
 
def adaptArgsAndRun (self, inputData, inputDataIds, outputDataIds)
 
def run (self, kwargs)
 
def runQuantum (self, quantum, butler)
 
def saveStruct (self, struct, outputDataRefs, butler)
 
def getResourceConfig (self)
 
def emptyMetadata (self)
 
def getSchemaCatalogs (self)
 
def getAllSchemaCatalogs (self)
 
def getFullMetadata (self)
 
def getFullName (self)
 
def getName (self)
 
def getTaskDict (self)
 
def makeSubtask (self, name, keyArgs)
 
def timer (self, name, logLevel=Log.DEBUG)
 
def makeField (cls, doc)
 
def __reduce__ (self)
 

Public Attributes

 metadata
 
 log
 
 config
 

Static Public Attributes

bool canMultiprocess = True
 

Detailed Description

Base class for all pipeline tasks.

This is an abstract base class for PipelineTasks, which represent
algorithms executed by the framework(s) on data that comes from the data
butler; the resulting data are also stored in a data butler.

PipelineTask inherits from `pipe.base.Task` and uses the same
configuration mechanism based on `pex.config`. A PipelineTask subclass
typically implements a `run()` method that receives Python-domain data
objects and returns a `pipe.base.Struct` object with the resulting data.
The `run()` method is not supposed to perform any I/O; it operates
entirely on in-memory objects. `runQuantum()` is the method (which can be
re-implemented in a subclass) where all necessary I/O is performed: it
reads all input data from the data butler into memory, calls `run()` with
that data, examines the returned `Struct` object, and saves some or all
of that data back to the data butler. `runQuantum()` receives a
`daf.butler.Quantum` instance which defines all input and output datasets
for a single invocation of the PipelineTask.

Subclasses must be constructible with exactly the arguments taken by the
PipelineTask base class constructor, but may support other signatures as
well.

Attributes
----------
canMultiprocess : bool, True by default (class attribute)
    This class attribute is checked by the execution framework; subclasses
    can set it to ``False`` if the task does not support multiprocessing.

Parameters
----------
config : `pex.config.Config`, optional
    Configuration for this task (an instance of ``self.ConfigClass``,
    which is a task-specific subclass of `PipelineTaskConfig`).
    If not specified then it defaults to `self.ConfigClass()`.
log : `lsst.log.Log`, optional
    Logger instance whose name is used as a log name prefix, or ``None``
    for no prefix.
initInputs : `dict`, optional
    A dictionary of objects needed to construct this PipelineTask, with
    keys matching the keys of the dictionary returned by
    `getInitInputDatasetTypes` and values equivalent to what would be
    obtained by calling `Butler.get` with those DatasetTypes and no data
    IDs.  While it is optional for the base class, subclasses are
    permitted to require this argument.
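
Examples
--------
For orientation, here is a minimal sketch of a concrete PipelineTask
subclass. It assumes the companion config module provides
``InputDatasetField``/``OutputDatasetField`` factories; those names, their
arguments, and all dataset/field names below are illustrative assumptions,
not guaranteed API::

    import lsst.pipe.base as pipeBase

    class ExampleConfig(pipeBase.PipelineTaskConfig):
        # Hypothetical dataset fields; the field factories and their
        # arguments are assumptions about the companion config module.
        input = pipeBase.InputDatasetField(
            name="calexp", units=["Visit", "Sensor"],
            storageClass="ExposureF", scalar=True,
            doc="Input exposure")
        output = pipeBase.OutputDatasetField(
            name="processedExposure", units=["Visit", "Sensor"],
            storageClass="ExposureF", scalar=True,
            doc="Output exposure")

    class ExampleTask(pipeBase.PipelineTask):
        ConfigClass = ExampleConfig
        _DefaultName = "example"
        # canMultiprocess = False  # set if the task cannot run in parallel

        def run(self, input):
            # Pure in-memory computation; no butler I/O here.
            processed = input  # stand-in for the real algorithm
            return pipeBase.Struct(output=processed)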

Definition at line 102 of file pipelineTask.py.

Constructor & Destructor Documentation

◆ __init__()

def lsst.pipe.base.pipelineTask.PipelineTask.__init__ (   self,
  config = None,
  log = None,
  initInputs = None,
  kwargs 
)

Definition at line 152 of file pipelineTask.py.

Member Function Documentation

◆ __reduce__()

def lsst.pipe.base.task.Task.__reduce__ (   self)
inherited
Pickling support.

Definition at line 373 of file task.py.

◆ adaptArgsAndRun()

def lsst.pipe.base.pipelineTask.PipelineTask.adaptArgsAndRun (   self,
  inputData,
  inputDataIds,
  outputDataIds 
)
Run the task algorithm on in-memory data.

This method is called by `runQuantum` to operate on input in-memory
data and produce corresponding output in-memory data. It receives
arguments which are dictionaries with input data and input/output
DataIds. Many simple tasks do not need to know DataIds, so the default
implementation of this method calls the `run` method, passing the input
data objects as keyword arguments. Most simple tasks will implement the
`run` method; more complex tasks that need to know about output DataIds
will override this method instead.

All three arguments to this method are dictionaries with keys equal
to the names of the configuration fields for dataset types. If a dataset
type is configured with the ``scalar`` field set to ``True``, then it is
expected that only one dataset appears on input or output for that
dataset type, and the dictionary value will be a single data object or
DataId. Otherwise, if ``scalar`` is ``False`` (the default), the value
will be a list (even if only one item is in the list).

The method returns a `Struct` instance with attributes matching the
configuration fields for output dataset types. Values stored in the
returned struct are single objects if ``scalar`` is ``True``, or
lists of objects otherwise. If the task produces more than one object
for some dataset type, then the data objects returned in ``struct`` must
match in count and order the corresponding DataIds in ``outputDataIds``.

Parameters
----------
inputData : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing input dataset types and values are Python-domain data
    objects (or lists of objects) retrieved from data butler.
inputDataIds : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing input dataset types and values are DataIds (or lists
    of DataIds) that task consumes for corresponding dataset type.
    DataIds are guaranteed to match data objects in ``inputData``.
outputDataIds : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing output dataset types and values are DataIds (or lists
    of DataIds) that task is to produce for corresponding dataset
    type.

Returns
-------
struct : `Struct`
    The standard convention is that this method should return a `Struct`
    instance containing all output data. Struct attribute names
    should correspond to the names of the configuration fields
    describing the task's output dataset types. If something different
    is returned, then the `saveStruct` method must be re-implemented
    accordingly.
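
Examples
--------
A hedged sketch of an override for a task that needs its output DataId;
the config field names "input" and "output" and the custom ``run``
signature are assumptions, not part of the base-class contract::

    def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds):
        # Hypothetical scalar config fields "input" and "output"; pass
        # the output DataId through so run() can record it.
        return self.run(input=inputData["input"],
                        outputDataId=outputDataIds["output"])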

Definition at line 315 of file pipelineTask.py.

◆ emptyMetadata()

def lsst.pipe.base.task.Task.emptyMetadata (   self)
inherited
Empty (clear) the metadata for this Task and all sub-Tasks.

Definition at line 153 of file task.py.

◆ getAllSchemaCatalogs()

def lsst.pipe.base.task.Task.getAllSchemaCatalogs (   self)
inherited
Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns
-------
schemacatalogs : `dict`
    Keys are butler dataset types, values are an empty catalog (an instance of the appropriate
    lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down
    through all subtasks.

Notes
-----
This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, then override
`Task.getSchemaCatalogs`, not this method.

Definition at line 188 of file task.py.

◆ getDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getDatasetTypes (   cls,
  config,
  configClass 
)
Return dataset type descriptors defined in task configuration.

This method can be used by other methods that need to extract dataset
types from task configuration (e.g. `getInputDatasetTypes` or
sub-class methods).

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.
configClass : `type`
    Class of the configuration object that defines a dataset type.

Returns
-------
Dictionary whose keys are the (arbitrary) names of the datasets and
whose values are `DatasetTypeDescriptor` instances. The default
implementation uses configuration field names as dictionary keys.
Returns an empty dict if the configuration has no fields of the
specified ``configClass``.
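
Examples
--------
For illustration, the per-category getters can plausibly be expressed in
terms of this helper; per the descriptions above, `InputDatasetConfig` is
the field type that marks input datasets::

    @classmethod
    def getInputDatasetTypes(cls, config):
        # Matches the documented default behavior: collect all fields
        # of type InputDatasetConfig, keyed by field name.
        return cls.getDatasetTypes(config, InputDatasetConfig)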

Definition at line 286 of file pipelineTask.py.

◆ getFullMetadata()

def lsst.pipe.base.task.Task.getFullMetadata (   self)
inherited
Get metadata for all tasks.

Returns
-------
metadata : `lsst.daf.base.PropertySet`
    The `~lsst.daf.base.PropertySet` keys are the full task name. Values are metadata
    for the top-level task and all subtasks, sub-subtasks, and so on.

Notes
-----
The returned metadata includes timing information (if ``@timer.timeMethod`` is used)
and any metadata set by the task. The name of each item consists of the full task name
with ``.`` replaced by ``:``, followed by ``.`` and the name of the item, e.g.::

    topLevelTaskName:subtaskName:subsubtaskName.itemName

Using ``:`` in the full task name disambiguates the rare situation where a task has both a
subtask and a metadata item with the same name.

Definition at line 210 of file task.py.

◆ getFullName()

def lsst.pipe.base.task.Task.getFullName (   self)
inherited
Get the task name as a hierarchical name including parent task names.

Returns
-------
fullName : `str`
    The full name consists of the name of the parent task and each subtask separated by periods.
    For example:

    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".

Definition at line 235 of file task.py.

◆ getInitInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitInputDatasetTypes (   cls,
  config 
)
Return dataset type descriptors that can be used to retrieve the
``initInputs`` constructor argument.

Datasets used in initialization may not be associated with any
DataUnits (i.e. their data IDs must be empty dictionaries).

The default implementation finds all fields of type
`InitInputDatasetConfig` in the configuration (non-recursively) and
uses them to construct `DatasetTypeDescriptor` instances. The
names of these fields are used as keys in the returned dictionary.
Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary whose keys are the (arbitrary) names of the input datasets
and whose values are `DatasetTypeDescriptor` instances. The default
implementation uses configuration field names as dictionary keys.

Tasks that require no initialization inputs should return an empty
dict.
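
Examples
--------
A subclass that declares an init-input might consume it in its
constructor along these lines; the "inputSchema" field name is a
hypothetical example::

    import lsst.pipe.base as pipeBase

    class ExampleTask(pipeBase.PipelineTask):
        def __init__(self, config=None, log=None, initInputs=None,
                     **kwargs):
            super().__init__(config=config, log=log,
                             initInputs=initInputs, **kwargs)
            # "inputSchema" is a hypothetical InitInputDatasetConfig
            # field name; the framework obtains the value via Butler.get.
            self.inputSchema = initInputs["inputSchema"]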

Definition at line 224 of file pipelineTask.py.

◆ getInitOutputDatasets()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitOutputDatasets (   self)
Return persistable outputs that are available immediately after
the task has been constructed.

Subclasses that operate on catalogs should override this method to
return the schema(s) of the catalog(s) they produce.

It is not necessary to return the PipelineTask's configuration or
other provenance information in order for it to be persisted; that is
the responsibility of the execution system.

Returns
-------
datasets : `dict`
    Dictionary with keys that match those of the dict returned by
    `getInitOutputDatasetTypes` and values that can be written by calling
    `Butler.put` with those DatasetTypes and no data IDs. An empty
    `dict` should be returned by tasks that produce no initialization
    outputs.
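
Examples
--------
A catalog-producing task might return its output schema; the
"outputSchema" key and the ``self.schema`` attribute are illustrative
assumptions::

    def getInitOutputDatasets(self):
        # Assumes self.schema was built in __init__ and that the config
        # defines a matching "outputSchema" init-output field.
        return {"outputSchema": self.schema}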

Definition at line 155 of file pipelineTask.py.

◆ getInitOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInitOutputDatasetTypes (   cls,
  config 
)
Return dataset type descriptors that can be used to write the
objects returned by `getInitOutputDatasets`.

Datasets used in initialization may not be associated with any
DataUnits (i.e. their data IDs must be empty dictionaries).

The default implementation finds all fields of type
`InitOutputDatasetConfig` in the configuration (non-recursively) and uses
them to construct `DatasetTypeDescriptor` instances. The names of
these fields are used as keys in the returned dictionary. Subclasses can
override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary whose keys are the (arbitrary) names of the output datasets
and whose values are `DatasetTypeDescriptor` instances. The default
implementation uses configuration field names as dictionary keys.

Tasks that produce no initialization outputs should return an empty
dict.

Definition at line 255 of file pipelineTask.py.

◆ getInputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getInputDatasetTypes (   cls,
  config 
)
Return input dataset type descriptors for this task.

The default implementation finds all fields of type `InputDatasetConfig`
in the configuration (non-recursively) and uses them to construct
`DatasetTypeDescriptor` instances. The names of these fields are used
as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary whose keys are the (arbitrary) names of the input datasets
and whose values are `DatasetTypeDescriptor` instances. The default
implementation uses configuration field names as dictionary keys.
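
Examples
--------
A small usage sketch, assuming the hypothetical ``ExampleTask`` subclass
from the class-level example above::

    config = ExampleTask.ConfigClass()
    for name, descriptor in ExampleTask.getInputDatasetTypes(config).items():
        # Keys are config field names; values are DatasetTypeDescriptor
        # instances describing the corresponding dataset types.
        print(name, descriptor)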

Definition at line 178 of file pipelineTask.py.

◆ getName()

def lsst.pipe.base.task.Task.getName (   self)
inherited
Get the name of the task.

Returns
-------
taskName : `str`
    Name of the task.

See also
--------
getFullName

Definition at line 250 of file task.py.

◆ getOutputDatasetTypes()

def lsst.pipe.base.pipelineTask.PipelineTask.getOutputDatasetTypes (   cls,
  config 
)
Return output dataset type descriptors for this task.

The default implementation finds all fields of type `OutputDatasetConfig`
in the configuration (non-recursively) and uses them to construct
`DatasetTypeDescriptor` instances. The names of these fields are used
as keys in the returned dictionary. Subclasses can override this behavior.

Parameters
----------
config : `Config`
    Configuration for this task. Typically datasets are defined in
    a task configuration.

Returns
-------
Dictionary whose keys are the (arbitrary) names of the output datasets
and whose values are `DatasetTypeDescriptor` instances. The default
implementation uses configuration field names as dictionary keys.

Definition at line 201 of file pipelineTask.py.

◆ getResourceConfig()

def lsst.pipe.base.pipelineTask.PipelineTask.getResourceConfig (   self)
Return resource configuration for this task.

Returns
-------
Object of type `~config.ResourceConfig` or ``None`` if resource
configuration is not defined for this task.

Definition at line 541 of file pipelineTask.py.

◆ getSchemaCatalogs()

def lsst.pipe.base.task.Task.getSchemaCatalogs (   self)
inherited
Get the schemas generated by this task.

Returns
-------
schemaCatalogs : `dict`
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate
    `lsst.afw.table` Catalog type) for this task.

Notes
-----

.. warning::

   Subclasses that use schemas must override this method. The default implementation returns
   an empty dict.

This method may be called at any time after the Task is constructed, which means that all task
schemas should be computed at construction time, *not* when data is actually processed. This
reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.

See also
--------
Task.getAllSchemaCatalogs
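
Examples
--------
A hedged sketch of an override for a task that produces a SourceCatalog;
the "src" dataset type and the ``self.schema`` attribute are assumptions::

    import lsst.afw.table

    def getSchemaCatalogs(self):
        # Build the empty catalog from the schema computed in __init__,
        # so this works before any data is processed.
        return {"src": lsst.afw.table.SourceCatalog(self.schema)}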

Definition at line 159 of file task.py.

◆ getTaskDict()

def lsst.pipe.base.task.Task.getTaskDict (   self)
inherited
Get a dictionary of all tasks as a shallow copy.

Returns
-------
taskDict : `dict`
    Dictionary containing full task name: task object for the top-level task and all subtasks,
    sub-subtasks, and so on.

Definition at line 264 of file task.py.

◆ makeField()

def lsst.pipe.base.task.Task.makeField (   cls,
  doc 
)
inherited
Make a `lsst.pex.config.ConfigurableField` for this task.

Parameters
----------
doc : `str`
    Help text for the field.

Returns
-------
configurableField : `lsst.pex.config.ConfigurableField`
    A `~ConfigurableField` for this task.

Examples
--------
Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use::

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("a brief description of what this task does")

Definition at line 329 of file task.py.

◆ makeSubtask()

def lsst.pipe.base.task.Task.makeSubtask (   self,
  name,
  keyArgs 
)
inherited
Create a subtask as a new instance and assign it as the ``name`` attribute of this task.

Parameters
----------
name : `str`
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically
    provided and cannot be overridden:

    - "config".
    - "parentTask".

Notes
-----
The subtask must be defined by ``Task.config.name``, an instance of a pex_config
`ConfigurableField` or `RegistryField`.
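
Examples
--------
A usage sketch, assuming a config class that defines a `ConfigurableField`
named "aSubtask" as in the `makeField` example above::

    import lsst.pipe.base as pipeBase

    class ExampleTask(pipeBase.Task):
        ConfigClass = OtherTaskConfig  # defines the "aSubtask" field
        _DefaultName = "example"

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # Instantiates self.aSubtask from self.config.aSubtask, with
            # "config" and "parentTask" supplied automatically.
            self.makeSubtask("aSubtask")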

Definition at line 275 of file task.py.

◆ run()

def lsst.pipe.base.pipelineTask.PipelineTask.run (   self,
  kwargs 
)
Run the task algorithm on in-memory data.

This method should be implemented in a subclass unless the task overrides
`adaptArgsAndRun` to do something different from its default
implementation. With the default implementation of `adaptArgsAndRun`, this
method receives keyword arguments whose names are the same as the
names of the configuration fields describing input dataset types. Argument
values are data objects retrieved from the data butler. If a dataset
type is configured with the ``scalar`` field set to ``True``, then the
argument value will be a single object; otherwise it will be a list of objects.

If the task needs to know its input or output DataIds, then it must
override the `adaptArgsAndRun` method instead.

Returns
-------
struct : `Struct`
    See description of `adaptArgsAndRun` method.

Examples
--------
A typical implementation of this method may look like this::

    def run(self, input, calib):
        # "input", "calib", and "output" are the names of the config fields

        # Assuming the input/calib datasets are `scalar`, they are simple
        # objects; do something with the inputs and calibs, produce an
        # output image.
        image = self.makeImage(input, calib)

        # If the output dataset is `scalar` then return an object, not a list
        return Struct(output=image)

Definition at line 371 of file pipelineTask.py.

◆ runQuantum()

def lsst.pipe.base.pipelineTask.PipelineTask.runQuantum (   self,
  quantum,
  butler 
)
Execute the PipelineTask algorithm on a single quantum of data.

A typical implementation of this method will use the inputs from the
quantum to retrieve Python-domain objects from the data butler and call
the `adaptArgsAndRun` method on that data. On return from
`adaptArgsAndRun`, this method will extract the data from the returned
`Struct` instance and save that data to the butler.

The `Struct` returned from `adaptArgsAndRun` is expected to contain
data attributes with names equal to the names of the
configuration fields defining output dataset types. The values of
the data attributes must be data objects corresponding to
the DataIds of the output dataset types. All data objects will be
saved in the butler using DataRefs from the Quantum's output dictionary.

This method does not return anything to the caller; on errors, a
corresponding exception is raised.

Parameters
----------
quantum : `Quantum`
    Object describing input and output corresponding to this
    invocation of PipelineTask instance.
butler : object
    Data butler instance.

Raises
------
ScalarError
    If a dataset type is configured as scalar but receives multiple
    DataIds in ``quantum``. Any exceptions raised by the data butler
    or by the `adaptArgsAndRun` method are propagated.

Definition at line 408 of file pipelineTask.py.

◆ saveStruct()

def lsst.pipe.base.pipelineTask.PipelineTask.saveStruct (   self,
  struct,
  outputDataRefs,
  butler 
)
Save data in the butler.

The convention is that the struct returned from the ``run()`` method has
data field(s) with the same names as the config fields defining
output DatasetTypes. Subclasses may override this method to implement
a different convention for the `Struct` content, or if any
post-processing of the data is needed.

Parameters
----------
struct : `Struct`
    Data produced by the task, packed into a `Struct` instance.
outputDataRefs : `dict`
    Dictionary whose keys are the names of the configuration fields
    describing output dataset types and values are lists of DataRefs.
    DataRefs must match corresponding data objects in ``struct`` in
    number and order.
butler : object
    Data butler instance.
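
Examples
--------
A hedged sketch of an override that repacks the struct before delegating
to the default convention; the attribute names are hypothetical::

    import lsst.pipe.base as pipeBase

    class ExampleTask(pipeBase.PipelineTask):
        def saveStruct(self, struct, outputDataRefs, butler):
            # run() produced an attribute named "image"; rename it to
            # match the output config field name "output", then save
            # via the default implementation.
            struct = pipeBase.Struct(output=struct.image)
            super().saveStruct(struct, outputDataRefs, butler)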

Definition at line 507 of file pipelineTask.py.

◆ timer()

def lsst.pipe.base.task.Task.timer (   self,
  name,
  logLevel = Log.DEBUG 
)
inherited
Context manager to log performance data for an arbitrary block of code.

Parameters
----------
name : `str`
    Name of code being timed; data will be logged using item name: ``Start`` and ``End``.
logLevel
    A `lsst.log` level constant.

Examples
--------
Creating a timer context::

    with self.timer("someCodeToTime"):
        pass  # code to time

See also
--------
timer.logInfo

Definition at line 301 of file task.py.

Member Data Documentation

◆ canMultiprocess

bool lsst.pipe.base.pipelineTask.PipelineTask.canMultiprocess = True
static

Definition at line 150 of file pipelineTask.py.

◆ config

lsst.pipe.base.task.Task.config
inherited

Definition at line 149 of file task.py.

◆ log

lsst.pipe.base.task.Task.log
inherited

Definition at line 148 of file task.py.

◆ metadata

lsst.pipe.base.task.Task.metadata
inherited

Definition at line 121 of file task.py.


The documentation for this class was generated from the following file:

pipelineTask.py