lsst.obs.base  20.0.0-59-gb502cbb+0e9af1ef10
lsst.obs.base.ingest.RawIngestTask Class Reference
Inheritance diagram for lsst.obs.base.ingest.RawIngestTask:

Public Member Functions

def getDatasetType (self)
 
def __init__ (self, Optional[RawIngestConfig] config=None, *Butler butler, **Any kwargs)
 
RawFileData extractMetadata (self, str filename)
 
List[RawExposureData] groupByExposure (self, Iterable[RawFileData] files)
 
RawExposureData expandDataIds (self, RawExposureData data)
 
Iterator[RawExposureData] prep (self, files, *Optional[Pool] pool=None, int processes=1)
 
List[DatasetRef] ingestExposureDatasets (self, RawExposureData exposure, *Optional[str] run=None)
 
def run (self, files, *Optional[Pool] pool=None, int processes=1, Optional[str] run=None)
 

Public Attributes

 butler
 
 universe
 
 datasetType
 

Static Public Attributes

 ConfigClass = RawIngestConfig
 

Detailed Description

Driver Task for ingesting raw data into Gen3 Butler repositories.

Parameters
----------
config : `RawIngestConfig`
    Configuration for the task.
butler : `~lsst.daf.butler.Butler`
    Writeable butler instance, with ``butler.run`` set to the appropriate
    `~lsst.daf.butler.CollectionType.RUN` collection for these raw
    datasets.
**kwargs
    Additional keyword arguments are forwarded to the `lsst.pipe.base.Task`
    constructor.

Notes
-----
Each instance of `RawIngestTask` writes to the same Butler.  Each
invocation of `RawIngestTask.run` ingests a list of files.
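
Examples
--------
A minimal construction-and-ingest sketch; ``Butler`` here is
`lsst.daf.butler.Butler`, and the repository path, RUN collection name,
and file names below are placeholders:

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestConfig, RawIngestTask

    # Hypothetical repository path and RUN collection name; passing
    # ``run`` yields a writeable butler with ``butler.run`` set.
    butler = Butler("/path/to/repo", run="raw/hypothetical")

    task = RawIngestTask(config=RawIngestConfig(), butler=butler)

    # Ingest a (placeholder) list of raw files.
    refs = task.run(["raw_0001.fits", "raw_0002.fits"])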

Definition at line 160 of file ingest.py.

Constructor & Destructor Documentation

◆ __init__()

def lsst.obs.base.ingest.RawIngestTask.__init__ (   self,
Optional[RawIngestConfig]   config = None,
*Butler  butler,
**Any  kwargs 
)

Definition at line 191 of file ingest.py.

Member Function Documentation

◆ expandDataIds()

RawExposureData lsst.obs.base.ingest.RawIngestTask.expandDataIds (   self,
RawExposureData  data 
)
Expand the data IDs associated with a raw exposure to include
additional metadata records.

Parameters
----------
data : `RawExposureData`
    A structure containing information about the exposure to be
    ingested.  Must have `RawExposureData.records` populated. Should
    be considered consumed upon return.

Returns
-------
exposure : `RawExposureData`
    An updated version of the input structure, with
    `RawExposureData.dataId` and nested `RawFileData.dataId` attributes
    updated to data IDs for which `DataCoordinate.hasRecords` returns
    `True`.
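
Examples
--------
A sketch of where this step sits in the ingest flow; the repository
path, RUN collection name, and file name are placeholders:

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestTask

    task = RawIngestTask(butler=Butler("/path/to/repo", run="raw/hypothetical"))

    file_data = [task.extractMetadata("raw_0001.fits")]
    exposures = task.groupByExposure(file_data)
    expanded = [task.expandDataIds(exp) for exp in exposures]
    # Each structure now carries expanded, record-bearing data IDs.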

Definition at line 314 of file ingest.py.

◆ extractMetadata()

RawFileData lsst.obs.base.ingest.RawIngestTask.extractMetadata (   self,
str  filename 
)
Extract and process metadata from a single raw file.

Parameters
----------
filename : `str`
    Path to the file.

Returns
-------
data : `RawFileData`
    A structure containing the metadata extracted from the file,
    as well as the original filename.  All fields will be populated,
    but the `RawFileData.dataId` attribute will be a minimal
    (unexpanded) `DataCoordinate` instance.

Notes
-----
Assumes that there is a single dataset associated with the given
file.  Instruments using a single file to store multiple datasets
must implement their own version of this method.
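
Examples
--------
A minimal sketch; the repository path, RUN collection name, and file
name are placeholders:

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestTask

    task = RawIngestTask(butler=Butler("/path/to/repo", run="raw/hypothetical"))
    data = task.extractMetadata("raw_0001.fits")
    print(data.dataId)  # minimal (unexpanded) DataCoordinate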

Definition at line 206 of file ingest.py.

◆ getDatasetType()

def lsst.obs.base.ingest.RawIngestTask.getDatasetType (   self)
Return the DatasetType of the datasets ingested by this Task.

Definition at line 185 of file ingest.py.

◆ groupByExposure()

List[RawExposureData] lsst.obs.base.ingest.RawIngestTask.groupByExposure (   self,
Iterable[RawFileData]  files 
)
Group an iterable of `RawFileData` by exposure.

Parameters
----------
files : iterable of `RawFileData`
    File-level information to group.

Returns
-------
exposures : `list` of `RawExposureData`
    A list of structures that group the file-level information by
    exposure. All fields will be populated.  The
    `RawExposureData.dataId` attributes will be minimal (unexpanded)
    `DataCoordinate` instances.
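
Examples
--------
A sketch that groups per-file metadata by exposure; the repository
path, RUN collection name, and file names are placeholders:

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestTask

    task = RawIngestTask(butler=Butler("/path/to/repo", run="raw/hypothetical"))
    file_data = [task.extractMetadata(f)
                 for f in ("raw_0001.fits", "raw_0002.fits")]
    exposures = task.groupByExposure(file_data)
    for exp in exposures:
        print(exp.dataId)  # minimal (unexpanded) data IDs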

Definition at line 289 of file ingest.py.

◆ ingestExposureDatasets()

List[DatasetRef] lsst.obs.base.ingest.RawIngestTask.ingestExposureDatasets (   self,
RawExposureData  exposure,
*Optional[str]   run = None 
)
Ingest all raw files in one exposure.

Parameters
----------
exposure : `RawExposureData`
    A structure containing information about the exposure to be
    ingested.  Must have `RawExposureData.records` populated and all
    data ID attributes expanded.
run : `str`, optional
    Name of a RUN-type collection to write to, overriding
    ``self.butler.run``.

Returns
-------
refs : `list` of `lsst.daf.butler.DatasetRef`
    Dataset references for ingested raws.
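
Examples
--------
A sketch chaining the preceding steps for a single exposure; the
repository path, RUN collection name, and file name are placeholders:

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestTask

    task = RawIngestTask(butler=Butler("/path/to/repo", run="raw/hypothetical"))
    file_data = [task.extractMetadata("raw_0001.fits")]
    exposure = task.expandDataIds(task.groupByExposure(file_data)[0])
    refs = task.ingestExposureDatasets(exposure)  # list of DatasetRef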

Definition at line 426 of file ingest.py.

◆ prep()

Iterator[RawExposureData] lsst.obs.base.ingest.RawIngestTask.prep (   self,
  files,
*Optional[Pool]   pool = None,
int   processes = 1 
)
Perform all ingest preprocessing steps that do not involve actually
modifying the database.

Parameters
----------
files : iterable over `str` or path-like objects
    Paths to the files to be ingested.  Will be made absolute
    if they are not already.
pool : `multiprocessing.Pool`, optional
    If not `None`, a process pool with which to parallelize some
    operations.
processes : `int`, optional
    The number of processes to use.  Ignored if ``pool`` is not `None`.

Yields
------
exposure : `RawExposureData`
    Data structures containing dimension records, filenames, and data
    IDs to be ingested (one structure for each exposure).
bad_files : `list` of `str`
    List of all files from which metadata could not be extracted.

Definition at line 357 of file ingest.py.

◆ run()

def lsst.obs.base.ingest.RawIngestTask.run (   self,
  files,
*Optional[Pool]   pool = None,
int   processes = 1,
Optional[str]   run = None 
)
Ingest files into a Butler data repository.

This creates any new exposure or visit Dimension entries needed to
identify the ingested files, creates new Dataset entries in the
Registry and finally ingests the files themselves into the Datastore.
Any needed instrument, detector, and physical_filter Dimension entries
must exist in the Registry before `run` is called.

Parameters
----------
files : iterable over `str` or path-like objects
    Paths to the files to be ingested.  Will be made absolute
    if they are not already.
pool : `multiprocessing.Pool`, optional
    If not `None`, a process pool with which to parallelize some
    operations.
processes : `int`, optional
    The number of processes to use.  Ignored if ``pool`` is not `None`.
run : `str`, optional
    Name of a RUN-type collection to write to, overriding
    the default derived from the instrument name.

Returns
-------
refs : `list` of `lsst.daf.butler.DatasetRef`
    Dataset references for ingested raws.

Notes
-----
This method inserts all datasets for an exposure within a transaction,
guaranteeing that partial exposures are never ingested.  The exposure
dimension record is inserted with `Registry.syncDimensionData` first
(in its own transaction), which inserts only if a record with the same
primary key does not already exist.  This allows different files within
the same exposure to be ingested in different runs.
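
Examples
--------
A usage sketch; the repository path, RUN collection names, and file
names are placeholders:

    from multiprocessing import Pool

    from lsst.daf.butler import Butler
    from lsst.obs.base.ingest import RawIngestTask

    task = RawIngestTask(butler=Butler("/path/to/repo", run="raw/hypothetical"))

    # Serial ingest into the butler's default RUN collection.
    refs = task.run(["raw_0001.fits", "raw_0002.fits"])

    # Parallel ingest into an explicitly named RUN collection.
    with Pool(processes=4) as pool:
        refs = task.run(["raw_0003.fits", "raw_0004.fits"],
                        pool=pool, run="raw/other")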

Definition at line 452 of file ingest.py.

Member Data Documentation

◆ butler

lsst.obs.base.ingest.RawIngestTask.butler

Definition at line 194 of file ingest.py.

◆ ConfigClass

lsst.obs.base.ingest.RawIngestTask.ConfigClass = RawIngestConfig
static

Definition at line 181 of file ingest.py.

◆ datasetType

lsst.obs.base.ingest.RawIngestTask.datasetType

Definition at line 196 of file ingest.py.

◆ universe

lsst.obs.base.ingest.RawIngestTask.universe

Definition at line 195 of file ingest.py.


The documentation for this class was generated from the following file:
ingest.py