Coverage for python/lsst/daf/butler/datastores/chainedDatastore.py : 93%

# This file is part of daf_butler.
#
# Developed for the LSST Data Management System.
# This product includes software developed by the LSST Project
# (http://www.lsst.org).
# See the COPYRIGHT file at the top-level directory of this distribution
# for details of code ownership.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from lsst.daf.butler import DatastoreValidationError, Constraints, FileDataset
"""Helper class for ChainedDatastore ingest implementation.
Parameters ---------- children : `list` of `tuple` Pairs of `Datastore`, `IngestPrepData` for all child datastores. """
"""Chained Datastores to allow read and writes from multiple datastores.
A ChainedDatastore is configured with multiple datastore configurations. A ``put()`` is always sent to each datastore. A ``get()`` operation is sent to each datastore in turn and the first datastore to return a valid dataset is used.
Parameters ---------- config : `DatastoreConfig` or `str` Configuration. This configuration must include a ``datastores`` field as a sequence of datastore configurations. The order in this sequence indicates the order to use for read operations. registry : `Registry` Registry to use for storing internal information about the datasets. butlerRoot : `str`, optional New datastore root to use to override the configuration value. This root is sent to each child datastore.
Notes ----- ChainedDatastore never supports `None` or `"move"` as an `ingest` transfer mode. It supports `"copy"`, `"symlink"`, and `"hardlink"` if and only if its child datastores do. """
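
# A minimal sketch of the read/write semantics described above, using
# duck-typed stand-ins for the child datastores (the helper name is
# hypothetical, not part of daf_butler):

def _sketch_chain(children, obj, ref):
    """Sketch only: broadcast the put, answer the get from the first hit."""
    for child in children:
        child.put(obj, ref)        # a put is always sent to each child
    for child in children:         # configured order == read order
        if child.exists(ref):
            return child.get(ref)
    raise FileNotFoundError(f"{ref} not found in any child datastore")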
"""Path to configuration defaults. Relative to $DAF_BUTLER_DIR/config or absolute path. Can be None if no defaults specified. """
"""Key to specify where child datastores are configured."""
"""All the child datastores known to this datastore."""
"""Constraints to be applied to each of the child datastores."""
"""Set any filesystem-dependent config options for child Datastores to be appropriate for a new empty repository with the given root.
Parameters ---------- root : `str` Filesystem path to the root of the data repository. config : `Config` A `Config` to update. Only the subset understood by this component will be updated. Will not expand defaults. full : `Config` A complete config with all defaults expanded that can be converted to a `DatastoreConfig`. Read-only and will not be modified by this method. Repository-specific options that should not be obtained from defaults when Butler instances are constructed should be copied from ``full`` to ``config``. overwrite : `bool`, optional If `False`, do not modify a value in ``config`` if the value already exists. Default is always to overwrite with the provided ``root``.
Notes ----- If a keyword is explicitly defined in the supplied ``config`` it will not be overridden by this method if ``overwrite`` is `False`. This allows explicit values set in external configs to be retained. """
# Extract the part of the config we care about updating
# And the subset of the full config that we can use for reference.
# Do not bother with defaults because we are told this already has
# them.
# Loop over each datastore config and pass the subsets to the
# child datastores to process.
fullDatastoreConfig[containerKey])):
# Reattach to parent
# Reattach modified datastore config to parent
# If this has a datastore key we attach there, otherwise we assume
# this information goes at the top of the config hierarchy.
else:
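
# The ``overwrite`` behaviour documented for ``setConfigRoot`` reduces to
# a simple rule, sketched here with a plain dict standing in for a
# hierarchical `Config` (illustration only, not the real implementation):

def _sketch_set_root(config, root, overwrite=True):
    """Sketch only: keep a pre-existing root when overwrite is False."""
    if overwrite or "root" not in config:
        config["root"] = root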
# Scan for child datastores and instantiate them with the same registry
# Name ourself based on our children
# We must set the names explicitly
else:
    childNames = "(empty@{})".format(time.time())
self._names = [childNames]
# We declare we are ephemeral if all our child datastores declare
# they are ephemeral
# per-datastore override constraints
raise DatastoreValidationError(f"Number of registered datastores ({len(self.datastores)})"
                               " differs from number of constraints overrides"
                               f" {len(overrides)}")
for c in overrides]
else:
def names(self):
"""Check if the dataset exists in one of the datastores.
Parameters ---------- ref : `DatasetRef` Reference to the required dataset.
Returns ------- exists : `bool` `True` if the entity exists in one of the child datastores. """
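
# The chained existence check described above amounts to asking each
# child in turn; an equivalent sketch (not necessarily the exact code):

def _sketch_exists(datastores, ref):
    """Sketch only: the dataset exists if any child datastore has it."""
    return any(datastore.exists(ref) for datastore in datastores)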
"""Load an InMemoryDataset from the store.
The dataset is returned from the first datastore that has the dataset.
Parameters ---------- ref : `DatasetRef` Reference to the required Dataset. parameters : `dict` `StorageClass`-specific parameters that specify, for example, a slice of the Dataset to be loaded.
Returns ------- inMemoryDataset : `object` Requested Dataset or slice thereof as an InMemoryDataset.
Raises ------ FileNotFoundError Requested dataset can not be retrieved. TypeError Return value from formatter has unexpected type. ValueError Formatter failed to process the dataset. """
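
# A sketch of the first-match retrieval documented for ``get()`` above;
# the real method may detect missing datasets differently:

def _sketch_get(datastores, ref, parameters=None):
    """Sketch only: return the dataset from the first child that has it."""
    for datastore in datastores:
        try:
            return datastore.get(ref, parameters=parameters)
        except FileNotFoundError:
            continue  # fall through to the next child in read order
    raise FileNotFoundError(f"Dataset {ref} not found in any child datastore")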
"""Write a InMemoryDataset with a given `DatasetRef` to each datastore.
The put() to child datastores can fail with `DatasetTypeNotSupportedError`. The put() for this datastore will be deemed to have succeeded so long as at least one child datastore accepted the inMemoryDataset.
Parameters ---------- inMemoryDataset : `object` The Dataset to store. ref : `DatasetRef` Reference to the associated Dataset.
Raises ------ TypeError Supplied object and storage class are inconsistent. DatasetTypeNotSupportedError All datastores reported `DatasetTypeNotSupportedError`. """
# Confirm that we can accept this dataset
# Raise rather than use boolean return value.
                                " configuration.")
datastore.name, ref)
else:
warnings.warn(f"Put of {ref} only succeeded in ephemeral datastores", stacklevel=2)
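
# A sketch of the accept-if-any-child-accepts rule documented for
# ``put()`` above. Illustrative only: the helper name is hypothetical,
# the import path is assumed from the daf_butler package namespace, and
# the real method also distinguishes ephemeral-only successes.

def _sketch_put(datastores, inMemoryDataset, ref):
    """Sketch only: the put succeeds if at least one child accepts it."""
    from lsst.daf.butler import DatasetTypeNotSupportedError

    naccepted = 0
    for datastore in datastores:
        try:
            datastore.put(inMemoryDataset, ref)
        except DatasetTypeNotSupportedError:
            continue  # this child does not accept the dataset type
        naccepted += 1
    if naccepted == 0:
        raise DatasetTypeNotSupportedError(
            f"No child datastore accepted dataset {ref}")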
# Docstring inherited from Datastore._prepIngest.
name, ", ".join(str(ref) for ref in dataset.refs)) else:
# Filter down to just datasets the chained datastore's own
# configuration accepts.
           if isDatasetAcceptable(dataset, name=self.name, constraints=self.constraints)]
# Iterate over nested datastores and call _prepIngest on each.
# Save the results to a list:
# ...and remember whether all of the failures are due to
# NotImplementedError being raised.
           if isDatasetAcceptable(dataset, name=datastore.name, constraints=constraints)]
else:
          "mode %s is not supported.", datastore.name, transfer)
# Docstring inherited from Datastore._finishIngest.
"""URI to the Dataset.
The returned URI is from the first datastore in the list that has the dataset with preference given to the first dataset coming from a permanent datastore. If no datastores have the dataset and prediction is allowed, the predicted URI for the first datastore in the list will be returned.
Parameters ---------- ref : `DatasetRef` Reference to the required Dataset. predict : `bool` If `True`, allow URIs to be returned of datasets that have not been written.
Returns ------- uri : `str` URI string pointing to the Dataset within the datastore. If the Dataset does not exist in the datastore, and if ``predict`` is `True`, the URI will be a prediction and will include a URI fragment "#predicted".
Notes ----- If the datastore does not have entities that relate well to the concept of a URI the returned URI string will be descriptive. The returned URI is not guaranteed to be obtainable.
Raises ------ FileNotFoundError A URI has been requested for a dataset that does not exist and guessing is not allowed. """
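
# A sketch of the URI resolution order documented above: the first
# permanent child holding the dataset wins, then any child holding it,
# then a prediction from the first child if ``predict`` is allowed.
# Illustrative only; the helper name is hypothetical.

def _sketch_get_uri(datastores, ref, predict=False):
    """Sketch only: permanent hit, then any hit, then predicted URI."""
    firstHit = None
    for datastore in datastores:
        if datastore.exists(ref):
            if not datastore.isEphemeral:
                return datastore.getURI(ref)
            if firstHit is None:
                firstHit = datastore
    if firstHit is not None:
        return firstHit.getURI(ref)
    if predict:
        return datastores[0].getURI(ref, predict=True)
    raise FileNotFoundError(f"Dataset {ref} not in this datastore")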
"""Indicate to the Datastore that a Dataset can be removed.
The dataset will be removed from each datastore. The dataset is not required to exist in every child datastore.
Parameters ---------- ref : `DatasetRef` Reference to the required Dataset.
Raises ------ FileNotFoundError Attempt to remove a dataset that does not exist. Raised if none of the child datastores removed the dataset. """
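
# A sketch of the removal rule documented above: every child is asked to
# remove the dataset, and the operation fails only if no child had it:

def _sketch_remove(datastores, ref):
    """Sketch only: removal succeeds if any child removed the dataset."""
    nremoved = 0
    for datastore in datastores:
        try:
            datastore.remove(ref)
        except FileNotFoundError:
            continue  # this child never had the dataset
        nremoved += 1
    if nremoved == 0:
        raise FileNotFoundError(f"Dataset {ref} not present in any datastore")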
"""Retrieve a Dataset from an input `Datastore`, and store the result in this `Datastore`.
Parameters ---------- inputDatastore : `Datastore` The external `Datastore` from which to retreive the Dataset. ref : `DatasetRef` Reference to the required Dataset in the input data store.
Returns ------- results : `list` List containing the return value from the ``put()`` to each child datastore. """
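
# A sketch of the transfer described above: retrieve the dataset once
# from the input datastore, then fan it out with a put to every child
# (illustrative; the real method may delegate to ``self.put``):

def _sketch_transfer(datastores, inputDatastore, ref):
    """Sketch only: get from the input store, put into each child."""
    inMemoryDataset = inputDatastore.get(ref)
    return [datastore.put(inMemoryDataset, ref) for datastore in datastores]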
"""Validate some of the configuration for this datastore.
Parameters ---------- entities : iterable of `DatasetRef`, `DatasetType`, or `StorageClass` Entities to test against this configuration. Can be differing types. logFailures : `bool`, optional If `True`, output a log message for every validation error detected.
Raises ------ DatastoreValidationError Raised if there is a validation problem with a configuration. All the problems are reported in a single exception.
Notes ----- This method checks each datastore in turn. """
# Need to catch each of the datastore outputs and ensure that
# all are tested.
# Docstring is inherited from base class
failures = []
for datastore in self.datastores:
    try:
        datastore.validateKey(lookupKey, entity)
    except DatastoreValidationError as e:
        failures.append(f"Datastore {self.name}: {e}")

if failures:
    msg = ";\n".join(failures)
    raise DatastoreValidationError(msg)
# Docstring is inherited from base class
keys.update(p.getLookupKeys())