from builtins import object

from . import Policy  # needed for the isinstance check in the Mapper factory below
30 """This module defines the Mapper base class.""" 34 """Mapper is a base class for all mappers. 36 Subclasses may define the following methods: 38 map_{datasetType}(self, dataId, write) 39 Map a dataset id for the given dataset type into a ButlerLocation. 40 If write=True, this mapping is for an output dataset. 42 query_{datasetType}(self, key, format, dataId) 43 Return the possible values for the format fields that would produce 44 datasets at the granularity of key in combination with the provided 47 std_{datasetType}(self, item) 48 Standardize an object of the given data set type. 50 Methods that must be overridden: 53 Return a list of the keys that can be used in data ids. 61 map(self, datasetType, dataId, write=False) 63 queryMetadata(self, datasetType, key, format, dataId) 65 canStandardize(self, datasetType) 67 standardize(self, datasetType, item, dataId) 69 validate(self, dataId) 74 '''Instantiate a Mapper from a configuration. 75 In come cases the cfg may have already been instantiated into a Mapper, this is allowed and 76 the input var is simply returned. 78 :param cfg: the cfg for this mapper. It is recommended this be created by calling 80 :return: a Mapper instance 82 if isinstance(cfg, Policy):
83 return cfg[
'cls'](cfg)
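
    # A minimal sketch of how this factory is intended to be used, assuming a
    # Policy that names a concrete mapper class under the 'cls' key (MyMapper
    # and its 'root' parameter are hypothetical):
    #
    #     cfg = Policy({'cls': MyMapper, 'root': '/data/repo'})
    #     mapper = Mapper.Mapper(cfg)     # instantiates MyMapper(cfg)
    #     same = Mapper.Mapper(mapper)    # already a Mapper: returned as-is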
87 """Create a new Mapper, saving arguments for pickling. 89 This is in __new__ instead of __init__ to save the user 90 from having to save the arguments themselves (either explicitly, 91 or by calling the super's __init__ with all their 92 *args,**kwargs. The resulting pickling system (of __new__, 93 __getstate__ and __setstate__ is similar to how __reduce__ 94 is usually used, except that we save the user from any 95 responsibility (except when overriding __new__, but that 98 self = super(Mapper, cls).
__new__(cls)
106 return self._arguments
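
    # A sketch of the pickling behavior this buys, assuming a hypothetical
    # subclass MyMapper(Mapper) whose __init__ takes a root path:
    #
    #     import pickle
    #     mapper = MyMapper(root='/data/repo')
    #     restored = pickle.loads(pickle.dumps(mapper))
    #
    # __getstate__ captures the (args, kwargs) recorded by __new__, and
    # __setstate__ re-runs __init__(root='/data/repo') on the restored copy,
    # so the subclass never has to implement pickling support itself.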
    def keys(self):
        raise NotImplementedError("keys() unimplemented")
117 """Get possible values for keys given a partial data id. 119 :param datasetType: see documentation about the use of datasetType 120 :param key: this is used as the 'level' parameter 122 :param dataId: see documentation about the use of dataId 125 func = getattr(self,
'query_' + datasetType)
127 val = func(format, self.
validate(dataId))
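
    # queryMetadata simply dispatches to a per-dataset-type hook. A sketch of
    # such a hook on a hypothetical subclass, matching the (format, dataId)
    # call made above; the registry lookup is illustrative only:
    #
    #     def query_calexp(self, format, dataId):
    #         # Return tuples of possible values for the fields in format,
    #         # e.g. format=('visit', 'ccd') -> [(903334, 16), (903334, 22)]
    #         return self._registry.lookup(format, dataId)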
131 """Return a list of the mappable dataset types.""" 134 for attr
in dir(self):
135 if attr.startswith(
"map_"):
136 list.append(attr[4:])
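
    # The dataset types are discovered purely by introspection, so a subclass
    # defining map_calexp and map_raw would report (sketch; method names are
    # illustrative):
    #
    #     mapper.getDatasetTypes()  # -> ['calexp', 'raw']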
    def map(self, datasetType, dataId, write=False):
        """Map a data id using the mapping method for its dataset type.

        Parameters
        ----------
        datasetType : string
            The datasetType to map
        dataId : DataId instance
            The dataId to use when mapping
        write : bool, optional
            Indicates if the map is being performed for a read operation
            (False) or a write operation (True)

        Returns
        -------
        ButlerLocation or a list of ButlerLocation
            The location(s) found for the map operation. If write is True, a
            list is returned. If write is False a single ButlerLocation is
            returned.

        Raises
        ------
        NoResults
            If no location was found for this map operation, the derived mapper
            class may raise a lsst.daf.persistence.NoResults exception. Butler
            catches this and will look in the next Repository if there is one.
        """
        func = getattr(self, 'map_' + datasetType)
        return func(self.validate(dataId), write)
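
    # A minimal sketch of the subclassing convention, assuming a hypothetical
    # MinimalMapper with one dataset type; ButlerLocation construction is
    # elided behind a hypothetical helper because its arguments depend on the
    # butler version in use:
    #
    #     class MinimalMapper(Mapper):
    #         def keys(self):
    #             return ['visit', 'ccd']
    #
    #         def map_calexp(self, dataId, write):
    #             path = 'calexp/v%(visit)d_c%(ccd)d.fits' % dataId
    #             return self._makeButlerLocation(path, write)  # hypothetical
    #
    #     loc = MinimalMapper().map('calexp', {'visit': 903334, 'ccd': 16})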
171 """Return true if this mapper can standardize an object of the given 174 return hasattr(self,
'std_' + datasetType)
177 """Standardize an object using the standardization method for its data 178 set type, if it exists.""" 180 if hasattr(self,
'std_' + datasetType):
181 func = getattr(self,
'std_' + datasetType)
182 return func(item, self.
validate(dataId))
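
    # A sketch of the std_ hook, assuming a hypothetical subclass that
    # attaches calibration metadata after an item is read (both method and
    # helper names are illustrative):
    #
    #     def std_calexp(self, item, dataId):
    #         item.setMetadata(self._loadMetadata(dataId))  # hypothetical
    #         return item
    #
    # For any type without a std_ method, standardize() returns the item
    # unchanged, so it is always safe to call.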
186 """Validate a dataId's contents. 188 If the dataId is valid, return it. If an invalid component can be 189 transformed into a valid one, copy the dataId, fix the component, and 190 return the copy. Otherwise, raise an exception.""" 195 """Rename any existing object with the given type and dataId. 197 Not implemented in the base mapper. 199 raise NotImplementedError(
"Base-class Mapper does not implement backups")
202 """Get the registry""" def __setstate__(self, state)