
1# This file is part of daf_butler. 

2# 

3# Developed for the LSST Data Management System. 

4# This product includes software developed by the LSST Project 

5# (http://www.lsst.org). 

6# See the COPYRIGHT file at the top-level directory of this distribution 

7# for details of code ownership. 

8# 

9# This program is free software: you can redistribute it and/or modify 

10# it under the terms of the GNU General Public License as published by 

11# the Free Software Foundation, either version 3 of the License, or 

12# (at your option) any later version. 

13# 

14# This program is distributed in the hope that it will be useful, 

15# but WITHOUT ANY WARRANTY; without even the implied warranty of 

16# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 

17# GNU General Public License for more details. 

18# 

19# You should have received a copy of the GNU General Public License 

20# along with this program. If not, see <http://www.gnu.org/licenses/>. 

21 

22""" 

23Butler top level classes. 

24""" 

25from __future__ import annotations 

26 

27__all__ = ("Butler", "ButlerValidationError") 

28 

29import os 

30from collections import defaultdict 

31import contextlib 

32import logging 

33from typing import ( 

34 Any, 

35 ClassVar, 

36 ContextManager, 

37 Dict, 

38 Iterable, 

39 List, 

40 Mapping, 

41 MutableMapping, 

42 Optional, 

43 Tuple, 

44 Union, 

45) 

46 

47try: 

48 import boto3 

49except ImportError: 

50 boto3 = None 

51 

52from lsst.utils import doImport 

53from .core import ( 

54 ButlerURI, 

55 CompositesMap, 

56 Config, 

57 ConfigSubset, 

58 DataCoordinate, 

59 DataId, 

60 DatasetRef, 

61 DatasetType, 

62 Datastore, 

63 FileDataset, 

64 Quantum, 

65 RepoExport, 

66 StorageClassFactory, 

67 ValidationError, 

68) 

69from .core.repoRelocation import BUTLER_ROOT_TAG 

70from .core.safeFileIo import safeMakeDir 

71from .core.utils import transactional, getClassOf 

72from .core.s3utils import bucketExists 

73from ._deferredDatasetHandle import DeferredDatasetHandle 

74from ._butlerConfig import ButlerConfig 

75from .registry import Registry, RegistryConfig, CollectionType 

76from .registry.wildcards import CollectionSearch 

77 

78log = logging.getLogger(__name__) 

79 

80 

81class ButlerValidationError(ValidationError): 

82 """There is a problem with the Butler configuration.""" 

83 pass 

84 

85 

86class Butler: 

87 """Main entry point for the data access system. 

88 

89 Parameters 

90 ---------- 

91 config : `ButlerConfig`, `Config` or `str`, optional

92 Configuration. Anything acceptable to the 

93 `ButlerConfig` constructor. If a directory path 

94 is given, the configuration will be read from a ``butler.yaml`` file in

95 that location. If `None` is given, default values will be used.

96 butler : `Butler`, optional

97 If provided, construct a new Butler that uses the same registry and 

98 datastore as the given one, but with the given collection and run. 

99 Incompatible with the ``config``, ``searchPaths``, and ``writeable`` 

100 arguments. 

101 collections : `Any`, optional 

102 An expression specifying the collections to be searched (in order) when 

103 reading datasets, and optionally dataset type restrictions on them. 

104 This may be: 

105 - a `str` collection name; 

106 - a tuple of (collection name, *dataset type restriction*); 

107 - an iterable of either of the above; 

108 - a mapping from `str` to *dataset type restriction*. 

109 

110 See :ref:`daf_butler_collection_expressions` for more information, 

111 including the definition of a *dataset type restriction*. All 

112 collections must either already exist or be specified to be created 

113 by other arguments. 

114 run : `str`, optional 

115 Name of the run datasets should be output to. If the run 

116 does not exist, it will be created. If ``collections`` is `None`, it 

117 will be set to ``[run]``. If this is not set (and ``writeable`` is 

118 not set either), a read-only butler will be created. 

119 tags : `Iterable` [ `str` ], optional 

120 A list of `~CollectionType.TAGGED` collections that datasets should be 

121 associated with in `put` or `ingest` and disassociated from in 

122 `pruneDatasets`. If any of these collections does not exist, it will 

123 be created. 

124 chains : `Mapping` [ `str`, `Iterable` [ `str` ] ], optional 

125 A mapping from the names of new `~CollectionType.CHAINED` collections 

126 to an expression identifying their child collections (which takes the 

127 same form as the ``collections`` argument). Chains may be nested only

128 if children precede their parents in this mapping. 

129 searchPaths : `list` of `str`, optional 

130 Directory paths to search when calculating the full Butler 

131 configuration. Not used if the supplied config is already a 

132 `ButlerConfig`. 

133 writeable : `bool`, optional 

134 Explicitly sets whether the butler supports write operations. If not 

135 provided, a read-write butler is created if any of ``run``, ``tags``, 

136 or ``chains`` is non-empty. 

137 

138 Examples 

139 -------- 

140 While there are many ways to control exactly how a `Butler` interacts with 

141 the collections in its `Registry`, the most common cases are still simple. 

142 

143 For a read-only `Butler` that searches one collection, do:: 

144 

145 butler = Butler("/path/to/repo", collections=["u/alice/DM-50000"]) 

146 

147 For a read-write `Butler` that writes to and reads from a 

148 `~CollectionType.RUN` collection:: 

149 

150 butler = Butler("/path/to/repo", run="u/alice/DM-50000/a") 

151 

152 The `Butler` passed to a ``PipelineTask`` is often much more complex, 

153 because we want to write to one `~CollectionType.RUN` collection but read 

154 from several others (as well), while defining a new 

155 `~CollectionType.CHAINED` collection that combines them all:: 

156 

157 butler = Butler("/path/to/repo", run="u/alice/DM-50000/a", 

158 collections=["u/alice/DM-50000"], 

159 chains={ 

160 "u/alice/DM-50000": ["u/alice/DM-50000/a", 

161 "u/bob/DM-49998", 

162 "raw/hsc"] 

163 }) 

164 

165 This butler will `put` new datasets to the run ``u/alice/DM-50000/a``, but 

166 they'll also be available from the chained collection ``u/alice/DM-50000``. 

167 Datasets will be read first from that run (since it appears first in the 

168 chain), and then from ``u/bob/DM-49998`` and finally ``raw/hsc``. 

169 If ``u/alice/DM-50000`` had already been defined, the ``chains`` argument

170 would be unnecessary. We could also construct a butler that performs 

171 exactly the same `put` and `get` operations without actually creating a 

172 chained collection, just by passing multiple items in ``collections``::

173 

174 butler = Butler("/path/to/repo", run="u/alice/DM-50000/a", 

175 collections=["u/alice/DM-50000/a", 

176 "u/bob/DM-49998", 

177 "raw/hsc"]) 

178 

179 Finally, one can always create a `Butler` with no collections:: 

180 

181 butler = Butler("/path/to/repo", writeable=True) 

182 

183 This can be extremely useful when you just want to use ``butler.registry``, 

184 e.g. for inserting dimension data or managing collections, or when the 

185 collections you want to use with the butler are not consistent. 

186 Passing ``writeable`` explicitly here is only necessary if you want to be

187 able to make changes to the repo; usually the value for ``writeable`` can

188 be guessed from the collection arguments provided, but it defaults to

189 `False` when no collection arguments are given.

190 """ 

191 def __init__(self, config: Union[Config, str, None] = None, *, 

192 butler: Optional[Butler] = None, 

193 collections: Any = None, 

194 run: Optional[str] = None, 

195 tags: Iterable[str] = (), 

196 chains: Optional[Mapping[str, Any]] = None, 

197 searchPaths: Optional[List[str]] = None, 

198 writeable: Optional[bool] = None): 

199 # Transform any single-pass iterator into an actual sequence so we 

200 # can see if it is empty

201 self.tags = tuple(tags) 

202 # Load registry, datastore, etc. from config or existing butler. 

203 if butler is not None: 

204 if config is not None or searchPaths is not None or writeable is not None: 

205 raise TypeError("Cannot pass 'config', 'searchPaths', or 'writeable' " 

206 "arguments with 'butler' argument.") 

207 self.registry = butler.registry 

208 self.datastore = butler.datastore 

209 self.storageClasses = butler.storageClasses 

210 self._composites = butler._composites 

211 self._config = butler._config 

212 else: 

213 self._config = ButlerConfig(config, searchPaths=searchPaths) 

214 if "root" in self._config: 

215 butlerRoot = self._config["root"] 

216 else: 

217 butlerRoot = self._config.configDir 

218 if writeable is None: 

219 writeable = run is not None or chains is not None or self.tags 

220 self.registry = Registry.fromConfig(self._config, butlerRoot=butlerRoot, writeable=writeable) 

221 self.datastore = Datastore.fromConfig(self._config, self.registry.getDatastoreBridgeManager(), 

222 butlerRoot=butlerRoot) 

223 self.storageClasses = StorageClassFactory() 

224 self.storageClasses.addFromConfig(self._config) 

225 self._composites = CompositesMap(self._config, universe=self.registry.dimensions) 

226 # Check the many collection arguments for consistency and create any 

227 # needed collections that don't exist. 

228 if collections is None: 

229 if run is not None: 

230 collections = (run,) 

231 else: 

232 collections = () 

233 self.collections = CollectionSearch.fromExpression(collections) 

234 if chains is None: 

235 chains = {} 

236 self.run = run 

237 if "run" in self._config or "collection" in self._config: 

238 raise ValueError("Passing a run or collection via configuration is no longer supported.") 

239 if self.run is not None: 

240 self.registry.registerCollection(self.run, type=CollectionType.RUN) 

241 for tag in self.tags: 

242 self.registry.registerCollection(tag, type=CollectionType.TAGGED) 

243 for parent, children in chains.items(): 

244 self.registry.registerCollection(parent, type=CollectionType.CHAINED) 

245 self.registry.setCollectionChain(parent, children) 

246 

247 GENERATION: ClassVar[int] = 3 

248 """This is a Generation 3 Butler. 

249 

250 This attribute may be removed in the future, once the Generation 2 Butler 

251 interface has been fully retired; it should only be used in transitional 

252 code. 

253 """ 

254 

255 @staticmethod 

256 def makeRepo(root: str, config: Union[Config, str, None] = None, standalone: bool = False, 

257 createRegistry: bool = True, searchPaths: Optional[List[str]] = None, 

258 forceConfigRoot: bool = True, outfile: Optional[str] = None, 

259 overwrite: bool = False) -> Config: 

260 """Create an empty data repository by adding a butler.yaml config 

261 to a repository root directory. 

262 

263 Parameters 

264 ---------- 

265 root : `str` or `ButlerURI` 

266 Path or URI to the root location of the new repository. Will be 

267 created if it does not exist. 

268 config : `Config` or `str`, optional 

269 Configuration to write to the repository, after setting any 

270 root-dependent Registry or Datastore config options. Can not 

271 be a `ButlerConfig` or a `ConfigSubset`. If `None`, default 

272 configuration will be used. Root-dependent config options 

273 specified in this config are overwritten if ``forceConfigRoot`` 

274 is `True`. 

275 standalone : `bool` 

276 If True, write all expanded defaults, not just customized or 

277 repository-specific settings. 

278 This (mostly) decouples the repository from the default 

279 configuration, insulating it from changes to the defaults (which 

280 may be good or bad, depending on the nature of the changes). 

281 Future *additions* to the defaults will still be picked up when 

282 initializing `Butlers` to repos created with ``standalone=True``. 

283 createRegistry : `bool`, optional 

284 If `True` create a new Registry. 

285 searchPaths : `list` of `str`, optional 

286 Directory paths to search when calculating the full butler 

287 configuration. 

288 forceConfigRoot : `bool`, optional 

289 If `False`, any values present in the supplied ``config`` that 

290 would normally be reset are not overridden and will appear 

291 directly in the output config. This allows non-standard overrides 

292 of the root directory for a datastore or registry to be given. 

293 If this parameter is `True` the values for ``root`` will be 

294 forced into the resulting config if appropriate. 

295 outfile : `str`, optional 

296 If not-`None`, the output configuration will be written to this 

297 location rather than into the repository itself. Can be a URI 

298 string. Can refer to a directory that will be used to write 

299 ``butler.yaml``. 

300 overwrite : `bool`, optional 

301 Create a new configuration file even if one already exists 

302 in the specified output location. Default is to raise 

303 an exception. 

304 

305 Returns 

306 ------- 

307 config : `Config` 

308 The updated `Config` instance written to the repo. 

309 

310 Raises 

311 ------ 

312 ValueError 

313 Raised if a ButlerConfig or ConfigSubset is passed instead of a 

314 regular Config (as these subclasses would make it impossible to 

315 support ``standalone=False``). 

316 FileExistsError 

317 Raised if the output config file already exists. 

318 os.error 

319 Raised if the directory does not exist, exists but is not a 

320 directory, or cannot be created. 

321 

322 Notes 

323 ----- 

324 Note that when ``standalone=False`` (the default), the configuration 

325 search path (see `ConfigSubset.defaultSearchPaths`) that was used to 

326 construct the repository should also be used to construct any Butlers 

327 to avoid configuration inconsistencies. 
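
Examples
--------
A minimal sketch of creating a repository and then opening it for
writing; the path is illustrative::

    from lsst.daf.butler import Butler

    Butler.makeRepo("/path/to/new/repo")
    butler = Butler("/path/to/new/repo", writeable=True)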

328 """ 

329 if isinstance(config, (ButlerConfig, ConfigSubset)): 

330 raise ValueError("makeRepo must be passed a regular Config without defaults applied.") 

331 

332 # for "file" schemes we are assuming POSIX semantics for paths, for 

333 # schemeless URIs we are assuming os.path semantics. 

334 uri = ButlerURI(root, forceDirectory=True) 

335 if uri.scheme == "file" or not uri.scheme: 

336 if not os.path.isdir(uri.ospath): 

337 safeMakeDir(uri.ospath) 

338 elif uri.scheme == "s3": 

339 # bucket must already exist 

340 if not bucketExists(uri.netloc): 

341 raise ValueError(f"Bucket {uri.netloc} does not exist!") 

342 s3 = boto3.client("s3") 

343 # don't create an S3 key when root is at the top level of a bucket

344 if not uri.path == "/": 

345 s3.put_object(Bucket=uri.netloc, Key=uri.relativeToPathRoot) 

346 else: 

347 raise ValueError(f"Unrecognized scheme: {uri.scheme}") 

348 config = Config(config) 

349 

350 # If we are creating a new repo from scratch with relative roots, 

351 # do not propagate an explicit root from the config file 

352 if "root" in config: 

353 del config["root"] 

354 

355 full = ButlerConfig(config, searchPaths=searchPaths) # this applies defaults 

356 datastoreClass = doImport(full["datastore", "cls"]) 

357 datastoreClass.setConfigRoot(BUTLER_ROOT_TAG, config, full, overwrite=forceConfigRoot) 

358 

359 # if key exists in given config, parse it, otherwise parse the defaults 

360 # in the expanded config 

361 if config.get(("registry", "db")): 

362 registryConfig = RegistryConfig(config) 

363 else: 

364 registryConfig = RegistryConfig(full) 

365 defaultDatabaseUri = registryConfig.makeDefaultDatabaseUri(BUTLER_ROOT_TAG) 

366 if defaultDatabaseUri is not None: 

367 Config.updateParameters(RegistryConfig, config, full, 

368 toUpdate={"db": defaultDatabaseUri}, 

369 overwrite=forceConfigRoot) 

370 else: 

371 Config.updateParameters(RegistryConfig, config, full, toCopy=("db",), 

372 overwrite=forceConfigRoot) 

373 

374 if standalone: 

375 config.merge(full) 

376 if outfile is not None: 

377 # When writing to a separate location we must include 

378 # the root of the butler repo in the config else it won't know 

379 # where to look. 

380 config["root"] = uri.geturl() 

381 configURI = outfile 

382 else: 

383 configURI = uri 

384 config.dumpToUri(configURI, overwrite=overwrite) 

385 

386 # Create Registry and populate tables 

387 Registry.fromConfig(config, create=createRegistry, butlerRoot=root) 

388 return config 

389 

390 @classmethod 

391 def _unpickle(cls, config: ButlerConfig, collections: Optional[CollectionSearch], run: Optional[str], 

392 tags: Tuple[str, ...], writeable: bool) -> Butler: 

393 """Callable used to unpickle a Butler. 

394 

395 We prefer not to use ``Butler.__init__`` directly so we can force some 

396 of its many arguments to be keyword-only (note that ``__reduce__`` 

397 can only invoke callables with positional arguments). 

398 

399 Parameters 

400 ---------- 

401 config : `ButlerConfig` 

402 Butler configuration, already coerced into a true `ButlerConfig` 

403 instance (and hence after any search paths for overrides have been 

404 utilized). 

405 collections : `CollectionSearch` 

406 Names of collections to read from. 

407 run : `str`, optional 

408 Name of `~CollectionType.RUN` collection to write to. 

409 tags : `tuple` [`str`] 

410 Names of `~CollectionType.TAGGED` collections to associate with. 

411 writeable : `bool` 

412 Whether the Butler should support write operations. 

413 

414 Returns 

415 ------- 

416 butler : `Butler` 

417 A new `Butler` instance. 

418 """ 

419 return cls(config=config, collections=collections, run=run, tags=tags, writeable=writeable) 

420 

421 def __reduce__(self): 

422 """Support pickling. 

423 """ 

424 return (Butler._unpickle, (self._config, self.collections, self.run, self.tags, 

425 self.registry.isWriteable())) 

426 

427 def __str__(self): 

428 return "Butler(collections={}, run={}, tags={}, datastore='{}', registry='{}')".format( 

429 self.collections, self.run, self.tags, self.datastore, self.registry) 

430 

431 def isWriteable(self) -> bool: 

432 """Return `True` if this `Butler` supports write operations. 

433 """ 

434 return self.registry.isWriteable() 

435 

436 @contextlib.contextmanager 

437 def transaction(self): 

438 """Context manager supporting `Butler` transactions. 

439 

440 Transactions can be nested. 
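
Examples
--------
A minimal sketch; if the block raises, both registry and datastore
changes are rolled back (the dataset types and data ID are
illustrative)::

    with butler.transaction():
        butler.put(catalog, "src", dataId)
        butler.put(exposure, "calexp", dataId)  # a failure here undoes both puts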

441 """ 

442 with self.registry.transaction(): 

443 with self.datastore.transaction(): 

444 yield 

445 

446 def _standardizeArgs(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

447 dataId: Optional[DataId] = None, **kwds: Any) -> Tuple[DatasetType, DataId]: 

448 """Standardize the arguments passed to several Butler APIs. 

449 

450 Parameters 

451 ---------- 

452 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

453 When `DatasetRef` the `dataId` should be `None`. 

454 Otherwise the `DatasetType` or name thereof. 

455 dataId : `dict` or `DataCoordinate` 

456 A `dict` of `Dimension` link name, value pairs that label the 

457 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

458 should be provided as the first argument.

459 kwds 

460 Additional keyword arguments used to augment or construct a 

461 `DataCoordinate`. See `DataCoordinate.standardize` 

462 parameters. 

463 

464 Returns 

465 ------- 

466 datasetType : `DatasetType` 

467 A `DatasetType` instance extracted from ``datasetRefOrType``. 

468 dataId : `dict` or `DataId`, optional 

469 Argument that can be used (along with ``kwds``) to construct a 

470 `DataId`. 

471 

472 Notes 

473 ----- 

474 Butler APIs that conceptually need a DatasetRef also allow passing a 

475 `DatasetType` (or the name of one) and a `DataId` (or a dict and 

476 keyword arguments that can be used to construct one) separately. This 

477 method accepts those arguments and always returns a true `DatasetType` 

478 and a `DataId` or `dict`. 

479 

480 Standardization of `dict` vs `DataId` is best handled by passing the 

481 returned ``dataId`` (and ``kwds``) to `Registry` APIs, which are 

482 generally similarly flexible. 

483 """ 

484 externalDatasetType = None 

485 internalDatasetType = None 

486 if isinstance(datasetRefOrType, DatasetRef): 

487 if dataId is not None or kwds: 

488 raise ValueError("DatasetRef given, cannot use dataId as well") 

489 externalDatasetType = datasetRefOrType.datasetType 

490 dataId = datasetRefOrType.dataId 

491 else: 

492 # Don't check whether DataId is provided, because Registry APIs 

493 # can usually construct a better error message when it wasn't. 

494 if isinstance(datasetRefOrType, DatasetType): 

495 externalDatasetType = datasetRefOrType 

496 else: 

497 internalDatasetType = self.registry.getDatasetType(datasetRefOrType) 

498 

499 # Check that they are self-consistent 

500 if externalDatasetType is not None: 

501 internalDatasetType = self.registry.getDatasetType(externalDatasetType.name) 

502 if externalDatasetType != internalDatasetType: 

503 raise ValueError(f"Supplied dataset type ({externalDatasetType}) inconsistent with " 

504 f"registry definition ({internalDatasetType})") 

505 

506 return internalDatasetType, dataId 

507 

508 def _findDatasetRef(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

509 dataId: Optional[DataId] = None, *, 

510 collections: Any = None, 

511 allowUnresolved: bool = False, 

512 **kwds: Any) -> DatasetRef: 

513 """Shared logic for methods that start with a search for a dataset in 

514 the registry. 

515 

516 Parameters 

517 ---------- 

518 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

519 When `DatasetRef` the `dataId` should be `None`. 

520 Otherwise the `DatasetType` or name thereof. 

521 dataId : `dict` or `DataCoordinate`, optional 

522 A `dict` of `Dimension` link name, value pairs that label the 

523 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

524 should be provided as the first argument. 

525 collections : Any, optional 

526 Collections to be searched, overriding ``self.collections``. 

527 Can be any of the types supported by the ``collections`` argument 

528 to butler construction. 

529 allowUnresolved : `bool`, optional 

530 If `True`, return an unresolved `DatasetRef` if finding a resolved 

531 one in the `Registry` fails. Defaults to `False`. 

532 kwds 

533 Additional keyword arguments used to augment or construct a 

534 `DataId`. See `DataId` parameters. 

535 

536 Returns 

537 ------- 

538 ref : `DatasetRef` 

539 A reference to the dataset identified by the given arguments. 

540 

541 Raises 

542 ------ 

543 LookupError 

544 Raised if no matching dataset exists in the `Registry` (and 

545 ``allowUnresolved is False``). 

546 ValueError 

547 Raised if a resolved `DatasetRef` was passed as an input, but it 

548 differs from the one found in the registry. 

549 TypeError 

550 Raised if no collections were provided. 

551 """ 

552 datasetType, dataId = self._standardizeArgs(datasetRefOrType, dataId, **kwds) 

553 if isinstance(datasetRefOrType, DatasetRef): 

554 idNumber = datasetRefOrType.id 

555 else: 

556 idNumber = None 

557 # Expand the data ID first instead of letting registry.findDataset do 

558 # it, so we get the result even if it returns None. 

559 dataId = self.registry.expandDataId(dataId, graph=datasetType.dimensions, **kwds) 

560 if collections is None: 

561 collections = self.collections 

562 if not collections: 

563 raise TypeError("No input collections provided.") 

564 else: 

565 collections = CollectionSearch.fromExpression(collections) 

566 # Always lookup the DatasetRef, even if one is given, to ensure it is 

567 # present in the current collection. 

568 ref = self.registry.findDataset(datasetType, dataId, collections=collections) 

569 if ref is None: 

570 if allowUnresolved: 

571 return DatasetRef(datasetType, dataId) 

572 else: 

573 raise LookupError(f"Dataset {datasetType.name} with data ID {dataId} " 

574 f"could not be found in collections {collections}.") 

575 if idNumber is not None and idNumber != ref.id: 

576 raise ValueError(f"DatasetRef.id provided ({idNumber}) does not match " 

577 f"id ({ref.id}) in registry in collections {collections}.") 

578 return ref 

579 

580 @transactional 

581 def put(self, obj: Any, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

582 dataId: Optional[DataId] = None, *, 

583 producer: Optional[Quantum] = None, 

584 run: Optional[str] = None, 

585 tags: Optional[Iterable[str]] = None, 

586 **kwds: Any) -> DatasetRef: 

587 """Store and register a dataset. 

588 

589 Parameters 

590 ---------- 

591 obj : `object` 

592 The dataset. 

593 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

594 When `DatasetRef` is provided, ``dataId`` should be `None`. 

595 Otherwise the `DatasetType` or name thereof. 

596 dataId : `dict` or `DataCoordinate` 

597 A `dict` of `Dimension` link name, value pairs that label the 

598 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

599 should be provided as the second argument. 

600 producer : `Quantum`, optional 

601 The producer. 

602 run : `str`, optional 

603 The name of the run the dataset should be added to, overriding 

604 ``self.run``. 

605 tags : `Iterable` [ `str` ], optional 

606 The names of `~CollectionType.TAGGED` collections to associate

607 the dataset with, overriding ``self.tags``. These collections 

608 must have already been added to the `Registry`. 

609 kwds 

610 Additional keyword arguments used to augment or construct a 

611 `DataCoordinate`. See `DataCoordinate.standardize` 

612 parameters. 

613 

614 Returns 

615 ------- 

616 ref : `DatasetRef` 

617 A reference to the stored dataset, updated with the correct id if 

618 given. 

619 

620 Raises 

621 ------ 

622 TypeError 

623 Raised if the butler is read-only or if no run has been provided. 
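
Examples
--------
A minimal sketch of a typical call; the run name, dataset type, and
data ID keys are illustrative and must already be defined in the
repository::

    butler = Butler("/path/to/repo", run="u/alice/DM-50000/a")
    ref = butler.put(exposure, "calexp",
                     instrument="HSC", detector=100, visit=903334)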

624 """ 

625 log.debug("Butler put: %s, dataId=%s, producer=%s, run=%s", datasetRefOrType, dataId, producer, run) 

626 if not self.isWriteable(): 

627 raise TypeError("Butler is read-only.") 

628 datasetType, dataId = self._standardizeArgs(datasetRefOrType, dataId, **kwds) 

629 if isinstance(datasetRefOrType, DatasetRef) and datasetRefOrType.id is not None: 

630 raise ValueError("DatasetRef must not be in registry, must have None id") 

631 

632 if run is None: 

633 if self.run is None: 

634 raise TypeError("No run provided.") 

635 run = self.run 

636 # No need to check type for run; first thing we do is 

637 # insertDatasets, and that will check for us. 

638 

639 if tags is None: 

640 tags = self.tags 

641 else: 

642 tags = tuple(tags) 

643 for tag in tags: 

644 # Check that these are tagged collections up front, so we do not have

645 # to rely on Datastore transactionality to keep the repo unmodified

646 # if there's an error later.

647 collectionType = self.registry.getCollectionType(tag) 

648 if collectionType is not CollectionType.TAGGED: 

649 raise TypeError(f"Cannot associate into collection '{tag}' of non-TAGGED type " 

650 f"{collectionType.name}.") 

651 

652 # Add Registry Dataset entry. 

653 dataId = self.registry.expandDataId(dataId, graph=datasetType.dimensions, **kwds) 

654 ref, = self.registry.insertDatasets(datasetType, run=run, dataIds=[dataId], 

655 producer=producer) 

656 

657 # Add Datastore entry. 

658 self.datastore.put(obj, ref) 

659 

660 for tag in tags: 

661 self.registry.associate(tag, [ref]) 

662 

663 return ref 

664 

665 def getDirect(self, ref: DatasetRef, *, parameters: Optional[Dict[str, Any]] = None): 

666 """Retrieve a stored dataset. 

667 

668 Unlike `Butler.get`, this method allows datasets outside the Butler's 

669 collection to be read as long as the `DatasetRef` that identifies them 

670 can be obtained separately. 

671 

672 Parameters 

673 ---------- 

674 ref : `DatasetRef` 

675 Reference to an already stored dataset. 

676 parameters : `dict` 

677 Additional StorageClass-defined options to control reading, 

678 typically used to efficiently read only a subset of the dataset. 

679 

680 Returns 

681 ------- 

682 obj : `object` 

683 The dataset. 

684 """ 

685 return self.datastore.get(ref, parameters=parameters) 

686 

687 def getDeferred(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

688 dataId: Optional[DataId] = None, *, 

689 parameters: Union[dict, None] = None, 

690 collections: Any = None, 

691 **kwds: Any) -> DeferredDatasetHandle: 

692 """Create a `DeferredDatasetHandle` which can later retrieve a dataset 

693 

694 Parameters 

695 ---------- 

696 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

697 When `DatasetRef` the `dataId` should be `None`. 

698 Otherwise the `DatasetType` or name thereof. 

699 dataId : `dict` or `DataCoordinate`, optional 

700 A `dict` of `Dimension` link name, value pairs that label the 

701 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

702 should be provided as the first argument. 

703 parameters : `dict` 

704 Additional StorageClass-defined options to control reading, 

705 typically used to efficiently read only a subset of the dataset. 

706 collections : Any, optional 

707 Collections to be searched, overriding ``self.collections``. 

708 Can be any of the types supported by the ``collections`` argument 

709 to butler construction. 

710 kwds 

711 Additional keyword arguments used to augment or construct a 

712 `DataId`. See `DataId` parameters. 

713 

714 Returns 

715 ------- 

716 obj : `DeferredDatasetHandle` 

717 A handle which can be used to retrieve a dataset at a later time. 

718 

719 Raises 

720 ------ 

721 LookupError 

722 Raised if no matching dataset exists in the `Registry` (and 

723 ``allowUnresolved is False``). 

724 ValueError 

725 Raised if a resolved `DatasetRef` was passed as an input, but it 

726 differs from the one found in the registry. 

727 TypeError 

728 Raised if no collections were provided. 
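
Examples
--------
A minimal sketch; nothing is read from the datastore until ``get`` is
called on the returned handle (the dataset type and data ID keys are
illustrative)::

    handle = butler.getDeferred("calexp",
                                instrument="HSC", detector=100, visit=903334)
    # ... later, only if the dataset is actually needed ...
    calexp = handle.get()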

729 """ 

730 ref = self._findDatasetRef(datasetRefOrType, dataId, collections=collections, **kwds) 

731 return DeferredDatasetHandle(butler=self, ref=ref, parameters=parameters) 

732 

733 def get(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

734 dataId: Optional[DataId] = None, *, 

735 parameters: Optional[Dict[str, Any]] = None, 

736 collections: Any = None, 

737 **kwds: Any) -> Any: 

738 """Retrieve a stored dataset. 

739 

740 Parameters 

741 ---------- 

742 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

743 When `DatasetRef` the `dataId` should be `None`. 

744 Otherwise the `DatasetType` or name thereof. 

745 dataId : `dict` or `DataCoordinate` 

746 A `dict` of `Dimension` link name, value pairs that label the 

747 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

748 should be provided as the first argument. 

749 parameters : `dict` 

750 Additional StorageClass-defined options to control reading, 

751 typically used to efficiently read only a subset of the dataset. 

752 collections : Any, optional 

753 Collections to be searched, overriding ``self.collections``. 

754 Can be any of the types supported by the ``collections`` argument 

755 to butler construction. 

756 kwds 

757 Additional keyword arguments used to augment or construct a 

758 `DataCoordinate`. See `DataCoordinate.standardize` 

759 parameters. 

760 

761 Returns 

762 ------- 

763 obj : `object` 

764 The dataset. 

765 

766 Raises 

767 ------ 

768 ValueError 

769 Raised if a resolved `DatasetRef` was passed as an input, but it 

770 differs from the one found in the registry. 

771 LookupError 

772 Raised if no matching dataset exists in the `Registry`. 

773 TypeError 

774 Raised if no collections were provided. 
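
Examples
--------
A minimal sketch; the collection, dataset type, data ID keys, and
``parameters`` are illustrative (``bbox`` stands for a region object
defined elsewhere)::

    butler = Butler("/path/to/repo", collections=["u/alice/DM-50000"])
    calexp = butler.get("calexp", instrument="HSC", detector=100, visit=903334)
    # StorageClass-defined parameters can limit what is read, e.g. a cutout:
    stamp = butler.get("calexp", dataId, parameters={"bbox": bbox})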

775 """ 

776 log.debug("Butler get: %s, dataId=%s, parameters=%s", datasetRefOrType, dataId, parameters) 

777 ref = self._findDatasetRef(datasetRefOrType, dataId, collections=collections, **kwds) 

778 return self.getDirect(ref, parameters=parameters) 

779 

780 def getURIs(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

781 dataId: Optional[DataId] = None, *, 

782 predict: bool = False, 

783 collections: Any = None, 

784 run: Optional[str] = None, 

785 **kwds: Any) -> Tuple[Optional[ButlerURI], Dict[str, ButlerURI]]: 

786 """Returns the URIs associated with the dataset. 

787 

788 Parameters 

789 ---------- 

790 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

791 When `DatasetRef` the `dataId` should be `None`. 

792 Otherwise the `DatasetType` or name thereof. 

793 dataId : `dict` or `DataCoordinate` 

794 A `dict` of `Dimension` link name, value pairs that label the 

795 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

796 should be provided as the first argument. 

797 predict : `bool` 

798 If `True`, allow URIs to be returned of datasets that have not 

799 been written. 

800 collections : Any, optional 

801 Collections to be searched, overriding ``self.collections``. 

802 Can be any of the types supported by the ``collections`` argument 

803 to butler construction. 

804 run : `str`, optional 

805 Run to use for predictions, overriding ``self.run``. 

806 kwds 

807 Additional keyword arguments used to augment or construct a 

808 `DataCoordinate`. See `DataCoordinate.standardize` 

809 parameters. 

810 

811 Returns 

812 ------- 

813 primary : `ButlerURI` 

814 The URI to the primary artifact associated with this dataset. 

815 If the dataset was disassembled within the datastore this 

816 may be `None`. 

817 components : `dict` 

818 URIs to any components associated with the dataset artifact. 

819 Can be empty if there are no components. 

820 """ 

821 ref = self._findDatasetRef(datasetRefOrType, dataId, allowUnresolved=predict, 

822 collections=collections, **kwds) 

823 if ref.id is None: # only possible if predict is True 

824 if run is None: 

825 run = self.run 

826 if run is None: 

827 raise TypeError("Cannot predict location with run=None.") 

828 # Lie about ID, because we can't guess it, and only 

829 # Datastore.getURIs() will ever see it (and it doesn't use it). 

830 ref = ref.resolved(id=0, run=run)

831 return self.datastore.getURIs(ref, predict) 

832 

833 def getURI(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

834 dataId: Optional[DataId] = None, *, 

835 predict: bool = False, 

836 collections: Any = None, 

837 run: Optional[str] = None, 

838 **kwds: Any) -> ButlerURI: 

839 """Return the URI to the Dataset. 

840 

841 Parameters 

842 ---------- 

843 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

844 When `DatasetRef` the `dataId` should be `None`. 

845 Otherwise the `DatasetType` or name thereof. 

846 dataId : `dict` or `DataCoordinate` 

847 A `dict` of `Dimension` link name, value pairs that label the 

848 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

849 should be provided as the first argument. 

850 predict : `bool` 

851 If `True`, allow URIs to be returned of datasets that have not 

852 been written. 

853 collections : Any, optional 

854 Collections to be searched, overriding ``self.collections``. 

855 Can be any of the types supported by the ``collections`` argument 

856 to butler construction. 

857 run : `str`, optional 

858 Run to use for predictions, overriding ``self.run``. 

859 kwds 

860 Additional keyword arguments used to augment or construct a 

861 `DataCoordinate`. See `DataCoordinate.standardize` 

862 parameters. 

863 

864 Returns 

865 ------- 

866 uri : `ButlerURI` 

867 URI pointing to the Dataset within the datastore. If the 

868 Dataset does not exist in the datastore, and if ``predict`` is 

869 `True`, the URI will be a prediction and will include a URI 

870 fragment "#predicted". 

871 If the datastore does not have entities that relate well 

872 to the concept of a URI the returned URI string will be 

873 descriptive. The returned URI is not guaranteed to be obtainable. 

874 

875 Raises 

876 ------ 

877 LookupError 

878 Raised if a URI has been requested for a dataset that does not exist

879 guessing is not allowed. 

880 ValueError 

881 Raised if a resolved `DatasetRef` was passed as an input, but it 

882 differs from the one found in the registry. 

883 TypeError 

884 Raised if no collections were provided. 

885 RuntimeError 

886 Raised if a URI is requested for a dataset that consists of 

887 multiple artifacts. 

888 """ 

889 primary, components = self.getURIs(datasetRefOrType, dataId=dataId, predict=predict, 

890 collections=collections, run=run, **kwds) 

891 

892 if primary is None or components: 

893 raise RuntimeError(f"Dataset ({datasetRefOrType}) includes distinct URIs for components. " 

894 "Use Butler.getURIs() instead.") 

895 return primary 

896 

897 def datasetExists(self, datasetRefOrType: Union[DatasetRef, DatasetType, str], 

898 dataId: Optional[DataId] = None, *, 

899 collections: Any = None, 

900 **kwds: Any) -> bool: 

901 """Return True if the Dataset is actually present in the Datastore. 

902 

903 Parameters 

904 ---------- 

905 datasetRefOrType : `DatasetRef`, `DatasetType`, or `str` 

906 When `DatasetRef` the `dataId` should be `None`. 

907 Otherwise the `DatasetType` or name thereof. 

908 dataId : `dict` or `DataCoordinate` 

909 A `dict` of `Dimension` link name, value pairs that label the 

910 `DatasetRef` within a Collection. When `None`, a `DatasetRef` 

911 should be provided as the first argument. 

912 collections : Any, optional 

913 Collections to be searched, overriding ``self.collections``. 

914 Can be any of the types supported by the ``collections`` argument 

915 to butler construction. 

916 kwds 

917 Additional keyword arguments used to augment or construct a 

918 `DataCoordinate`. See `DataCoordinate.standardize` 

919 parameters. 

920 

921 Raises 

922 ------ 

923 LookupError 

924 Raised if the dataset is not even present in the Registry. 

925 ValueError 

926 Raised if a resolved `DatasetRef` was passed as an input, but it 

927 differs from the one found in the registry. 

928 TypeError 

929 Raised if no collections were provided. 
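
Examples
--------
A minimal sketch (names are illustrative); note that a dataset missing
from the `Registry` raises `LookupError` rather than returning `False`::

    dataId = {"instrument": "HSC", "detector": 100, "visit": 903334}
    if butler.datasetExists("calexp", dataId):
        calexp = butler.get("calexp", dataId)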

930 """ 

931 ref = self._findDatasetRef(datasetRefOrType, dataId, collections=collections, **kwds) 

932 return self.datastore.exists(ref) 

933 

934 def pruneCollection(self, name: str, purge: bool = False, unstore: bool = False): 

935 """Remove a collection and possibly prune datasets within it. 

936 

937 Parameters 

938 ---------- 

939 name : `str` 

940 Name of the collection to remove. If this is a 

941 `~CollectionType.TAGGED` or `~CollectionType.CHAINED` collection, 

942 datasets within the collection are not modified unless ``unstore`` 

943 is `True`. If this is a `~CollectionType.RUN` collection, 

944 ``purge`` and ``unstore`` must be `True`, and all datasets in it 

945 are fully removed from the data repository. 

946 purge : `bool`, optional 

947 If `True`, permit `~CollectionType.RUN` collections to be removed, 

948 fully removing datasets within them. Requires ``unstore=True`` as 

949 well as an added precaution against accidental deletion. Must be 

950 `False` (default) if the collection is not a ``RUN``. 

951 unstore : `bool`, optional

952 If `True`, remove all datasets in the collection from all 

953 datastores in which they appear. 

954 

955 Raises 

956 ------ 

957 TypeError 

958 Raised if the butler is read-only or arguments are mutually 

959 inconsistent. 
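
Examples
--------
A minimal sketch (collection names are illustrative)::

    # Remove a TAGGED collection; the datasets themselves are untouched.
    butler.pruneCollection("u/alice/tagged-goodSeeing")
    # Completely remove a RUN collection and all datasets written to it.
    butler.pruneCollection("u/alice/DM-50000/scratch", purge=True, unstore=True)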

960 """ 

961 # See pruneDatasets comments for more information about the logic here; 

962 # the cases are almost the same, but here we can rely on Registry to 

963 # take care of everything but Datastore deletion when we remove the

964 # collection. 

965 if not self.isWriteable(): 

966 raise TypeError("Butler is read-only.") 

967 if purge and not unstore: 

968 raise TypeError("Cannot pass purge=True without unstore=True.") 

969 collectionType = self.registry.getCollectionType(name) 

970 if collectionType is CollectionType.RUN and not purge: 

971 raise TypeError(f"Cannot prune RUN collection {name} without purge=True.") 

972 if collectionType is not CollectionType.RUN and purge: 

973 raise TypeError(f"Cannot prune {collectionType.name} collection {name} with purge=True.") 

974 with self.registry.transaction(): 

975 if unstore: 

976 for ref in self.registry.queryDatasets(..., collections=name, deduplicate=True): 

977 if self.datastore.exists(ref): 

978 self.datastore.trash(ref) 

979 self.registry.removeCollection(name) 

980 if unstore: 

981 # Point of no return for removing artifacts 

982 self.datastore.emptyTrash() 

983 

984 def pruneDatasets(self, refs: Iterable[DatasetRef], *, 

985 disassociate: bool = True, 

986 unstore: bool = False, 

987 tags: Optional[Iterable[str]] = None, 

988 purge: bool = False, 

989 run: Optional[str] = None): 

990 """Remove one or more datasets from a collection and/or storage. 

991 

992 Parameters 

993 ---------- 

994 refs : `~collections.abc.Iterable` of `DatasetRef` 

995 Datasets to prune. These must be "resolved" references (not just 

996 a `DatasetType` and data ID). 

997 disassociate : `bool`, optional

998 Disassociate pruned datasets from ``self.tags`` (or the collections 

999 given via the ``tags`` argument). Ignored if ``refs`` is ``...``. 

1000 unstore : `bool`, optional 

1001 If `True` (`False` is default) remove these datasets from all 

1002 datastores known to this butler. Note that this will make it 

1003 impossible to retrieve these datasets even via other collections. 

1004 Datasets that are already not stored are ignored by this option. 

1005 tags : `Iterable` [ `str` ], optional 

1006 `~CollectionType.TAGGED` collections to disassociate the datasets 

1007 from, overriding ``self.tags``. Ignored if ``disassociate`` is 

1008 `False` or ``purge`` is `True`. 

1009 purge : `bool`, optional 

1010 If `True` (`False` is default), completely remove the dataset from 

1011 the `Registry`. To prevent accidental deletions, ``purge`` may 

1012 only be `True` if all of the following conditions are met: 

1013 

1014 - All given datasets are in the given run;

1015 - ``disassociate`` is `True`; 

1016 - ``unstore`` is `True`. 

1017 

1018 This mode may remove provenance information from datasets other 

1019 than those provided, and should be used with extreme care. 

1020 run : `str`, optional 

1021 `~CollectionType.RUN` collection to purge from, overriding 

1022 ``self.run``. Ignored unless ``purge`` is `True`. 

1023 

1024 Raises 

1025 ------ 

1026 TypeError 

1027 Raised if the butler is read-only, if no collection was provided, 

1028 or the conditions for ``purge=True`` were not met. 
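
Examples
--------
A minimal sketch that fully removes the datasets selected by a registry
query (the dataset type and run name are illustrative)::

    run = "u/alice/DM-50000/scratch"
    refs = butler.registry.queryDatasets("calexp", collections=run)
    butler.pruneDatasets(refs, disassociate=True, unstore=True,
                         purge=True, run=run)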

1029 """ 

1030 if not self.isWriteable(): 

1031 raise TypeError("Butler is read-only.") 

1032 if purge: 

1033 if not disassociate: 

1034 raise TypeError("Cannot pass purge=True without disassociate=True.") 

1035 if not unstore: 

1036 raise TypeError("Cannot pass purge=True without unstore=True.") 

1037 if run is None: 

1038 run = self.run 

1039 if run is None: 

1040 raise TypeError("No run provided but purge=True.") 

1041 collectionType = self.registry.getCollectionType(run) 

1042 if collectionType is not CollectionType.RUN: 

1043 raise TypeError(f"Cannot purge from collection '{run}' " 

1044 f"of non-RUN type {collectionType.name}.") 

1045 elif disassociate: 

1046 if tags is None: 

1047 tags = self.tags 

1048 else: 

1049 tags = tuple(tags) 

1050 if not tags: 

1051 raise TypeError("No tags provided but disassociate=True.") 

1052 for tag in tags: 

1053 collectionType = self.registry.getCollectionType(tag) 

1054 if collectionType is not CollectionType.TAGGED: 

1055 raise TypeError(f"Cannot disassociate from collection '{tag}' " 

1056 f"of non-TAGGED type {collectionType.name}.") 

1057 # Transform possibly-single-pass iterable into something we can iterate 

1058 # over multiple times. 

1059 refs = list(refs) 

1060 # Pruning a component of a DatasetRef makes no sense since registry 

1061 # doesn't know about components and datastore might not store 

1062 # components in a separate file 

1063 for ref in refs: 

1064 if ref.datasetType.component(): 

1065 raise ValueError(f"Can not prune a component of a dataset (ref={ref})") 

1066 # We don't need an unreliable Datastore transaction for this, because 

1067 # we've been extra careful to ensure that Datastore.trash only involves 

1068 # mutating the Registry (it can _look_ at Datastore-specific things, 

1069 # but shouldn't change them), and hence all operations here are 

1070 # Registry operations. 

1071 with self.registry.transaction(): 

1072 if unstore: 

1073 for ref in refs: 

1074 # There is a difference between a concrete composite 

1075 # and virtual composite. In a virtual composite the 

1076 # datastore is never given the top level DatasetRef. In 

1077 # the concrete composite the datastore knows all the 

1078 # refs and will clean up itself if asked to remove the 

1079 # parent ref. We can not check configuration for this 

1080 # since we can not trust that the configuration is the 

1081 # same. We therefore have to ask if the ref exists or 

1082 # not. This is consistent with the fact that we want 

1083 # to ignore already-removed-from-datastore datasets 

1084 # anyway. 

1085 if self.datastore.exists(ref): 

1086 self.datastore.trash(ref) 

1087 if purge: 

1088 self.registry.removeDatasets(refs) 

1089 elif disassociate: 

1090 for tag in tags: 

1091 self.registry.disassociate(tag, refs) 

1092 # We've exited the Registry transaction, and apparently committed. 

1093 # (if there was an exception, everything rolled back, and it's as if 

1094 # nothing happened - and we never get here). 

1095 # Datastore artifacts are not yet gone, but they're clearly marked 

1096 # as trash, so if we fail to delete now because of (e.g.) filesystem 

1097 # problems we can try again later, and if manual administrative 

1098 # intervention is required, it's pretty clear what that should entail: 

1099 # deleting everything on disk and in private Datastore tables that is 

1100 # in the dataset_location_trash table. 

1101 if unstore: 

1102 # Point of no return for removing artifacts 

1103 self.datastore.emptyTrash() 

1104 

1105 @transactional 

1106 def ingest(self, *datasets: FileDataset, transfer: Optional[str] = None, run: Optional[str] = None, 

1107 tags: Optional[Iterable[str]] = None):

1108 """Store and register one or more datasets that already exist on disk. 

1109 

1110 Parameters 

1111 ---------- 

1112 datasets : `FileDataset` 

1113 Each positional argument is a struct containing information about 

1114 a file to be ingested, including its path (either absolute or 

1115 relative to the datastore root, if applicable), a `DatasetRef`, 

1116 and optionally a formatter class or its fully-qualified string 

1117 name. If a formatter is not provided, the formatter that would be 

1118 used for `put` is assumed. On successful return, all 

1119 `FileDataset.ref` attributes will have their `DatasetRef.id` 

1120 attribute populated and all `FileDataset.formatter` attributes will 

1121 be set to the formatter class used. `FileDataset.path` attributes 

1122 may be modified to put paths in whatever the datastore considers a 

1123 standardized form. 

1124 transfer : `str`, optional 

1125 If not `None`, must be one of 'auto', 'move', 'copy', 'hardlink', 

1126 'relsymlink' or 'symlink', indicating how to transfer the file. 

1127 run : `str`, optional 

1128 The name of the run ingested datasets should be added to, 

1129 overriding ``self.run``. 

1130 tags : `Iterable` [ `str` ], optional 

1131 The names of `~CollectionType.TAGGED` collections to associate

1132 the dataset with, overriding ``self.tags``. These collections 

1133 must have already been added to the `Registry`. 

1134 

1135 Raises 

1136 ------ 

1137 TypeError 

1138 Raised if the butler is read-only or if no run was provided. 

1139 NotImplementedError 

1140 Raised if the `Datastore` does not support the given transfer mode. 

1141 DatasetTypeNotSupportedError 

1142 Raised if one or more files to be ingested have a dataset type that 

1143 is not supported by the `Datastore`.

1144 FileNotFoundError 

1145 Raised if one of the given files does not exist. 

1146 FileExistsError 

1147 Raised if transfer is not `None` but the (internal) location the 

1148 file would be moved to is already occupied. 

1149 

1150 Notes 

1151 ----- 

1152 This operation is not fully exception safe: if a database operation 

1153 fails, the given `FileDataset` instances may be only partially updated. 

1154 

1155 It is atomic in terms of database operations (they will either all 

1156 succeed or all fail) providing the database engine implements 

1157 transactions correctly. It will attempt to be atomic in terms of 

1158 filesystem operations as well, but this cannot be implemented 

1159 rigorously for most datastores. 
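
Examples
--------
A minimal sketch ingesting a single file that already exists on disk;
the path, dataset type, and data ID are illustrative and assume the
dataset type is already registered::

    from lsst.daf.butler import DatasetRef, FileDataset

    datasetType = butler.registry.getDatasetType("raw")
    ref = DatasetRef(datasetType,
                     {"instrument": "HSC", "detector": 100, "exposure": 903334})
    butler.ingest(FileDataset(path="/data/HSC-903334-100.fits", refs=[ref]),
                  transfer="symlink")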

1160 """ 

1161 if not self.isWriteable(): 

1162 raise TypeError("Butler is read-only.") 

1163 if run is None: 

1164 if self.run is None: 

1165 raise TypeError("No run provided.") 

1166 run = self.run 

1167 # No need to check run type, since insertDatasets will do that 

1168 # (safely) for us. 

1169 if tags is None: 

1170 tags = self.tags 

1171 else: 

1172 tags = tuple(tags) 

1173 for tag in tags: 

1174 # Check that these are tagged collections up front, so we do not have

1175 # to rely on Datastore transactionality to keep the repo unmodified

1176 # if there's an error later.

1177 collectionType = self.registry.getCollectionType(tag) 

1178 if collectionType is not CollectionType.TAGGED: 

1179 raise TypeError(f"Cannot associate into collection '{tag}' of non-TAGGED type " 

1180 f"{collectionType.name}.") 

1181 # Reorganize the inputs so they're grouped by DatasetType and then 

1182 # data ID. We also include a list of DatasetRefs for each FileDataset 

1183 # to hold the resolved DatasetRefs returned by the Registry, before 

1184 # it's safe to swap them into FileDataset.refs. 

1185 # Some type annotation aliases to make that clearer: 

1186 GroupForType = Dict[DataCoordinate, Tuple[FileDataset, List[DatasetRef]]] 

1187 GroupedData = MutableMapping[DatasetType, GroupForType] 

1188 # The actual data structure: 

1189 groupedData: GroupedData = defaultdict(dict) 

1190 # And the nested loop that populates it: 

1191 for dataset in datasets: 

1192 # This list intentionally shared across the inner loop, since it's 

1193 # associated with `dataset`. 

1194 resolvedRefs = [] 

1195 for ref in dataset.refs: 

1196 groupedData[ref.datasetType][ref.dataId] = (dataset, resolvedRefs) 

1197 

1198 # Now we can bulk-insert into Registry for each DatasetType. 

1199 allResolvedRefs = [] 

1200 for datasetType, groupForType in groupedData.items(): 

1201 refs = self.registry.insertDatasets(datasetType, 

1202 dataIds=groupForType.keys(), 

1203 run=run) 

1204 # Append those resolved DatasetRefs to the new lists we set up for 

1205 # them. 

1206 for ref, (_, resolvedRefs) in zip(refs, groupForType.values()): 

1207 resolvedRefs.append(ref) 

1208 

1209 # Go back to the original FileDatasets to replace their refs with the 

1210 # new resolved ones, and also build a big list of all refs. 

1211 allResolvedRefs = [] 

1212 for groupForType in groupedData.values(): 

1213 for dataset, resolvedRefs in groupForType.values(): 

1214 dataset.refs = resolvedRefs 

1215 allResolvedRefs.extend(resolvedRefs) 

1216 

1217 # Bulk-associate everything with any tagged collections. 

1218 for tag in tags: 

1219 self.registry.associate(tag, allResolvedRefs) 

1220 

1221 # Bulk-insert everything into Datastore. 

1222 self.datastore.ingest(*datasets, transfer=transfer) 

1223 

1224 @contextlib.contextmanager 

1225 def export(self, *, directory: Optional[str] = None, 

1226 filename: Optional[str] = None, 

1227 format: Optional[str] = None, 

1228 transfer: Optional[str] = None) -> ContextManager[RepoExport]: 

1229 """Export datasets from the repository represented by this `Butler`. 

1230 

1231 This method is a context manager that returns a helper object 

1232 (`RepoExport`) that is used to indicate what information from the 

1233 repository should be exported. 

1234 

1235 Parameters 

1236 ---------- 

1237 directory : `str`, optional 

1238 Directory dataset files should be written to if ``transfer`` is not 

1239 `None`. 

1240 filename : `str`, optional 

1241 Name for the file that will include database information associated 

1242 with the exported datasets. If this is not an absolute path and 

1243 ``directory`` is not `None`, it will be written to ``directory`` 

1244 instead of the current working directory. Defaults to 

1245 "export.{format}". 

1246 format : `str`, optional 

1247 File format for the database information file. If `None`, the 

1248 extension of ``filename`` will be used. 

1249 transfer : `str`, optional 

1250 Transfer mode passed to `Datastore.export`. 

1251 

1252 Raises 

1253 ------ 

1254 TypeError 

1255 Raised if the set of arguments passed is inconsistent. 

1256 

1257 Examples 

1258 -------- 

1259 Typically the `Registry.queryDimensions` and `Registry.queryDatasets` 

1260 methods are used to provide the iterables over data IDs and/or datasets 

1261 to be exported:: 

1262 

1263 with butler.export("exports.yaml") as export: 

1264 # Export all flats, and the calibration_label dimensions 

1265 # associated with them. 

1266 export.saveDatasets(butler.registry.queryDatasets("flat"), 

1267 elements=[butler.registry.dimensions["calibration_label"]]) 

1268 # Export all datasets that start with "deepCoadd_" and all of 

1269 # their associated data ID information. 

1270 export.saveDatasets(butler.registry.queryDatasets("deepCoadd_*")) 

1271 """ 

1272 if directory is None and transfer is not None: 

1273 raise TypeError("Cannot transfer without providing a directory.") 

1274 if transfer == "move": 

1275 raise TypeError("Transfer may not be 'move': export is read-only") 

1276 if format is None: 

1277 if filename is None: 

1278 raise TypeError("At least one of 'filename' or 'format' must be provided.") 

1279 else: 

1280 _, format = os.path.splitext(filename) 

1281 elif filename is None: 

1282 filename = f"export.{format}" 

1283 if directory is not None: 

1284 filename = os.path.join(directory, filename) 

1285 BackendClass = getClassOf(self._config["repo_transfer_formats"][format]["export"]) 

1286 with open(filename, 'w') as stream: 

1287 backend = BackendClass(stream) 

1288 try: 

1289 helper = RepoExport(self.registry, self.datastore, backend=backend, 

1290 directory=directory, transfer=transfer) 

1291 yield helper 

1292 except BaseException: 

1293 raise 

1294 else: 

1295 helper._finish() 

1296 

1297 def import_(self, *, directory: Optional[str] = None, 

1298 filename: Optional[str] = None, 

1299 format: Optional[str] = None, 

1300 transfer: Optional[str] = None): 

1301 """Import datasets exported from a different butler repository. 

1302 

1303 Parameters 

1304 ---------- 

1305 directory : `str`, optional 

1306 Directory containing dataset files. If `None`, all file paths 

1307 must be absolute. 

1308 filename : `str`, optional 

1309 Name for the file containing database information associated

1310 with the exported datasets. If this is not an absolute path, does 

1311 not exist in the current working directory, and ``directory`` is 

1312 not `None`, it is assumed to be in ``directory``. Defaults to 

1313 "export.{format}". 

1314 format : `str`, optional 

1315 File format for the database information file. If `None`, the 

1316 extension of ``filename`` will be used. 

1317 transfer : `str`, optional 

1318 Transfer mode passed to `Datastore.ingest`.

1319 

1320 Raises 

1321 ------ 

1322 TypeError 

1323 Raised if the set of arguments passed is inconsistent, or if the 

1324 butler is read-only. 
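
Examples
--------
A minimal sketch importing the contents of a previous `export` call
(paths are illustrative)::

    butler.import_(directory="/path/to/exported/files",
                   filename="exports.yaml", transfer="copy")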

1325 """ 

1326 if not self.isWriteable(): 

1327 raise TypeError("Butler is read-only.") 

1328 if format is None: 

1329 if filename is None: 

1330 raise TypeError("At least one of 'filename' or 'format' must be provided.") 

1331 else: 

1332 _, format = os.path.splitext(filename) 

1333 elif filename is None: 

1334 filename = f"export.{format}" 

1335 if directory is not None and not os.path.exists(filename): 

1336 filename = os.path.join(directory, filename) 

1337 BackendClass = getClassOf(self._config["repo_transfer_formats"][format]["import"]) 

1338 with open(filename, 'r') as stream: 

1339 backend = BackendClass(stream, self.registry) 

1340 backend.register() 

1341 with self.transaction(): 

1342 backend.load(self.datastore, directory=directory, transfer=transfer) 

1343 

1344 def validateConfiguration(self, logFailures: bool = False, 

1345 datasetTypeNames: Optional[Iterable[str]] = None, 

1346 ignore: Optional[Iterable[str]] = None):

1347 """Validate butler configuration. 

1348 

1349 Checks that each `DatasetType` can be stored in the `Datastore`. 

1350 

1351 Parameters 

1352 ---------- 

1353 logFailures : `bool`, optional 

1354 If `True`, output a log message for every validation error 

1355 detected. 

1356 datasetTypeNames : iterable of `str`, optional 

1357 The `DatasetType` names that should be checked. This allows 

1358 only a subset to be selected. 

1359 ignore : iterable of `str`, optional 

1360 Names of DatasetTypes to skip over. This can be used to skip 

1361 known problems. If a named `DatasetType` corresponds to a 

1362 composite, all components of that `DatasetType` will also be 

1363 ignored. 

1364 

1365 Raises 

1366 ------ 

1367 ButlerValidationError 

1368 Raised if there is some inconsistency with how this Butler 

1369 is configured. 

1370 """ 

1371 if datasetTypeNames: 

1372 entities = [self.registry.getDatasetType(name) for name in datasetTypeNames] 

1373 else: 

1374 entities = list(self.registry.queryDatasetTypes()) 

1375 

1376 # filter out anything from the ignore list 

1377 if ignore: 

1378 ignore = set(ignore) 

1379 entities = [e for e in entities if e.name not in ignore and e.nameAndComponent()[0] not in ignore] 

1380 else: 

1381 ignore = set() 

1382 

1383 # Find all the registered instruments 

1384 instruments = set( 

1385 dataId["instrument"] for dataId in self.registry.queryDimensions(["instrument"]) 

1386 ) 

1387 

1388 # For each datasetType that has an instrument dimension, create 

1389 # a DatasetRef for each defined instrument 

1390 datasetRefs = [] 

1391 

1392 for datasetType in entities: 

1393 if "instrument" in datasetType.dimensions: 

1394 for instrument in instruments: 

1395 datasetRef = DatasetRef(datasetType, {"instrument": instrument}, conform=False) 

1396 datasetRefs.append(datasetRef) 

1397 

1398 entities.extend(datasetRefs) 

1399 

1400 datastoreErrorStr = None 

1401 try: 

1402 self.datastore.validateConfiguration(entities, logFailures=logFailures) 

1403 except ValidationError as e: 

1404 datastoreErrorStr = str(e) 

1405 

1406 # Also check that the LookupKeys used by the datastores match 

1407 # registry and storage class definitions 

1408 keys = self.datastore.getLookupKeys() 

1409 

1410 failedNames = set() 

1411 failedDataId = set() 

1412 for key in keys: 

1413 datasetType = None 

1414 if key.name is not None: 

1415 if key.name in ignore: 

1416 continue 

1417 

1418 # skip if specific datasetType names were requested and this 

1419 # name does not match 

1420 if datasetTypeNames and key.name not in datasetTypeNames: 

1421 continue 

1422 

1423 # See if it is a StorageClass or a DatasetType 

1424 if key.name in self.storageClasses: 

1425 pass 

1426 else: 

1427 try: 

1428 self.registry.getDatasetType(key.name) 

1429 except KeyError: 

1430 if logFailures: 

1431 log.fatal("Key '%s' does not correspond to a DatasetType or StorageClass", key) 

1432 failedNames.add(key) 

1433 else: 

1434 # Dimensions are checked for consistency when the Butler 

1435 # is created and rendezvoused with a universe. 

1436 pass 

1437 

1438 # Check that the instrument is a valid instrument 

1439 # Currently only support instrument so check for that 

1440 if key.dataId: 

1441 dataIdKeys = set(key.dataId) 

1442 if set(["instrument"]) != dataIdKeys: 

1443 if logFailures: 

1444 log.fatal("Key '%s' has unsupported DataId override", key) 

1445 failedDataId.add(key) 

1446 elif key.dataId["instrument"] not in instruments: 

1447 if logFailures: 

1448 log.fatal("Key '%s' has unknown instrument", key) 

1449 failedDataId.add(key) 

1450 

1451 messages = [] 

1452 

1453 if datastoreErrorStr: 

1454 messages.append(datastoreErrorStr) 

1455 

1456 for failed, msg in ((failedNames, "Keys without corresponding DatasetType or StorageClass entry: "), 

1457 (failedDataId, "Keys with bad DataId entries: ")): 

1458 if failed: 

1459 msg += ", ".join(str(k) for k in failed) 

1460 messages.append(msg) 

1461 

1462 if messages: 

1463 raise ValidationError(";\n".join(messages)) 

1464 

1465 registry: Registry 

1466 """The object that manages dataset metadata and relationships (`Registry`). 

1467 

1468 Most operations that don't involve reading or writing butler datasets are 

1469 accessible only via `Registry` methods. 

1470 """ 

1471 

1472 datastore: Datastore 

1473 """The object that manages actual dataset storage (`Datastore`). 

1474 

1475 Direct user access to the datastore should rarely be necessary; the primary 

1476 exception is the case where a `Datastore` implementation provides extra 

1477 functionality beyond what the base class defines. 

1478 """ 

1479 

1480 storageClasses: StorageClassFactory 

1481 """An object that maps known storage class names to objects that fully 

1482 describe them (`StorageClassFactory`). 

1483 """ 

1484 

1485 collections: Optional[CollectionSearch] 

1486 """The collections to search and any restrictions on the dataset types to 

1487 search for within them, in order (`CollectionSearch`). 

1488 """ 

1489 

1490 run: Optional[str] 

1491 """Name of the run this butler writes outputs to (`str` or `None`). 

1492 """ 

1493 

1494 tags: Tuple[str, ...] 

1495 """Names of `~CollectionType.TAGGED` collections this butler associates 

1496 with in `put` and `ingest`, and disassociates from in `pruneDatasets` 

1497 (`tuple` [ `str` ]). 

1498 """