def _makeGetSchemaCatalogs(datasetSuffix):
    """Construct a getSchemaCatalogs instance method.

    These are identical for most of the classes here, so we'll consolidate
    the code.

    datasetSuffix: Suffix of dataset name, e.g., "src" for "deepCoadd_src"
    """
    def getSchemaCatalogs(self):
        """Return a dict of empty catalogs for each catalog dataset produced by this task."""
        src = afwTable.SourceCatalog(self.schema)
        if hasattr(self, "algMetadata"):
            src.getTable().setMetadata(self.algMetadata)
        return {self.config.coaddName + "Coadd_" + datasetSuffix: src}
    return getSchemaCatalogs
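The factory above returns a closure over `datasetSuffix`, so each task class can attach its own copy as a bound method. A minimal standalone sketch of this pattern, using a hypothetical `FakeCatalog` and `DemoTask` in place of `afwTable.SourceCatalog` and the real coaddition tasks:

```python
class FakeCatalog:
    """Stand-in for afwTable.SourceCatalog: just records its schema."""
    def __init__(self, schema):
        self.schema = schema

def _makeGetSchemaCatalogs(datasetSuffix):
    # Closure captures datasetSuffix; the returned function becomes an
    # ordinary instance method when assigned in a class body.
    def getSchemaCatalogs(self):
        src = FakeCatalog(self.schema)
        return {self.config.coaddName + "Coadd_" + datasetSuffix: src}
    return getSchemaCatalogs

class DemoConfig:
    coaddName = "deep"

class DemoTask:
    config = DemoConfig()
    schema = "minimal-schema"
    # Assigning the factory result in the class body makes it a method.
    getSchemaCatalogs = _makeGetSchemaCatalogs("src")

task = DemoTask()
catalogs = task.getSchemaCatalogs()
print(sorted(catalogs))  # ['deepCoadd_src']
```

Because the suffix is baked in at class-definition time, several task classes can share the factory while each produces a differently named dataset.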
class CullPeaksConfig(Config):
    """!
    @anchor CullPeaksConfig_

    @brief Configuration for culling garbage peaks after merging footprints.

    Peaks may also be culled after detection or during deblending; this configuration object
    only deals with culling after merging Footprints.

    These cuts are based on three quantities:
     - nBands: the number of bands in which the peak was detected.
     - peakRank: the position of the peak within its family, sorted from brightest to faintest.
     - peakRankNormalized: the peak rank divided by the total number of peaks in the family.

    The formula that identifies peaks to cull is:

      nBands < nBandsSufficient
      AND (rank >= rankSufficient)
      AND (rank >= rankConsidered OR rankNormalized >= rankNormalizedConsidered)

    To disable peak culling, simply set nBandsSufficient=1.
    """
    nBandsSufficient = RangeField(dtype=int, default=2, min=1,
                                  doc="Always keep peaks detected in this many bands")
    rankSufficient = RangeField(dtype=int, default=20, min=1,
                                doc="Always keep this many peaks in each family")
    rankConsidered = RangeField(dtype=int, default=30, min=1,
                                doc=("Keep peaks with less than this rank that also match the "
                                     "rankNormalizedConsidered condition."))
    rankNormalizedConsidered = RangeField(dtype=float, default=0.7, min=0.0,
                                          doc=("Keep peaks with less than this normalized rank that"
                                              " also match the rankConsidered condition."))