Hide keyboard shortcuts

Hot-keys on this page

r m x p   toggle line displays

j k   next/prev highlighted chunk

0   (zero) top of page

1   (one) first highlighted chunk

1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

20

21

22

23

24

25

26

27

28

29

30

31

32

33

34

35

36

37

38

39

40

41

42

43

44

45

46

47

48

49

50

51

52

53

54

55

56

57

58

59

60

61

62

63

64

65

66

67

68

69

70

71

72

73

74

75

76

77

78

79

80

81

82

83

84

85

86

87

88

89

90

91

92

93

94

95

96

97

98

99

100

101

102

103

104

105

106

107

108

109

110

111

112

113

114

115

116

117

118

119

120

121

122

123

124

125

126

127

128

129

130

131

132

133

134

135

136

137

138

139

140

141

142

143

144

145

146

147

148

149

150

151

152

153

154

155

156

157

158

159

160

161

162

163

164

165

166

167

168

169

170

171

172

173

174

175

176

177

178

179

180

181

182

183

184

185

186

187

188

189

190

191

192

193

194

195

196

197

198

199

200

201

202

203

204

205

206

207

208

209

210

211

212

213

214

215

216

217

218

219

220

221

222

223

224

225

226

227

228

229

230

231

232

233

234

235

236

237

238

239

240

241

242

243

244

245

246

247

248

249

250

251

252

253

254

255

256

257

258

259

260

261

262

263

264

265

266

267

268

269

270

271

272

273

274

275

276

277

278

279

280

281

282

283

284

285

286

287

288

289

290

291

292

293

294

295

296

297

298

299

300

301

302

303

304

305

306

307

308

309

310

311

312

313

314

315

316

317

318

319

320

321

322

323

324

325

326

327

328

329

330

331

332

333

334

335

336

337

338

339

340

341

342

343

344

345

346

347

348

349

350

351

352

353

354

355

356

357

358

359

360

361

362

363

364

365

366

367

368

369

370

371

372

373

374

375

376

377

378

379

380

381

382

383

384

385

386

387

388

389

390

391

392

393

394

395

396

397

398

399

400

401

402

403

404

405

406

407

408

409

410

411

412

413

414

415

416

417

418

419

420

421

422

423

424

425

426

427

428

429

430

431

432

433

434

435

436

437

438

439

440

441

442

443

444

445

446

447

448

449

450

451

452

453

454

455

456

457

458

459

460

461

462

463

464

465

466

467

468

469

470

471

472

473

474

475

476

477

478

479

480

481

482

483

484

485

486

487

488

489

490

491

492

493

494

495

496

497

498

499

500

501

502

503

504

505

506

507

508

509

510

511

512

513

514

515

516

517

518

519

520

521

522

523

524

525

526

527

528

529

530

531

532

533

534

535

536

537

538

539

540

541

542

543

544

545

546

547

548

549

550

551

552

553

554

555

556

557

558

559

560

561

562

563

564

565

566

567

568

569

570

571

572

573

574

575

576

577

578

579

580

581

582

583

584

585

586

587

588

589

590

591

592

593

594

595

596

597

598

599

600

601

602

603

604

605

606

607

608

609

610

611

612

613

614

615

616

617

618

619

620

621

622

623

624

625

626

627

628

629

630

631

632

633

634

635

636

637

638

639

640

641

642

643

644

645

646

647

648

649

650

651

652

653

654

655

656

657

658

659

660

661

662

663

664

665

666

667

668

669

670

671

672

673

674

675

676

677

678

679

680

681

682

683

684

685

686

687

688

689

690

691

692

693

694

695

696

697

698

699

700

701

702

703

704

705

706

707

708

709

710

711

712

713

714

715

716

717

718

719

720

721

722

723

724

725

726

727

728

729

730

731

732

733

734

735

736

737

738

739

740

741

742

743

744

745

746

747

748

749

750

751

752

753

754

755

756

757

758

759

760

761

762

763

764

765

766

767

768

769

770

771

772

773

774

775

776

777

778

779

780

781

782

783

784

785

786

787

788

789

790

791

792

793

794

795

796

797

798

799

800

801

802

803

804

805

806

807

808

809

810

811

812

813

814

815

816

817

818

819

820

821

822

823

824

825

826

827

828

829

830

831

832

833

834

835

836

837

838

839

840

841

842

843

844

845

846

847

848

849

850

851

852

853

854

855

856

857

858

859

860

861

862

863

864

865

866

867

868

869

870

871

872

873

874

875

876

877

878

879

880

881

882

883

884

885

886

887

888

889

890

891

892

893

894

895

896

897

898

899

900

901

902

903

904

905

906

907

908

909

910

911

912

913

914

915

916

917

918

919

920

921

922

923

924

925

926

927

928

929

930

931

932

933

934

935

936

937

938

939

940

941

942

943

944

945

946

947

948

949

950

951

952

953

954

955

956

957

958

959

960

961

962

963

964

965

966

967

968

969

970

971

972

973

974

975

976

977

978

979

980

981

982

983

984

985

986

987

988

989

990

991

992

993

994

995

996

997

998

999

1000

1001

1002

1003

1004

1005

1006

1007

1008

1009

1010

1011

1012

1013

1014

1015

1016

1017

1018

1019

1020

1021

1022

1023

1024

1025

1026

1027

1028

1029

1030

1031

1032

1033

1034

1035

1036

1037

1038

1039

1040

1041

1042

1043

1044

1045

1046

1047

1048

1049

1050

1051

1052

1053

1054

1055

1056

1057

1058

1059

1060

1061

1062

1063

1064

1065

1066

1067

1068

1069

1070

1071

1072

1073

1074

1075

1076

1077

1078

1079

1080

1081

1082

1083

1084

1085

1086

1087

1088

1089

1090

1091

1092

1093

1094

1095

1096

1097

1098

1099

1100

1101

1102

1103

1104

1105

1106

1107

1108

1109

1110

1111

1112

1113

1114

1115

1116

1117

1118

1119

1120

1121

1122

1123

1124

1125

1126

1127

1128

1129

1130

1131

1132

1133

1134

1135

1136

1137

1138

1139

1140

1141

1142

1143

1144

1145

1146

1147

1148

1149

1150

1151

1152

1153

1154

1155

1156

1157

1158

1159

1160

1161

1162

1163

1164

1165

1166

1167

1168

1169

1170

1171

1172

1173

1174

1175

1176

1177

1178

1179

1180

1181

1182

1183

1184

1185

1186

1187

1188

1189

1190

1191

1192

1193

1194

1195

1196

1197

1198

1199

1200

1201

1202

1203

1204

1205

1206

1207

1208

1209

1210

1211

1212

1213

1214

1215

1216

1217

1218

1219

1220

1221

1222

1223

1224

1225

1226

1227

1228

1229

1230

1231

1232

1233

1234

1235

1236

1237

1238

1239

1240

1241

1242

1243

1244

1245

1246

1247

1248

1249

1250

1251

1252

1253

1254

1255

1256

1257

1258

1259

1260

1261

1262

1263

1264

1265

1266

1267

1268

1269

1270

1271

1272

1273

1274

1275

1276

1277

1278

1279

1280

1281

1282

1283

1284

1285

1286

1287

1288

1289

1290

1291

1292

1293

1294

1295

1296

1297

1298

1299

1300

1301

1302

1303

1304

1305

1306

1307

1308

1309

1310

1311

1312

1313

1314

1315

1316

1317

1318

1319

1320

1321

1322

1323

1324

1325

1326

1327

1328

1329

1330

1331

1332

1333

1334

1335

1336

1337

1338

1339

1340

1341

1342

1343

1344

1345

1346

1347

1348

1349

1350

1351

1352

1353

1354

1355

1356

1357

1358

1359

1360

1361

1362

1363

1364

1365

1366

1367

1368

1369

1370

1371

1372

1373

1374

1375

1376

1377

1378

1379

1380

1381

1382

1383

1384

1385

1386

1387

1388

1389

1390

1391

1392

1393

1394

1395

1396

1397

1398

1399

1400

1401

1402

1403

1404

1405

1406

1407

1408

1409

1410

1411

1412

1413

1414

1415

1416

1417

1418

1419

1420

1421

1422

1423

1424

1425

1426

1427

1428

1429

1430

1431

1432

1433

1434

1435

1436

1437

1438

1439

1440

1441

1442

1443

1444

1445

1446

1447

1448

1449

1450

1451

1452

1453

1454

1455

1456

1457

1458

1459

1460

1461

1462

1463

1464

1465

1466

1467

1468

1469

1470

1471

1472

1473

1474

1475

1476

1477

1478

1479

1480

1481

1482

1483

1484

1485

1486

1487

1488

1489

1490

1491

1492

1493

1494

1495

1496

1497

1498

1499

1500

1501

1502

1503

1504

1505

1506

1507

1508

1509

1510

1511

1512

1513

1514

1515

1516

1517

1518

1519

1520

1521

1522

1523

1524

1525

1526

1527

1528

1529

1530

1531

1532

1533

1534

1535

1536

1537

1538

1539

1540

1541

1542

1543

1544

1545

1546

1547

1548

1549

1550

1551

1552

1553

1554

1555

1556

1557

1558

1559

1560

1561

1562

1563

1564

1565

1566

1567

1568

1569

1570

1571

1572

1573

1574

1575

1576

1577

1578

1579

1580

1581

1582

1583

1584

1585

1586

1587

1588

1589

1590

1591

1592

1593

1594

1595

1596

1597

1598

1599

1600

1601

1602

1603

1604

1605

1606

1607

1608

1609

1610

1611

1612

1613

1614

1615

1616

1617

1618

1619

1620

1621

1622

1623

1624

1625

1626

1627

1628

1629

1630

1631

1632

1633

1634

1635

1636

1637

1638

1639

1640

1641

1642

1643

1644

1645

1646

1647

1648

1649

1650

1651

1652

1653

1654

1655

1656

1657

1658

1659

1660

1661

1662

1663

1664

1665

1666

1667

1668

1669

1670

1671

1672

1673

1674

1675

1676

1677

1678

1679

1680

1681

1682

1683

1684

1685

1686

1687

1688

1689

1690

1691

1692

1693

1694

1695

1696

1697

1698

1699

1700

1701

1702

1703

1704

1705

1706

1707

1708

1709

1710

1711

1712

1713

1714

1715

1716

1717

1718

1719

1720

1721

1722

1723

1724

1725

1726

1727

1728

1729

1730

1731

1732

1733

1734

1735

1736

1737

1738

1739

1740

1741

1742

1743

1744

1745

#!/usr/bin/env python 

# 

# LSST Data Management System 

# Copyright 2008-2015 AURA/LSST. 

# 

# This product includes software developed by the 

# LSST Project (http://www.lsst.org/). 

# 

# This program is free software: you can redistribute it and/or modify 

# it under the terms of the GNU General Public License as published by 

# the Free Software Foundation, either version 3 of the License, or 

# (at your option) any later version. 

# 

# This program is distributed in the hope that it will be useful, 

# but WITHOUT ANY WARRANTY; without even the implied warranty of 

# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 

# GNU General Public License for more details. 

# 

# You should have received a copy of the LSST License Statement and 

# the GNU General Public License along with this program. If not, 

# see <https://www.lsstcorp.org/LegalNotices/>. 

# 

import numpy 

 

from lsst.coadd.utils.coaddDataIdContainer import ExistingCoaddDataIdContainer 

from lsst.pipe.base import (CmdLineTask, Struct, TaskRunner, ArgumentParser, ButlerInitializedTaskRunner, 

PipelineTask, InitOutputDatasetField, InputDatasetField, OutputDatasetField, 

QuantumConfig) 

from lsst.pex.config import Config, Field, ListField, ConfigurableField, RangeField, ConfigField 

from lsst.meas.algorithms import DynamicDetectionTask, SkyObjectsTask 

from lsst.meas.base import SingleFrameMeasurementTask, ApplyApCorrTask, CatalogCalculationTask 

from lsst.meas.deblender import SourceDeblendTask, MultibandDeblendTask 

from lsst.pipe.tasks.coaddBase import getSkyInfo 

from lsst.pipe.tasks.scaleVariance import ScaleVarianceTask 

from lsst.meas.astrom import DirectMatchTask, denormalizeMatches 

from lsst.pipe.tasks.fakes import BaseFakeSourcesTask 

from lsst.pipe.tasks.setPrimaryFlags import SetPrimaryFlagsTask 

from lsst.pipe.tasks.propagateVisitFlags import PropagateVisitFlagsTask 

import lsst.afw.image as afwImage 

import lsst.afw.table as afwTable 

import lsst.afw.math as afwMath 

import lsst.afw.detection as afwDetect 

from lsst.daf.base import PropertyList 

 

""" 

New dataset types: 

* deepCoadd_det: detections from what used to be processCoadd (tract, patch, filter) 

* deepCoadd_mergeDet: merged detections (tract, patch) 

* deepCoadd_meas: measurements of merged detections (tract, patch, filter) 

* deepCoadd_ref: reference sources (tract, patch) 

All of these have associated *_schema catalogs that require no data ID and hold no records. 

 

In addition, we have a schema-only dataset, which saves the schema for the PeakRecords in 

the mergeDet, meas, and ref dataset Footprints: 

* deepCoadd_peak_schema 

""" 

 

 

def _makeGetSchemaCatalogs(datasetSuffix): 

"""Construct a getSchemaCatalogs instance method 

 

These are identical for most of the classes here, so we'll consolidate 

the code. 

 

datasetSuffix: Suffix of dataset name, e.g., "src" for "deepCoadd_src" 

""" 

 

def getSchemaCatalogs(self): 

"""Return a dict of empty catalogs for each catalog dataset produced by this task.""" 

src = afwTable.SourceCatalog(self.schema) 

if hasattr(self, "algMetadata"): 

src.getTable().setMetadata(self.algMetadata) 

return {self.config.coaddName + "Coadd_" + datasetSuffix: src} 

return getSchemaCatalogs 

 

 

def _makeMakeIdFactory(datasetName): 

"""Construct a makeIdFactory instance method 

 

These are identical for all the classes here, so this consolidates 

the code. 

 

datasetName: Dataset name without the coadd name prefix, e.g., "CoaddId" for "deepCoaddId" 

""" 

 

def makeIdFactory(self, dataRef): 

"""Return an IdFactory for setting the detection identifiers 

 

The actual parameters used in the IdFactory are provided by 

the butler (through the provided data reference. 

""" 

expBits = dataRef.get(self.config.coaddName + datasetName + "_bits") 

expId = int(dataRef.get(self.config.coaddName + datasetName)) 

return afwTable.IdFactory.makeSource(expId, 64 - expBits) 

return makeIdFactory 

 

 

def getShortFilterName(name): 

"""Given a longer, camera-specific filter name (e.g. "HSC-I") return its shorthand name ("i"). 

""" 

# I'm not sure if this is the way this is supposed to be implemented, but it seems to work, 

# and its the only way I could get it to work. 

return afwImage.Filter(name).getFilterProperty().getName() 

 

 

############################################################################################################## 

 

class DetectCoaddSourcesConfig(Config): 

"""! 

@anchor DetectCoaddSourcesConfig_ 

 

@brief Configuration parameters for the DetectCoaddSourcesTask 

""" 

doScaleVariance = Field(dtype=bool, default=True, doc="Scale variance plane using empirical noise?") 

scaleVariance = ConfigurableField(target=ScaleVarianceTask, doc="Variance rescaling") 

detection = ConfigurableField(target=DynamicDetectionTask, doc="Source detection") 

coaddName = Field(dtype=str, default="deep", doc="Name of coadd") 

doInsertFakes = Field(dtype=bool, default=False, 

doc="Run fake sources injection task") 

insertFakes = ConfigurableField(target=BaseFakeSourcesTask, 

doc="Injection of fake sources for testing " 

"purposes (must be retargeted)") 

detectionSchema = InitOutputDatasetField( 

doc="Schema of the detection catalog", 

name="{}Coadd_det_schema", 

storageClass="SourceCatalog", 

) 

exposure = InputDatasetField( 

doc="Exposure on which detections are to be performed", 

name="deepCoadd", 

scalar=True, 

storageClass="Exposure", 

units=("Tract", "Patch", "AbstractFilter", "SkyMap") 

) 

outputBackgrounds = OutputDatasetField( 

doc="Output Backgrounds used in detection", 

name="{}Coadd_calexp_background", 

scalar=True, 

storageClass="Background", 

units=("Tract", "Patch", "AbstractFilter", "SkyMap") 

) 

outputSources = OutputDatasetField( 

doc="Detected sources catalog", 

name="{}Coadd_det", 

scalar=True, 

storageClass="SourceCatalog", 

units=("Tract", "Patch", "AbstractFilter", "SkyMap") 

) 

outputExposure = OutputDatasetField( 

doc="Exposure post detection", 

name="{}Coadd_calexp", 

scalar=True, 

storageClass="Exposure", 

units=("Tract", "Patch", "AbstractFilter", "SkyMap") 

) 

quantum = QuantumConfig( 

units=("Tract", "Patch", "AbstractFilter", "SkyMap") 

) 

 

def setDefaults(self): 

Config.setDefaults(self) 

self.detection.thresholdType = "pixel_stdev" 

self.detection.isotropicGrow = True 

# Coadds are made from background-subtracted CCDs, so any background subtraction should be very basic 

self.detection.reEstimateBackground = False 

self.detection.background.useApprox = False 

self.detection.background.binSize = 4096 

self.detection.background.undersampleStyle = 'REDUCE_INTERP_ORDER' 

self.detection.doTempWideBackground = True # Suppress large footprints that overwhelm the deblender 

 

## @addtogroup LSST_task_documentation 

## @{ 

## @page DetectCoaddSourcesTask 

## @ref DetectCoaddSourcesTask_ "DetectCoaddSourcesTask" 

## @copybrief DetectCoaddSourcesTask 

## @} 

 

 

class DetectCoaddSourcesTask(PipelineTask, CmdLineTask): 

r"""! 

@anchor DetectCoaddSourcesTask_ 

 

@brief Detect sources on a coadd 

 

@section pipe_tasks_multiBand_Contents Contents 

 

- @ref pipe_tasks_multiBand_DetectCoaddSourcesTask_Purpose 

- @ref pipe_tasks_multiBand_DetectCoaddSourcesTask_Initialize 

- @ref pipe_tasks_multiBand_DetectCoaddSourcesTask_Run 

- @ref pipe_tasks_multiBand_DetectCoaddSourcesTask_Config 

- @ref pipe_tasks_multiBand_DetectCoaddSourcesTask_Debug 

- @ref pipe_tasks_multiband_DetectCoaddSourcesTask_Example 

 

@section pipe_tasks_multiBand_DetectCoaddSourcesTask_Purpose Description 

 

Command-line task that detects sources on a coadd of exposures obtained with a single filter. 

 

Coadding individual visits requires each exposure to be warped. This introduces covariance in the noise 

properties across pixels. Before detection, we correct the coadd variance by scaling the variance plane 

in the coadd to match the observed variance. This is an approximate approach -- strictly, we should 

propagate the full covariance matrix -- but it is simple and works well in practice. 

 

After scaling the variance plane, we detect sources and generate footprints by delegating to the @ref 

SourceDetectionTask_ "detection" subtask. 

 

@par Inputs: 

deepCoadd{tract,patch,filter}: ExposureF 

@par Outputs: 

deepCoadd_det{tract,patch,filter}: SourceCatalog (only parent Footprints) 

@n deepCoadd_calexp{tract,patch,filter}: Variance scaled, background-subtracted input 

exposure (ExposureF) 

@n deepCoadd_calexp_background{tract,patch,filter}: BackgroundList 

@par Data Unit: 

tract, patch, filter 

 

DetectCoaddSourcesTask delegates most of its work to the @ref SourceDetectionTask_ "detection" subtask. 

You can retarget this subtask if you wish. 

 

@section pipe_tasks_multiBand_DetectCoaddSourcesTask_Initialize Task initialization 

 

@copydoc \_\_init\_\_ 

 

@section pipe_tasks_multiBand_DetectCoaddSourcesTask_Run Invoking the Task 

 

@copydoc run 

 

@section pipe_tasks_multiBand_DetectCoaddSourcesTask_Config Configuration parameters 

 

See @ref DetectCoaddSourcesConfig_ "DetectSourcesConfig" 

 

@section pipe_tasks_multiBand_DetectCoaddSourcesTask_Debug Debug variables 

 

The @link lsst.pipe.base.cmdLineTask.CmdLineTask command line task@endlink interface supports a 

flag @c -d to import @b debug.py from your @c PYTHONPATH; see @ref baseDebug for more about @b debug.py 

files. 

 

DetectCoaddSourcesTask has no debug variables of its own because it relegates all the work to 

@ref SourceDetectionTask_ "SourceDetectionTask"; see the documetation for 

@ref SourceDetectionTask_ "SourceDetectionTask" for further information. 

 

@section pipe_tasks_multiband_DetectCoaddSourcesTask_Example A complete example 

of using DetectCoaddSourcesTask 

 

DetectCoaddSourcesTask is meant to be run after assembling a coadded image in a given band. The purpose of 

the task is to update the background, detect all sources in a single band and generate a set of parent 

footprints. Subsequent tasks in the multi-band processing procedure will merge sources across bands and, 

eventually, perform forced photometry. Command-line usage of DetectCoaddSourcesTask expects a data 

reference to the coadd to be processed. A list of the available optional arguments can be obtained by 

calling detectCoaddSources.py with the `--help` command line argument: 

@code 

detectCoaddSources.py --help 

@endcode 

 

To demonstrate usage of the DetectCoaddSourcesTask in the larger context of multi-band processing, we 

will process HSC data in the [ci_hsc](https://github.com/lsst/ci_hsc) package. Assuming one has followed 

steps 1 - 4 at @ref pipeTasks_multiBand, one may detect all the sources in each coadd as follows: 

@code 

detectCoaddSources.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-I 

@endcode 

that will process the HSC-I band data. The results are written to 

`$CI_HSC_DIR/DATA/deepCoadd-results/HSC-I`. 

 

It is also necessary to run: 

@code 

detectCoaddSources.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-R 

@endcode 

to generate the sources catalogs for the HSC-R band required by the next step in the multi-band 

processing procedure: @ref MergeDetectionsTask_ "MergeDetectionsTask". 

""" 

_DefaultName = "detectCoaddSources" 

ConfigClass = DetectCoaddSourcesConfig 

getSchemaCatalogs = _makeGetSchemaCatalogs("det") 

makeIdFactory = _makeMakeIdFactory("CoaddId") 

 

@classmethod 

def _makeArgumentParser(cls): 

parser = ArgumentParser(name=cls._DefaultName) 

parser.add_id_argument("--id", "deepCoadd", help="data ID, e.g. --id tract=12345 patch=1,2 filter=r", 

ContainerClass=ExistingCoaddDataIdContainer) 

return parser 

 

@classmethod 

def getOutputDatasetTypes(cls, config): 

coaddName = config.coaddName 

for name in ("outputBackgrounds", "outputSources", "outputExposure"): 

attr = getattr(config, name) 

setattr(attr, "name", attr.name.format(coaddName)) 

outputTypeDict = super().getOutputDatasetTypes(config) 

return outputTypeDict 

 

@classmethod 

def getInitOutputDatasetTypes(cls, config): 

coaddName = config.coaddName 

attr = config.detectionSchema 

setattr(attr, "name", attr.name.format(coaddName)) 

output = super().getInitOutputDatasetTypes(config) 

print(output) 

return output 

 

def __init__(self, schema=None, **kwargs): 

"""! 

@brief Initialize the task. Create the @ref SourceDetectionTask_ "detection" subtask. 

 

Keyword arguments (in addition to those forwarded to CmdLineTask.__init__): 

 

@param[in] schema: initial schema for the output catalog, modified-in place to include all 

fields set by this task. If None, the source minimal schema will be used. 

@param[in] **kwargs: keyword arguments to be passed to lsst.pipe.base.task.Task.__init__ 

""" 

# N.B. Super is used here to handle the multiple inheritance of PipelineTasks, the init tree 

# call structure has been reviewed carefully to be sure super will work as intended. 

super().__init__(**kwargs) 

313 ↛ 315line 313 didn't jump to line 315, because the condition on line 313 was never false if schema is None: 

schema = afwTable.SourceTable.makeMinimalSchema() 

315 ↛ 316line 315 didn't jump to line 316, because the condition on line 315 was never true if self.config.doInsertFakes: 

self.makeSubtask("insertFakes") 

self.schema = schema 

self.makeSubtask("detection", schema=self.schema) 

319 ↛ exitline 319 didn't return from function '__init__', because the condition on line 319 was never false if self.config.doScaleVariance: 

self.makeSubtask("scaleVariance") 

 

def getInitOutputDatasets(self): 

return {"detectionSchema": afwTable.SourceCatalog(self.schema)} 

 

def runDataRef(self, patchRef): 

"""! 

@brief Run detection on a coadd. 

 

Invokes @ref run and then uses @ref write to output the 

results. 

 

@param[in] patchRef: data reference for patch 

""" 

exposure = patchRef.get(self.config.coaddName + "Coadd", immediate=True) 

expId = int(patchRef.get(self.config.coaddName + "CoaddId")) 

results = self.run(exposure, self.makeIdFactory(patchRef), expId=expId) 

self.write(results, patchRef) 

return results 

 

def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds): 

# FINDME: DM-15843 needs to come back and address these next two lines with a final solution 

inputData["idFactory"] = afwTable.IdFactory.makeSimple() 

inputData["expId"] = 0 

return self.run(**inputData) 

 

def run(self, exposure, idFactory, expId): 

"""! 

@brief Run detection on an exposure. 

 

First scale the variance plane to match the observed variance 

using @ref ScaleVarianceTask. Then invoke the @ref SourceDetectionTask_ "detection" subtask to 

detect sources. 

 

@param[in,out] exposure: Exposure on which to detect (may be backround-subtracted and scaled, 

depending on configuration). 

@param[in] idFactory: IdFactory to set source identifiers 

@param[in] expId: Exposure identifier (integer) for RNG seed 

 

@return a pipe.base.Struct with fields 

- sources: catalog of detections 

- backgrounds: list of backgrounds 

""" 

363 ↛ 366line 363 didn't jump to line 366, because the condition on line 363 was never false if self.config.doScaleVariance: 

varScale = self.scaleVariance.run(exposure.maskedImage) 

exposure.getMetadata().add("variance_scale", varScale) 

backgrounds = afwMath.BackgroundList() 

367 ↛ 368line 367 didn't jump to line 368, because the condition on line 367 was never true if self.config.doInsertFakes: 

self.insertFakes.run(exposure, background=backgrounds) 

table = afwTable.SourceTable.make(self.schema, idFactory) 

detections = self.detection.makeSourceCatalog(table, exposure, expId=expId) 

sources = detections.sources 

fpSets = detections.fpSets 

373 ↛ 376line 373 didn't jump to line 376, because the condition on line 373 was never false if hasattr(fpSets, "background") and fpSets.background: 

for bg in fpSets.background: 

backgrounds.append(bg) 

return Struct(outputSources=sources, outputBackgrounds=backgrounds, outputExposure=exposure) 

 

def write(self, results, patchRef): 

"""! 

@brief Write out results from runDetection. 

 

@param[in] exposure: Exposure to write out 

@param[in] results: Struct returned from runDetection 

@param[in] patchRef: data reference for patch 

""" 

coaddName = self.config.coaddName + "Coadd" 

patchRef.put(results.outputBackgrounds, coaddName + "_calexp_background") 

patchRef.put(results.outputSources, coaddName + "_det") 

patchRef.put(results.outputExposure, coaddName + "_calexp") 

 

############################################################################################################## 

 

 

class MergeSourcesRunner(TaskRunner): 

"""Task runner for the `MergeSourcesTask` 

 

Required because the run method requires a list of 

dataRefs rather than a single dataRef. 

""" 

def makeTask(self, parsedCmd=None, args=None): 

"""Provide a butler to the Task constructor. 

 

Parameters 

---------- 

parsedCmd: 

The parsed command 

args: tuple 

Tuple of a list of data references and kwargs (un-used) 

 

Raises 

------ 

RuntimeError 

Thrown if both `parsedCmd` & `args` are `None` 

""" 

if parsedCmd is not None: 

butler = parsedCmd.butler 

elif args is not None: 

dataRefList, kwargs = args 

butler = dataRefList[0].getButler() 

else: 

raise RuntimeError("Neither parsedCmd or args specified") 

return self.TaskClass(config=self.config, log=self.log, butler=butler) 

 

@staticmethod 

def buildRefDict(parsedCmd): 

"""Build a hierarchical dictionary of patch references 

 

Parameters 

---------- 

parsedCmd: 

The parsed command 

 

Returns 

------- 

refDict: dict 

A reference dictionary of the form {patch: {tract: {filter: dataRef}}} 

 

Raises 

------ 

RuntimeError 

Thrown when multiple references are provided for the same 

combination of tract, patch and filter 

""" 

refDict = {} # Will index this as refDict[tract][patch][filter] = ref 

for ref in parsedCmd.id.refList: 

tract = ref.dataId["tract"] 

patch = ref.dataId["patch"] 

filter = ref.dataId["filter"] 

if tract not in refDict: 

refDict[tract] = {} 

if patch not in refDict[tract]: 

refDict[tract][patch] = {} 

if filter in refDict[tract][patch]: 

raise RuntimeError("Multiple versions of %s" % (ref.dataId,)) 

refDict[tract][patch][filter] = ref 

return refDict 

 

@staticmethod 

def getTargetList(parsedCmd, **kwargs): 

"""Provide a list of patch references for each patch, tract, filter combo. 

 

Parameters 

---------- 

parsedCmd: 

The parsed command 

kwargs: 

Keyword arguments passed to the task 

 

Returns 

------- 

targetList: list 

List of tuples, where each tuple is a (dataRef, kwargs) pair. 

""" 

refDict = MergeSourcesRunner.buildRefDict(parsedCmd) 

return [(list(p.values()), kwargs) for t in refDict.values() for p in t.values()] 

 

 

class MergeSourcesConfig(Config): 

"""! 

@anchor MergeSourcesConfig_ 

 

@brief Configuration for merging sources. 

""" 

priorityList = ListField(dtype=str, default=[], 

doc="Priority-ordered list of bands for the merge.") 

coaddName = Field(dtype=str, default="deep", doc="Name of coadd") 

 

def validate(self): 

Config.validate(self) 

if len(self.priorityList) == 0: 

raise RuntimeError("No priority list provided") 

 

 

class MergeSourcesTask(CmdLineTask): 

"""! 

@anchor MergeSourcesTask_ 

 

@brief A base class for merging source catalogs. 

 

Merging detections (MergeDetectionsTask) and merging measurements (MergeMeasurementsTask) are 

so similar that it makes sense to re-use the code, in the form of this abstract base class. 

 

NB: Do not use this class directly. Instead use one of the child classes that inherit from 

MergeSourcesTask such as @ref MergeDetectionsTask_ "MergeDetectionsTask" or @ref MergeMeasurementsTask_ 

"MergeMeasurementsTask" 

 

Sub-classes should set the following class variables: 

* `_DefaultName`: name of Task 

* `inputDataset`: name of dataset to read 

* `outputDataset`: name of dataset to write 

* `getSchemaCatalogs` to the result of `_makeGetSchemaCatalogs(outputDataset)` 

 

In addition, sub-classes must implement the run method. 

""" 

_DefaultName = None 

ConfigClass = MergeSourcesConfig 

RunnerClass = MergeSourcesRunner 

inputDataset = None 

outputDataset = None 

getSchemaCatalogs = None 

 

@classmethod 

def _makeArgumentParser(cls): 

"""! 

@brief Create a suitable ArgumentParser. 

 

We will use the ArgumentParser to get a provide a list of data 

references for patches; the RunnerClass will sort them into lists 

of data references for the same patch 

""" 

parser = ArgumentParser(name=cls._DefaultName) 

parser.add_id_argument("--id", "deepCoadd_" + cls.inputDataset, 

ContainerClass=ExistingCoaddDataIdContainer, 

help="data ID, e.g. --id tract=12345 patch=1,2 filter=g^r^i") 

return parser 

 

def getInputSchema(self, butler=None, schema=None): 

"""! 

@brief Obtain the input schema either directly or froma butler reference. 

 

@param[in] butler butler reference to obtain the input schema from 

@param[in] schema the input schema 

""" 

544 ↛ 548line 544 didn't jump to line 548, because the condition on line 544 was never false if schema is None: 

assert butler is not None, "Neither butler nor schema specified" 

schema = butler.get(self.config.coaddName + "Coadd_" + self.inputDataset + "_schema", 

immediate=True).schema 

return schema 

 

def __init__(self, butler=None, schema=None, **kwargs): 

"""! 

@brief Initialize the task. 

 

Keyword arguments (in addition to those forwarded to CmdLineTask.__init__): 

@param[in] schema the schema of the detection catalogs used as input to this one 

@param[in] butler a butler used to read the input schema from disk, if schema is None 

 

Derived classes should use the getInputSchema() method to handle the additional 

arguments and retreive the actual input schema. 

""" 

CmdLineTask.__init__(self, **kwargs) 

 

def runDataRef(self, patchRefList): 

"""! 

@brief Merge coadd sources from multiple bands. Calls @ref `run` which must be defined in 

subclasses that inherit from MergeSourcesTask. 

 

@param[in] patchRefList list of data references for each filter 

""" 

catalogs = dict(self.readCatalog(patchRef) for patchRef in patchRefList) 

mergedCatalog = self.run(catalogs, patchRefList[0]) 

self.write(patchRefList[0], mergedCatalog) 

 

def readCatalog(self, patchRef): 

"""! 

@brief Read input catalog. 

 

We read the input dataset provided by the 'inputDataset' 

class variable. 

 

@param[in] patchRef data reference for patch 

@return tuple consisting of the filter name and the catalog 

""" 

filterName = patchRef.dataId["filter"] 

catalog = patchRef.get(self.config.coaddName + "Coadd_" + self.inputDataset, immediate=True) 

self.log.info("Read %d sources for filter %s: %s" % (len(catalog), filterName, patchRef.dataId)) 

return filterName, catalog 

 

def run(self, catalogs, patchRef): 

"""! 

@brief Merge multiple catalogs. This function must be defined in all subclasses that inherit from 

MergeSourcesTask. 

 

@param[in] catalogs dict mapping filter name to source catalog 

 

@return merged catalog 

""" 

raise NotImplementedError() 

 

def write(self, patchRef, catalog): 

"""! 

@brief Write the output. 

 

@param[in] patchRef data reference for patch 

@param[in] catalog catalog 

 

We write as the dataset provided by the 'outputDataset' 

class variable. 

""" 

patchRef.put(catalog, self.config.coaddName + "Coadd_" + self.outputDataset) 

# since the filter isn't actually part of the data ID for the dataset we're saving, 

# it's confusing to see it in the log message, even if the butler simply ignores it. 

mergeDataId = patchRef.dataId.copy() 

del mergeDataId["filter"] 

self.log.info("Wrote merged catalog: %s" % (mergeDataId,)) 

 

def writeMetadata(self, dataRefList): 

"""! 

@brief No metadata to write, and not sure how to write it for a list of dataRefs. 

""" 

pass 

 

 

class CullPeaksConfig(Config): 

"""! 

@anchor CullPeaksConfig_ 

 

@brief Configuration for culling garbage peaks after merging footprints. 

 

Peaks may also be culled after detection or during deblending; this configuration object 

only deals with culling after merging Footprints. 

 

These cuts are based on three quantities: 

- nBands: the number of bands in which the peak was detected 

- peakRank: the position of the peak within its family, sorted from brightest to faintest. 

- peakRankNormalized: the peak rank divided by the total number of peaks in the family. 

 

The formula that identifie peaks to cull is: 

 

nBands < nBandsSufficient 

AND (rank >= rankSufficient) 

AND (rank >= rankConsider OR rank >= rankNormalizedConsider) 

 

To disable peak culling, simply set nBandsSufficient=1. 

""" 

 

nBandsSufficient = RangeField(dtype=int, default=2, min=1, 

doc="Always keep peaks detected in this many bands") 

rankSufficient = RangeField(dtype=int, default=20, min=1, 

doc="Always keep this many peaks in each family") 

rankConsidered = RangeField(dtype=int, default=30, min=1, 

doc=("Keep peaks with less than this rank that also match the " 

"rankNormalizedConsidered condition.")) 

rankNormalizedConsidered = RangeField(dtype=float, default=0.7, min=0.0, 

doc=("Keep peaks with less than this normalized rank that" 

" also match the rankConsidered condition.")) 

 

 

class MergeDetectionsConfig(MergeSourcesConfig): 

"""! 

@anchor MergeDetectionsConfig_ 

 

@brief Configuration parameters for the MergeDetectionsTask. 

""" 

minNewPeak = Field(dtype=float, default=1, 

doc="Minimum distance from closest peak to create a new one (in arcsec).") 

 

maxSamePeak = Field(dtype=float, default=0.3, 

doc="When adding new catalogs to the merge, all peaks less than this distance " 

" (in arcsec) to an existing peak will be flagged as detected in that catalog.") 

cullPeaks = ConfigField(dtype=CullPeaksConfig, doc="Configuration for how to cull peaks.") 

 

skyFilterName = Field(dtype=str, default="sky", 

doc="Name of `filter' used to label sky objects (e.g. flag merge_peak_sky is set)\n" 

"(N.b. should be in MergeMeasurementsConfig.pseudoFilterList)") 

skyObjects = ConfigurableField(target=SkyObjectsTask, doc="Generate sky objects") 

 

def setDefaults(self): 

MergeSourcesConfig.setDefaults(self) 

self.skyObjects.avoidMask = ["DETECTED"] # Nothing else is available in our custom mask 

 

 

## @addtogroup LSST_task_documentation 

## @{ 

## @page MergeDetectionsTask 

## @ref MergeDetectionsTask_ "MergeDetectionsTask" 

## @copybrief MergeDetectionsTask 

## @} 

 

 

class MergeDetectionsTask(MergeSourcesTask): 

r"""! 

@anchor MergeDetectionsTask_ 

 

@brief Merge coadd detections from multiple bands. 

 

@section pipe_tasks_multiBand_Contents Contents 

 

- @ref pipe_tasks_multiBand_MergeDetectionsTask_Purpose 

- @ref pipe_tasks_multiBand_MergeDetectionsTask_Init 

- @ref pipe_tasks_multiBand_MergeDetectionsTask_Run 

- @ref pipe_tasks_multiBand_MergeDetectionsTask_Config 

- @ref pipe_tasks_multiBand_MergeDetectionsTask_Debug 

- @ref pipe_tasks_multiband_MergeDetectionsTask_Example 

 

@section pipe_tasks_multiBand_MergeDetectionsTask_Purpose Description 

 

Command-line task that merges sources detected in coadds of exposures obtained with different filters. 

 

To perform photometry consistently across coadds in multiple filter bands, we create a master catalog of 

sources from all bands by merging the sources (peaks & footprints) detected in each coadd, while keeping 

track of which band each source originates in. 

 

The catalog merge is performed by @ref getMergedSourceCatalog. Spurious peaks detected around bright 

objects are culled as described in @ref CullPeaksConfig_. 

 

@par Inputs: 

deepCoadd_det{tract,patch,filter}: SourceCatalog (only parent Footprints) 

@par Outputs: 

deepCoadd_mergeDet{tract,patch}: SourceCatalog (only parent Footprints) 

@par Data Unit: 

tract, patch 

 

MergeDetectionsTask subclasses @ref MergeSourcesTask_ "MergeSourcesTask". 

 

@section pipe_tasks_multiBand_MergeDetectionsTask_Init Task initialisation 

 

@copydoc \_\_init\_\_ 

 

@section pipe_tasks_multiBand_MergeDetectionsTask_Run Invoking the Task 

 

@copydoc run 

 

@section pipe_tasks_multiBand_MergeDetectionsTask_Config Configuration parameters 

 

See @ref MergeDetectionsConfig_ 

 

@section pipe_tasks_multiBand_MergeDetectionsTask_Debug Debug variables 

 

The @link lsst.pipe.base.cmdLineTask.CmdLineTask command line task@endlink interface supports a flag @c -d 

to import @b debug.py from your @c PYTHONPATH; see @ref baseDebug for more about @b debug.py files. 

 

MergeDetectionsTask has no debug variables. 

 

@section pipe_tasks_multiband_MergeDetectionsTask_Example A complete example of using MergeDetectionsTask 

 

MergeDetectionsTask is meant to be run after detecting sources in coadds generated for the chosen subset 

of the available bands. 

The purpose of the task is to merge sources (peaks & footprints) detected in the coadds generated from the 

chosen subset of filters. 

Subsequent tasks in the multi-band processing procedure will deblend the generated master list of sources 

and, eventually, perform forced photometry. 

Command-line usage of MergeDetectionsTask expects data references for all the coadds to be processed. 

A list of the available optional arguments can be obtained by calling mergeCoaddDetections.py with the 

`--help` command line argument: 

@code 

mergeCoaddDetections.py --help 

@endcode 

 

To demonstrate usage of the DetectCoaddSourcesTask in the larger context of multi-band processing, we 

will process HSC data in the [ci_hsc](https://github.com/lsst/ci_hsc) package. Assuming one has finished 

step 5 at @ref pipeTasks_multiBand, one may merge the catalogs of sources from each coadd as follows: 

@code 

mergeCoaddDetections.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-I^HSC-R 

@endcode 

This will merge the HSC-I & -R band parent source catalogs and write the results to 

`$CI_HSC_DIR/DATA/deepCoadd-results/merged/0/5,4/mergeDet-0-5,4.fits`. 

 

The next step in the multi-band processing procedure is 

@ref MeasureMergedCoaddSourcesTask_ "MeasureMergedCoaddSourcesTask" 

""" 

ConfigClass = MergeDetectionsConfig 

_DefaultName = "mergeCoaddDetections" 

inputDataset = "det" 

outputDataset = "mergeDet" 

makeIdFactory = _makeMakeIdFactory("MergedCoaddId") 

 

def __init__(self, butler=None, schema=None, **kwargs): 

"""! 

@brief Initialize the merge detections task. 

 

A @ref FootprintMergeList_ "FootprintMergeList" will be used to 

merge the source catalogs. 

 

Additional keyword arguments (forwarded to MergeSourcesTask.__init__): 

@param[in] schema the schema of the detection catalogs used as input to this one 

@param[in] butler a butler used to read the input schema from disk, if schema is None 

@param[in] **kwargs keyword arguments to be passed to MergeSourcesTask.__init__ 

 

The task will set its own self.schema attribute to the schema of the output merged catalog. 

""" 

MergeSourcesTask.__init__(self, butler=butler, schema=schema, **kwargs) 

self.makeSubtask("skyObjects") 

self.schema = self.getInputSchema(butler=butler, schema=schema) 

 

filterNames = [getShortFilterName(name) for name in self.config.priorityList] 

filterNames += [self.config.skyFilterName] 

self.merged = afwDetect.FootprintMergeList(self.schema, filterNames) 

 

def run(self, catalogs, patchRef): 

r"""! 

@brief Merge multiple catalogs. 

 

After ordering the catalogs and filters in priority order, 

@ref getMergedSourceCatalog of the @ref FootprintMergeList_ "FootprintMergeList" created by 

@ref \_\_init\_\_ is used to perform the actual merging. Finally, @ref cullPeaks is used to remove 

garbage peaks detected around bright objects. 

 

@param[in] catalogs 

@param[in] patchRef 

@param[out] mergedList 

""" 

 

# Convert distance to tract coordinate 

skyInfo = getSkyInfo(coaddName=self.config.coaddName, patchRef=patchRef) 

tractWcs = skyInfo.wcs 

peakDistance = self.config.minNewPeak / tractWcs.getPixelScale().asArcseconds() 

samePeakDistance = self.config.maxSamePeak / tractWcs.getPixelScale().asArcseconds() 

 

# Put catalogs, filters in priority order 

orderedCatalogs = [catalogs[band] for band in self.config.priorityList if band in catalogs.keys()] 

orderedBands = [getShortFilterName(band) for band in self.config.priorityList 

if band in catalogs.keys()] 

 

mergedList = self.merged.getMergedSourceCatalog(orderedCatalogs, orderedBands, peakDistance, 

self.schema, self.makeIdFactory(patchRef), 

samePeakDistance) 

 

# 

# Add extra sources that correspond to blank sky 

# 

skySeed = patchRef.get(self.config.coaddName + "MergedCoaddId") 

skySourceFootprints = self.getSkySourceFootprints(mergedList, skyInfo, skySeed) 

834 ↛ 842line 834 didn't jump to line 842, because the condition on line 834 was never false if skySourceFootprints: 

key = mergedList.schema.find("merge_footprint_%s" % self.config.skyFilterName).key 

for foot in skySourceFootprints: 

s = mergedList.addNew() 

s.setFootprint(foot) 

s.set(key, True) 

 

# Sort Peaks from brightest to faintest 

for record in mergedList: 

record.getFootprint().sortPeaks() 

self.log.info("Merged to %d sources" % len(mergedList)) 

# Attempt to remove garbage peaks 

self.cullPeaks(mergedList) 

return mergedList 

 

def cullPeaks(self, catalog): 

"""! 

@brief Attempt to remove garbage peaks (mostly on the outskirts of large blends). 

 

@param[in] catalog Source catalog 

""" 

keys = [item.key for item in self.merged.getPeakSchema().extract("merge_peak_*").values()] 

assert len(keys) > 0, "Error finding flags that associate peaks with their detection bands." 

totalPeaks = 0 

culledPeaks = 0 

for parentSource in catalog: 

# Make a list copy so we can clear the attached PeakCatalog and append the ones we're keeping 

# to it (which is easier than deleting as we iterate). 

keptPeaks = parentSource.getFootprint().getPeaks() 

oldPeaks = list(keptPeaks) 

keptPeaks.clear() 

familySize = len(oldPeaks) 

totalPeaks += familySize 

for rank, peak in enumerate(oldPeaks): 

if ((rank < self.config.cullPeaks.rankSufficient) or 

(sum([peak.get(k) for k in keys]) >= self.config.cullPeaks.nBandsSufficient) or 

(rank < self.config.cullPeaks.rankConsidered and 

rank < self.config.cullPeaks.rankNormalizedConsidered * familySize)): 

keptPeaks.append(peak) 

else: 

culledPeaks += 1 

self.log.info("Culled %d of %d peaks" % (culledPeaks, totalPeaks)) 

 

def getSchemaCatalogs(self): 

"""! 

Return a dict of empty catalogs for each catalog dataset produced by this task. 

 

@param[out] dictionary of empty catalogs 

""" 

mergeDet = afwTable.SourceCatalog(self.schema) 

peak = afwDetect.PeakCatalog(self.merged.getPeakSchema()) 

return {self.config.coaddName + "Coadd_mergeDet": mergeDet, 

self.config.coaddName + "Coadd_peak": peak} 

 

def getSkySourceFootprints(self, mergedList, skyInfo, seed): 

"""! 

@brief Return a list of Footprints of sky objects which don't overlap with anything in mergedList 

 

@param mergedList The merged Footprints from all the input bands 

@param skyInfo A description of the patch 

@param seed Seed for the random number generator 

""" 

mask = afwImage.Mask(skyInfo.patchInfo.getOuterBBox()) 

detected = mask.getPlaneBitMask("DETECTED") 

for s in mergedList: 

s.getFootprint().spans.setMask(mask, detected) 

 

footprints = self.skyObjects.run(mask, seed) 

902 ↛ 903line 902 didn't jump to line 903, because the condition on line 902 was never true if not footprints: 

return footprints 

 

# Need to convert the peak catalog's schema so we can set the "merge_peak_<skyFilterName>" flags 

schema = self.merged.getPeakSchema() 

mergeKey = schema.find("merge_peak_%s" % self.config.skyFilterName).key 

converted = [] 

for oldFoot in footprints: 

assert len(oldFoot.getPeaks()) == 1, "Should be a single peak only" 

peak = oldFoot.getPeaks()[0] 

newFoot = afwDetect.Footprint(oldFoot.spans, schema) 

newFoot.addPeak(peak.getFx(), peak.getFy(), peak.getPeakValue()) 

newFoot.getPeaks()[0].set(mergeKey, True) 

converted.append(newFoot) 

 

return converted 

 

 

class DeblendCoaddSourcesConfig(Config): 

"""DeblendCoaddSourcesConfig 

 

Configuration parameters for the `DeblendCoaddSourcesTask`. 

""" 

singleBandDeblend = ConfigurableField(target=SourceDeblendTask, 

doc="Deblend sources separately in each band") 

multiBandDeblend = ConfigurableField(target=MultibandDeblendTask, 

doc="Deblend sources simultaneously across bands") 

simultaneous = Field(dtype=bool, default=False, doc="Simultaneously deblend all bands?") 

coaddName = Field(dtype=str, default="deep", doc="Name of coadd") 

 

def setDefaults(self): 

Config.setDefaults(self) 

self.singleBandDeblend.propagateAllPeaks = True 

 

 

class DeblendCoaddSourcesRunner(MergeSourcesRunner): 

"""Task runner for the `MergeSourcesTask` 

 

Required because the run method requires a list of 

dataRefs rather than a single dataRef. 

""" 

@staticmethod 

def getTargetList(parsedCmd, **kwargs): 

"""Provide a list of patch references for each patch, tract, filter combo. 

 

Parameters 

---------- 

parsedCmd: 

The parsed command 

kwargs: 

Keyword arguments passed to the task 

 

Returns 

------- 

targetList: list 

List of tuples, where each tuple is a (dataRef, kwargs) pair. 

""" 

refDict = MergeSourcesRunner.buildRefDict(parsedCmd) 

kwargs["psfCache"] = parsedCmd.psfCache 

return [(list(p.values()), kwargs) for t in refDict.values() for p in t.values()] 

 

 

class DeblendCoaddSourcesTask(CmdLineTask): 

"""Deblend the sources in a merged catalog 

 

Deblend sources from master catalog in each coadd. 

This can either be done separately in each band using the HSC-SDSS deblender 

(`DeblendCoaddSourcesTask.config.simultaneous==False`) 

or use SCARLET to simultaneously fit the blend in all bands 

(`DeblendCoaddSourcesTask.config.simultaneous==True`). 

The task will set its own `self.schema` atribute to the `Schema` of the 

output deblended catalog. 

This will include all fields from the input `Schema`, as well as additional fields 

from the deblender. 

 

`pipe.tasks.multiband.DeblendCoaddSourcesTask Description 

--------------------------------------------------------- 

` 

 

Parameters 

---------- 

butler: `Butler` 

Butler used to read the input schemas from disk or 

construct the reference catalog loader, if `schema` or `peakSchema` or 

schema: `Schema` 

The schema of the merged detection catalog as an input to this task. 

peakSchema: `Schema` 

The schema of the `PeakRecord`s in the `Footprint`s in the merged detection catalog 

""" 

ConfigClass = DeblendCoaddSourcesConfig 

RunnerClass = DeblendCoaddSourcesRunner 

_DefaultName = "deblendCoaddSources" 

makeIdFactory = _makeMakeIdFactory("MergedCoaddId") 

 

@classmethod 

def _makeArgumentParser(cls): 

parser = ArgumentParser(name=cls._DefaultName) 

parser.add_id_argument("--id", "deepCoadd_calexp", 

help="data ID, e.g. --id tract=12345 patch=1,2 filter=g^r^i", 

ContainerClass=ExistingCoaddDataIdContainer) 

parser.add_argument("--psfCache", type=int, default=100, help="Size of CoaddPsf cache") 

return parser 

 

def __init__(self, butler=None, schema=None, peakSchema=None, **kwargs): 

CmdLineTask.__init__(self, **kwargs) 

1007 ↛ 1010line 1007 didn't jump to line 1010, because the condition on line 1007 was never false if schema is None: 

assert butler is not None, "Neither butler nor schema is defined" 

schema = butler.get(self.config.coaddName + "Coadd_mergeDet_schema", immediate=True).schema 

self.schemaMapper = afwTable.SchemaMapper(schema) 

self.schemaMapper.addMinimalSchema(schema) 

self.schema = self.schemaMapper.getOutputSchema() 

1013 ↛ 1017line 1013 didn't jump to line 1017, because the condition on line 1013 was never false if peakSchema is None: 

assert butler is not None, "Neither butler nor peakSchema is defined" 

peakSchema = butler.get(self.config.coaddName + "Coadd_peak_schema", immediate=True).schema 

 

1017 ↛ 1018line 1017 didn't jump to line 1018, because the condition on line 1017 was never true if self.config.simultaneous: 

self.makeSubtask("multiBandDeblend", schema=self.schema, peakSchema=peakSchema) 

else: 

self.makeSubtask("singleBandDeblend", schema=self.schema, peakSchema=peakSchema) 

 

def getSchemaCatalogs(self): 

"""Return a dict of empty catalogs for each catalog dataset produced by this task. 

 

Returns 

------- 

result: dict 

Dictionary of empty catalogs, with catalog names as keys. 

""" 

catalog = afwTable.SourceCatalog(self.schema) 

return {self.config.coaddName + "Coadd_deblendedFlux": catalog, 

self.config.coaddName + "Coadd_deblendedModel": catalog} 

 

def runDataRef(self, patchRefList, psfCache=100): 

"""Deblend the patch 

 

Deblend each source simultaneously or separately 

(depending on `DeblendCoaddSourcesTask.config.simultaneous`). 

Set `is-primary` and related flags. 

Propagate flags from individual visits. 

Write the deblended sources out. 

 

Parameters 

---------- 

patchRefList: list 

List of data references for each filter 

""" 

1048 ↛ 1050line 1048 didn't jump to line 1050, because the condition on line 1048 was never true if self.config.simultaneous: 

# Use SCARLET to simultaneously deblend across filters 

filters = [] 

exposures = [] 

for patchRef in patchRefList: 

exposure = patchRef.get(self.config.coaddName + "Coadd_calexp", immediate=True) 

filters.append(patchRef.dataId["filter"]) 

exposures.append(exposure) 

# The input sources are the same for all bands, since it is a merged catalog 

sources = self.readSources(patchRef) 

exposure = afwImage.MultibandExposure.fromExposures(filters, exposures) 

fluxCatalogs, templateCatalogs = self.multiBandDeblend.run(exposure, sources) 

for n in range(len(patchRefList)): 

self.write(patchRefList[n], fluxCatalogs[filters[n]], templateCatalogs[filters[n]]) 

else: 

# Use the singeband deblender to deblend each band separately 

for patchRef in patchRefList: 

exposure = patchRef.get(self.config.coaddName + "Coadd_calexp", immediate=True) 

exposure.getPsf().setCacheCapacity(psfCache) 

sources = self.readSources(patchRef) 

self.singleBandDeblend.run(exposure, sources) 

self.write(patchRef, sources) 

 

def readSources(self, dataRef): 

"""Read merged catalog 

 

Read the catalog of merged detections and create a catalog 

in a single band. 

 

Parameters 

---------- 

dataRef: data reference 

Data reference for catalog of merged detections 

 

Returns 

------- 

sources: `SourceCatalog` 

List of sources in merged catalog 

 

We also need to add columns to hold the measurements we're about to make 

so we can measure in-place. 

""" 

merged = dataRef.get(self.config.coaddName + "Coadd_mergeDet", immediate=True) 

self.log.info("Read %d detections: %s" % (len(merged), dataRef.dataId)) 

idFactory = self.makeIdFactory(dataRef) 

for s in merged: 

idFactory.notify(s.getId()) 

table = afwTable.SourceTable.make(self.schema, idFactory) 

sources = afwTable.SourceCatalog(table) 

sources.extend(merged, self.schemaMapper) 

return sources 

 

def write(self, dataRef, flux_sources, template_sources=None): 

"""Write the source catalog(s) 

 

Parameters 

---------- 

dataRef: Data Reference 

Reference to the output catalog. 

flux_sources: `SourceCatalog` 

Flux conserved sources to write to file. 

If using the single band deblender, this is the catalog 

generated. 

template_sources: `SourceCatalog` 

Source catalog using the multiband template models 

as footprints. 

""" 

# The multiband deblender does not have to conserve flux, 

# so only write the flux conserved catalog if it exists 

1117 ↛ 1123line 1117 didn't jump to line 1123, because the condition on line 1117 was never false if flux_sources is not None: 

assert not self.config.simultaneous or self.config.multiBandDeblend.conserveFlux 

dataRef.put(flux_sources, self.config.coaddName + "Coadd_deblendedFlux") 

# Only the multiband deblender has the option to output the 

# template model catalog, which can optionally be used 

# in MeasureMergedCoaddSources 

1123 ↛ 1124line 1123 didn't jump to line 1124, because the condition on line 1123 was never true if template_sources is not None: 

assert self.config.multiBandDeblend.saveTemplates 

dataRef.put(template_sources, self.config.coaddName + "Coadd_deblendedModel") 

self.log.info("Wrote %d sources: %s" % (len(flux_sources), dataRef.dataId)) 

 

def writeMetadata(self, dataRefList): 

"""Write the metadata produced from processing the data. 

Parameters 

---------- 

dataRefList 

List of Butler data references used to write the metadata. 

The metadata is written to dataset type `CmdLineTask._getMetadataName`. 

""" 

for dataRef in dataRefList: 

try: 

metadataName = self._getMetadataName() 

if metadataName is not None: 

dataRef.put(self.getFullMetadata(), metadataName) 

except Exception as e: 

self.log.warn("Could not persist metadata for dataId=%s: %s", dataRef.dataId, e) 

 

def getExposureId(self, dataRef): 

"""Get the ExposureId from a data reference 

""" 

return int(dataRef.get(self.config.coaddName + "CoaddId")) 

 

 

class MeasureMergedCoaddSourcesConfig(Config): 

"""! 

@anchor MeasureMergedCoaddSourcesConfig_ 

 

@brief Configuration parameters for the MeasureMergedCoaddSourcesTask 

""" 

inputCatalog = Field(dtype=str, default="deblendedFlux", 

doc=("Name of the input catalog to use." 

"If the single band deblender was used this should be 'deblendedFlux." 

"If the multi-band deblender was used this should be 'deblendedModel." 

"If no deblending was performed this should be 'mergeDet'")) 

measurement = ConfigurableField(target=SingleFrameMeasurementTask, doc="Source measurement") 

setPrimaryFlags = ConfigurableField(target=SetPrimaryFlagsTask, doc="Set flags for primary tract/patch") 

doPropagateFlags = Field( 

dtype=bool, default=True, 

doc="Whether to match sources to CCD catalogs to propagate flags (to e.g. identify PSF stars)" 

) 

propagateFlags = ConfigurableField(target=PropagateVisitFlagsTask, doc="Propagate visit flags to coadd") 

doMatchSources = Field(dtype=bool, default=True, doc="Match sources to reference catalog?") 

match = ConfigurableField(target=DirectMatchTask, doc="Matching to reference catalog") 

doWriteMatchesDenormalized = Field( 

dtype=bool, 

default=False, 

doc=("Write reference matches in denormalized format? " 

"This format uses more disk space, but is more convenient to read."), 

) 

coaddName = Field(dtype=str, default="deep", doc="Name of coadd") 

checkUnitsParseStrict = Field( 

doc="Strictness of Astropy unit compatibility check, can be 'raise', 'warn' or 'silent'", 

dtype=str, 

default="raise", 

) 

doApCorr = Field( 

dtype=bool, 

default=True, 

doc="Apply aperture corrections" 

) 

applyApCorr = ConfigurableField( 

target=ApplyApCorrTask, 

doc="Subtask to apply aperture corrections" 

) 

doRunCatalogCalculation = Field( 

dtype=bool, 

default=True, 

doc='Run catalogCalculation task' 

) 

catalogCalculation = ConfigurableField( 

target=CatalogCalculationTask, 

doc="Subtask to run catalogCalculation plugins on catalog" 

) 

 

def setDefaults(self): 

Config.setDefaults(self) 

self.measurement.plugins.names |= ['base_InputCount', 'base_Variance'] 

self.measurement.plugins['base_PixelFlags'].masksFpAnywhere = ['CLIPPED', 'SENSOR_EDGE', 

'INEXACT_PSF'] 

self.measurement.plugins['base_PixelFlags'].masksFpCenter = ['CLIPPED', 'SENSOR_EDGE', 

'INEXACT_PSF'] 

 

## @addtogroup LSST_task_documentation 

## @{ 

## @page MeasureMergedCoaddSourcesTask 

## @ref MeasureMergedCoaddSourcesTask_ "MeasureMergedCoaddSourcesTask" 

## @copybrief MeasureMergedCoaddSourcesTask 

## @} 

 

 

class MeasureMergedCoaddSourcesRunner(ButlerInitializedTaskRunner): 

"""Get the psfCache setting into MeasureMergedCoaddSourcesTask""" 

@staticmethod 

def getTargetList(parsedCmd, **kwargs): 

return ButlerInitializedTaskRunner.getTargetList(parsedCmd, psfCache=parsedCmd.psfCache) 

 

 

class MeasureMergedCoaddSourcesTask(CmdLineTask): 

r"""! 

@anchor MeasureMergedCoaddSourcesTask_ 

 

@brief Deblend sources from master catalog in each coadd seperately and measure. 

 

@section pipe_tasks_multiBand_Contents Contents 

 

- @ref pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Purpose 

- @ref pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Initialize 

- @ref pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Run 

- @ref pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Config 

- @ref pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Debug 

- @ref pipe_tasks_multiband_MeasureMergedCoaddSourcesTask_Example 

 

@section pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Purpose Description 

 

Command-line task that uses peaks and footprints from a master catalog to perform deblending and 

measurement in each coadd. 

 

Given a master input catalog of sources (peaks and footprints) or deblender outputs 

(including a HeavyFootprint in each band), measure each source on the 

coadd. Repeating this procedure with the same master catalog across multiple coadds will generate a 

consistent set of child sources. 

 

The deblender retains all peaks and deblends any missing peaks (dropouts in that band) as PSFs. Source 

properties are measured and the @c is-primary flag (indicating sources with no children) is set. Visit 

flags are propagated to the coadd sources. 

 

Optionally, we can match the coadd sources to an external reference catalog. 

 

@par Inputs: 

deepCoadd_mergeDet{tract,patch} or deepCoadd_deblend{tract,patch}: SourceCatalog 

@n deepCoadd_calexp{tract,patch,filter}: ExposureF 

@par Outputs: 

deepCoadd_meas{tract,patch,filter}: SourceCatalog 

@par Data Unit: 

tract, patch, filter 

 

MeasureMergedCoaddSourcesTask delegates most of its work to a set of sub-tasks: 

 

<DL> 

<DT> @ref SingleFrameMeasurementTask_ "measurement" 

<DD> Measure source properties of deblended sources.</DD> 

<DT> @ref SetPrimaryFlagsTask_ "setPrimaryFlags" 

<DD> Set flag 'is-primary' as well as related flags on sources. 'is-primary' is set for sources that are 

not at the edge of the field and that have either not been deblended or are the children of deblended 

sources</DD> 

<DT> @ref PropagateVisitFlagsTask_ "propagateFlags" 

<DD> Propagate flags set in individual visits to the coadd.</DD> 

<DT> @ref DirectMatchTask_ "match" 

<DD> Match input sources to a reference catalog (optional). 

</DD> 

</DL> 

These subtasks may be retargeted as required. 

 

@section pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Initialize Task initialization 

 

@copydoc \_\_init\_\_ 

 

@section pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Run Invoking the Task 

 

@copydoc run 

 

@section pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Config Configuration parameters 

 

See @ref MeasureMergedCoaddSourcesConfig_ 

 

@section pipe_tasks_multiBand_MeasureMergedCoaddSourcesTask_Debug Debug variables 

 

The @link lsst.pipe.base.cmdLineTask.CmdLineTask command line task@endlink interface supports a 

flag @c -d to import @b debug.py from your @c PYTHONPATH; see @ref baseDebug for more about @b debug.py 

files. 

 

MeasureMergedCoaddSourcesTask has no debug variables of its own because it delegates all the work to 

the various sub-tasks. See the documetation for individual sub-tasks for more information. 

 

@section pipe_tasks_multiband_MeasureMergedCoaddSourcesTask_Example A complete example of using 

MeasureMergedCoaddSourcesTask 

 

After MeasureMergedCoaddSourcesTask has been run on multiple coadds, we have a set of per-band catalogs. 

The next stage in the multi-band processing procedure will merge these measurements into a suitable 

catalog for driving forced photometry. 

 

Command-line usage of MeasureMergedCoaddSourcesTask expects a data reference to the coadds 

to be processed. 

A list of the available optional arguments can be obtained by calling measureCoaddSources.py with the 

`--help` command line argument: 

@code 

measureCoaddSources.py --help 

@endcode 

 

To demonstrate usage of the DetectCoaddSourcesTask in the larger context of multi-band processing, we 

will process HSC data in the [ci_hsc](https://github.com/lsst/ci_hsc) package. Assuming one has finished 

step 6 at @ref pipeTasks_multiBand, one may perform deblending and measure sources in the HSC-I band 

coadd as follows: 

@code 

measureCoaddSources.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-I 

@endcode 

This will process the HSC-I band data. The results are written in 

`$CI_HSC_DIR/DATA/deepCoadd-results/HSC-I/0/5,4/meas-HSC-I-0-5,4.fits 

 

It is also necessary to run 

@code 

measureCoaddSources.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-R 

@endcode 

to generate the sources catalogs for the HSC-R band required by the next step in the multi-band 

procedure: @ref MergeMeasurementsTask_ "MergeMeasurementsTask". 

""" 

_DefaultName = "measureCoaddSources" 

ConfigClass = MeasureMergedCoaddSourcesConfig 

RunnerClass = MeasureMergedCoaddSourcesRunner 

getSchemaCatalogs = _makeGetSchemaCatalogs("meas") 

makeIdFactory = _makeMakeIdFactory("MergedCoaddId") # The IDs we already have are of this type 

 

@classmethod 

def _makeArgumentParser(cls): 

parser = ArgumentParser(name=cls._DefaultName) 

parser.add_id_argument("--id", "deepCoadd_calexp", 

help="data ID, e.g. --id tract=12345 patch=1,2 filter=r", 

ContainerClass=ExistingCoaddDataIdContainer) 

parser.add_argument("--psfCache", type=int, default=100, help="Size of CoaddPsf cache") 

return parser 

 

def __init__(self, butler=None, schema=None, peakSchema=None, refObjLoader=None, **kwargs): 

"""! 

@brief Initialize the task. 

 

Keyword arguments (in addition to those forwarded to CmdLineTask.__init__): 

@param[in] schema: the schema of the merged detection catalog used as input to this one 

@param[in] peakSchema: the schema of the PeakRecords in the Footprints in the merged detection catalog 

@param[in] refObjLoader: an instance of LoadReferenceObjectsTasks that supplies an external reference 

catalog. May be None if the loader can be constructed from the butler argument or all steps 

requiring a reference catalog are disabled. 

@param[in] butler: a butler used to read the input schemas from disk or construct the reference 

catalog loader, if schema or peakSchema or refObjLoader is None 

 

The task will set its own self.schema attribute to the schema of the output measurement catalog. 

This will include all fields from the input schema, as well as additional fields for all the 

measurements. 

""" 

CmdLineTask.__init__(self, **kwargs) 

self.deblended = self.config.inputCatalog.startswith("deblended") 

self.inputCatalog = "Coadd_" + self.config.inputCatalog 

1368 ↛ 1371line 1368 didn't jump to line 1371, because the condition on line 1368 was never false if schema is None: 

assert butler is not None, "Neither butler nor schema is defined" 

schema = butler.get(self.config.coaddName + self.inputCatalog + "_schema", immediate=True).schema 

self.schemaMapper = afwTable.SchemaMapper(schema) 

self.schemaMapper.addMinimalSchema(schema) 

self.schema = self.schemaMapper.getOutputSchema() 

self.algMetadata = PropertyList() 

self.makeSubtask("measurement", schema=self.schema, algMetadata=self.algMetadata) 

self.makeSubtask("setPrimaryFlags", schema=self.schema) 

1377 ↛ 1378line 1377 didn't jump to line 1378, because the condition on line 1377 was never true if self.config.doMatchSources: 

if refObjLoader is None: 

assert butler is not None, "Neither butler nor refObjLoader is defined" 

self.makeSubtask("match", butler=butler, refObjLoader=refObjLoader) 

1381 ↛ 1383line 1381 didn't jump to line 1383, because the condition on line 1381 was never false if self.config.doPropagateFlags: 

self.makeSubtask("propagateFlags", schema=self.schema) 

self.schema.checkUnits(parse_strict=self.config.checkUnitsParseStrict) 

1384 ↛ 1386line 1384 didn't jump to line 1386, because the condition on line 1384 was never false if self.config.doApCorr: 

self.makeSubtask("applyApCorr", schema=self.schema) 

1386 ↛ exitline 1386 didn't return from function '__init__', because the condition on line 1386 was never false if self.config.doRunCatalogCalculation: 

self.makeSubtask("catalogCalculation", schema=self.schema) 

 

def runDataRef(self, patchRef, psfCache=100): 

"""! 

@brief Deblend and measure. 

 

@param[in] patchRef: Patch reference. 

 

Set 'is-primary' and related flags. Propagate flags 

from individual visits. Optionally match the sources to a reference catalog and write the matches. 

Finally, write the deblended sources and measurements out. 

""" 

exposure = patchRef.get(self.config.coaddName + "Coadd_calexp", immediate=True) 

exposure.getPsf().setCacheCapacity(psfCache) 

sources = self.readSources(patchRef) 

table = sources.getTable() 

table.setMetadata(self.algMetadata) # Capture algorithm metadata to write out to the source catalog. 

 

self.measurement.run(sources, exposure, exposureId=self.getExposureId(patchRef)) 

 

1407 ↛ 1417line 1407 didn't jump to line 1417, because the condition on line 1407 was never false if self.config.doApCorr: 

self.applyApCorr.run( 

catalog=sources, 

apCorrMap=exposure.getInfo().getApCorrMap() 

) 

 

# TODO DM-11568: this contiguous check-and-copy could go away if we 

# reserve enough space during SourceDetection and/or SourceDeblend. 

# NOTE: sourceSelectors require contiguous catalogs, so ensure 

# contiguity now, so views are preserved from here on. 

1417 ↛ 1418line 1417 didn't jump to line 1418, because the condition on line 1417 was never true if not sources.isContiguous(): 

sources = sources.copy(deep=True) 

 

1420 ↛ 1423line 1420 didn't jump to line 1423, because the condition on line 1420 was never false if self.config.doRunCatalogCalculation: 

self.catalogCalculation.run(sources) 

 

skyInfo = getSkyInfo(coaddName=self.config.coaddName, patchRef=patchRef) 

self.setPrimaryFlags.run(sources, skyInfo.skyMap, skyInfo.tractInfo, skyInfo.patchInfo, 

includeDeblend=self.deblended) 

1426 ↛ 1429line 1426 didn't jump to line 1429, because the condition on line 1426 was never false if self.config.doPropagateFlags: 

self.propagateFlags.run(patchRef.getButler(), sources, self.propagateFlags.getCcdInputs(exposure), 

exposure.getWcs()) 

1429 ↛ 1430line 1429 didn't jump to line 1430, because the condition on line 1429 was never true if self.config.doMatchSources: 

self.writeMatches(patchRef, exposure, sources) 

self.write(patchRef, sources) 

 

def readSources(self, dataRef): 

"""! 

@brief Read input sources. 

 

@param[in] dataRef: Data reference for catalog of merged detections 

@return List of sources in merged catalog 

 

We also need to add columns to hold the measurements we're about to make 

so we can measure in-place. 

""" 

merged = dataRef.get(self.config.coaddName + self.inputCatalog, immediate=True) 

self.log.info("Read %d detections: %s" % (len(merged), dataRef.dataId)) 

idFactory = self.makeIdFactory(dataRef) 

for s in merged: 

idFactory.notify(s.getId()) 

table = afwTable.SourceTable.make(self.schema, idFactory) 

sources = afwTable.SourceCatalog(table) 

sources.extend(merged, self.schemaMapper) 

return sources 

 

def writeMatches(self, dataRef, exposure, sources): 

"""! 

@brief Write matches of the sources to the astrometric reference catalog. 

 

We use the Wcs in the exposure to match sources. 

 

@param[in] dataRef: data reference 

@param[in] exposure: exposure with Wcs 

@param[in] sources: source catalog 

""" 

result = self.match.run(sources, exposure.getInfo().getFilter().getName()) 

if result.matches: 

matches = afwTable.packMatches(result.matches) 

matches.table.setMetadata(result.matchMeta) 

dataRef.put(matches, self.config.coaddName + "Coadd_measMatch") 

if self.config.doWriteMatchesDenormalized: 

denormMatches = denormalizeMatches(result.matches, result.matchMeta) 

dataRef.put(denormMatches, self.config.coaddName + "Coadd_measMatchFull") 

 

def write(self, dataRef, sources): 

"""! 

@brief Write the source catalog. 

 

@param[in] dataRef: data reference 

@param[in] sources: source catalog 

""" 

dataRef.put(sources, self.config.coaddName + "Coadd_meas") 

self.log.info("Wrote %d sources: %s" % (len(sources), dataRef.dataId)) 

 

def getExposureId(self, dataRef): 

return int(dataRef.get(self.config.coaddName + "CoaddId")) 

 

 

class MergeMeasurementsConfig(MergeSourcesConfig): 

"""! 

@anchor MergeMeasurementsConfig_ 

 

@brief Configuration parameters for the MergeMeasurementsTask 

""" 

pseudoFilterList = ListField(dtype=str, default=["sky"], 

doc="Names of filters which may have no associated detection\n" 

"(N.b. should include MergeDetectionsConfig.skyFilterName)") 

snName = Field(dtype=str, default="base_PsfFlux", 

doc="Name of flux measurement for calculating the S/N when choosing the reference band.") 

minSN = Field(dtype=float, default=10., 

doc="If the S/N from the priority band is below this value (and the S/N " 

"is larger than minSNDiff compared to the priority band), use the band with " 

"the largest S/N as the reference band.") 

minSNDiff = Field(dtype=float, default=3., 

doc="If the difference in S/N between another band and the priority band is larger " 

"than this value (and the S/N in the priority band is less than minSN) " 

"use the band with the largest S/N as the reference band") 

flags = ListField(dtype=str, doc="Require that these flags, if available, are not set", 

default=["base_PixelFlags_flag_interpolatedCenter", "base_PsfFlux_flag", 

"ext_photometryKron_KronFlux_flag", "modelfit_CModel_flag", ]) 

 

## @addtogroup LSST_task_documentation 

## @{ 

## @page MergeMeasurementsTask 

## @ref MergeMeasurementsTask_ "MergeMeasurementsTask" 

## @copybrief MergeMeasurementsTask 

## @} 

 

 

class MergeMeasurementsTask(MergeSourcesTask): 

r"""! 

@anchor MergeMeasurementsTask_ 

 

@brief Merge measurements from multiple bands 

 

@section pipe_tasks_multiBand_Contents Contents 

 

- @ref pipe_tasks_multiBand_MergeMeasurementsTask_Purpose 

- @ref pipe_tasks_multiBand_MergeMeasurementsTask_Initialize 

- @ref pipe_tasks_multiBand_MergeMeasurementsTask_Run 

- @ref pipe_tasks_multiBand_MergeMeasurementsTask_Config 

- @ref pipe_tasks_multiBand_MergeMeasurementsTask_Debug 

- @ref pipe_tasks_multiband_MergeMeasurementsTask_Example 

 

@section pipe_tasks_multiBand_MergeMeasurementsTask_Purpose Description 

 

Command-line task that merges measurements from multiple bands. 

 

Combines consistent (i.e. with the same peaks and footprints) catalogs of sources from multiple filter 

bands to construct a unified catalog that is suitable for driving forced photometry. Every source is 

required to have centroid, shape and flux measurements in each band. 

 

@par Inputs: 

deepCoadd_meas{tract,patch,filter}: SourceCatalog 

@par Outputs: 

deepCoadd_ref{tract,patch}: SourceCatalog 

@par Data Unit: 

tract, patch 

 

MergeMeasurementsTask subclasses @ref MergeSourcesTask_ "MergeSourcesTask". 

 

@section pipe_tasks_multiBand_MergeMeasurementsTask_Initialize Task initialization 

 

@copydoc \_\_init\_\_ 

 

@section pipe_tasks_multiBand_MergeMeasurementsTask_Run Invoking the Task 

 

@copydoc run 

 

@section pipe_tasks_multiBand_MergeMeasurementsTask_Config Configuration parameters 

 

See @ref MergeMeasurementsConfig_ 

 

@section pipe_tasks_multiBand_MergeMeasurementsTask_Debug Debug variables 

 

The @link lsst.pipe.base.cmdLineTask.CmdLineTask command line task@endlink interface supports a 

flag @c -d to import @b debug.py from your @c PYTHONPATH; see @ref baseDebug for more about @b debug.py 

files. 

 

MergeMeasurementsTask has no debug variables. 

 

@section pipe_tasks_multiband_MergeMeasurementsTask_Example A complete example 

of using MergeMeasurementsTask 

 

MergeMeasurementsTask is meant to be run after deblending & measuring sources in every band. 

The purpose of the task is to generate a catalog of sources suitable for driving forced photometry in 

coadds and individual exposures. 

Command-line usage of MergeMeasurementsTask expects a data reference to the coadds to be processed. A list 

of the available optional arguments can be obtained by calling mergeCoaddMeasurements.py with the `--help` 

command line argument: 

@code 

mergeCoaddMeasurements.py --help 

@endcode 

 

To demonstrate usage of the DetectCoaddSourcesTask in the larger context of multi-band processing, we 

will process HSC data in the [ci_hsc](https://github.com/lsst/ci_hsc) package. Assuming one has finished 

step 7 at @ref pipeTasks_multiBand, one may merge the catalogs generated after deblending and measuring 

as follows: 

@code 

mergeCoaddMeasurements.py $CI_HSC_DIR/DATA --id patch=5,4 tract=0 filter=HSC-I^HSC-R 

@endcode 

This will merge the HSC-I & HSC-R band catalogs. The results are written in 

`$CI_HSC_DIR/DATA/deepCoadd-results/`. 

""" 

_DefaultName = "mergeCoaddMeasurements" 

ConfigClass = MergeMeasurementsConfig 

inputDataset = "meas" 

outputDataset = "ref" 

getSchemaCatalogs = _makeGetSchemaCatalogs("ref") 

 

def __init__(self, butler=None, schema=None, **kwargs): 

"""! 

Initialize the task. 

 

Additional keyword arguments (forwarded to MergeSourcesTask.__init__): 

@param[in] schema: the schema of the detection catalogs used as input to this one 

@param[in] butler: a butler used to read the input schema from disk, if schema is None 

 

The task will set its own self.schema attribute to the schema of the output merged catalog. 

""" 

MergeSourcesTask.__init__(self, butler=butler, schema=schema, **kwargs) 

inputSchema = self.getInputSchema(butler=butler, schema=schema) 

self.schemaMapper = afwTable.SchemaMapper(inputSchema, True) 

self.schemaMapper.addMinimalSchema(inputSchema, True) 

self.instFluxKey = inputSchema.find(self.config.snName + "_instFlux").getKey() 

self.instFluxErrKey = inputSchema.find(self.config.snName + "_instFluxErr").getKey() 

self.fluxFlagKey = inputSchema.find(self.config.snName + "_flag").getKey() 

 

self.flagKeys = {} 

for band in self.config.priorityList: 

short = getShortFilterName(band) 

outputKey = self.schemaMapper.editOutputSchema().addField( 

"merge_measurement_%s" % short, 

type="Flag", 

doc="Flag field set if the measurements here are from the %s filter" % band 

) 

peakKey = inputSchema.find("merge_peak_%s" % short).key 

footprintKey = inputSchema.find("merge_footprint_%s" % short).key 

self.flagKeys[band] = Struct(peak=peakKey, footprint=footprintKey, output=outputKey) 

self.schema = self.schemaMapper.getOutputSchema() 

 

self.pseudoFilterKeys = [] 

for filt in self.config.pseudoFilterList: 

try: 

self.pseudoFilterKeys.append(self.schema.find("merge_peak_%s" % filt).getKey()) 

except Exception as e: 

self.log.warn("merge_peak is not set for pseudo-filter %s: %s" % (filt, e)) 

 

self.badFlags = {} 

for flag in self.config.flags: 

try: 

self.badFlags[flag] = self.schema.find(flag).getKey() 

except KeyError as exc: 

self.log.warn("Can't find flag %s in schema: %s" % (flag, exc,)) 

 

def run(self, catalogs, patchRef): 

"""! 

Merge measurement catalogs to create a single reference catalog for forced photometry 

 

@param[in] catalogs: the catalogs to be merged 

@param[in] patchRef: patch reference for data 

 

For parent sources, we choose the first band in config.priorityList for which the 

merge_footprint flag for that band is is True. 

 

For child sources, the logic is the same, except that we use the merge_peak flags. 

""" 

# Put catalogs, filters in priority order 

orderedCatalogs = [catalogs[band] for band in self.config.priorityList if band in catalogs.keys()] 

orderedKeys = [self.flagKeys[band] for band in self.config.priorityList if band in catalogs.keys()] 

 

mergedCatalog = afwTable.SourceCatalog(self.schema) 

mergedCatalog.reserve(len(orderedCatalogs[0])) 

 

idKey = orderedCatalogs[0].table.getIdKey() 

1663 ↛ 1664line 1663 didn't jump to line 1664, because the loop on line 1663 never started for catalog in orderedCatalogs[1:]: 

if numpy.any(orderedCatalogs[0].get(idKey) != catalog.get(idKey)): 

raise ValueError("Error in inputs to MergeCoaddMeasurements: source IDs do not match") 

 

# This first zip iterates over all the catalogs simultaneously, yielding a sequence of one 

# record for each band, in priority order. 

for orderedRecords in zip(*orderedCatalogs): 

 

maxSNRecord = None 

maxSNFlagKeys = None 

maxSN = 0. 

priorityRecord = None 

priorityFlagKeys = None 

prioritySN = 0. 

hasPseudoFilter = False 

 

# Now we iterate over those record-band pairs, keeping track of the priority and the 

# largest S/N band. 

for inputRecord, flagKeys in zip(orderedRecords, orderedKeys): 

parent = (inputRecord.getParent() == 0 and inputRecord.get(flagKeys.footprint)) 

child = (inputRecord.getParent() != 0 and inputRecord.get(flagKeys.peak)) 

 

if not (parent or child): 

1686 ↛ 1692line 1686 didn't jump to line 1692, because the loop on line 1686 didn't complete for pseudoFilterKey in self.pseudoFilterKeys: 

1687 ↛ 1686line 1687 didn't jump to line 1686, because the condition on line 1687 was never false if inputRecord.get(pseudoFilterKey): 

hasPseudoFilter = True 

priorityRecord = inputRecord 

priorityFlagKeys = flagKeys 

break 

1692 ↛ 1695line 1692 didn't jump to line 1695, because the condition on line 1692 was never false if hasPseudoFilter: 

break 

 

isBad = any(inputRecord.get(flag) for flag in self.badFlags) 

if isBad or inputRecord.get(self.fluxFlagKey) or inputRecord.get(self.instFluxErrKey) == 0: 

sn = 0. 

else: 

sn = inputRecord.get(self.instFluxKey)/inputRecord.get(self.instFluxErrKey) 

1700 ↛ 1701line 1700 didn't jump to line 1701, because the condition on line 1700 was never true if numpy.isnan(sn) or sn < 0.: 

sn = 0. 

1702 ↛ 1706line 1702 didn't jump to line 1706, because the condition on line 1702 was never false if (parent or child) and priorityRecord is None: 

priorityRecord = inputRecord 

priorityFlagKeys = flagKeys 

prioritySN = sn 

if sn > maxSN: 

maxSNRecord = inputRecord 

maxSNFlagKeys = flagKeys 

maxSN = sn 

 

# If the priority band has a low S/N we would like to choose the band with the highest S/N as 

# the reference band instead. However, we only want to choose the highest S/N band if it is 

# significantly better than the priority band. Therefore, to choose a band other than the 

# priority, we require that the priority S/N is below the minimum threshold and that the 

# difference between the priority and highest S/N is larger than the difference threshold. 

# 

# For pseudo code objects we always choose the first band in the priority list. 

bestRecord = None 

bestFlagKeys = None 

if hasPseudoFilter: 

bestRecord = priorityRecord 

bestFlagKeys = priorityFlagKeys 

1723 ↛ 1725line 1723 didn't jump to line 1725, because the condition on line 1723 was never true elif (prioritySN < self.config.minSN and (maxSN - prioritySN) > self.config.minSNDiff and 

maxSNRecord is not None): 

bestRecord = maxSNRecord 

bestFlagKeys = maxSNFlagKeys 

1727 ↛ 1731line 1727 didn't jump to line 1731, because the condition on line 1727 was never false elif priorityRecord is not None: 

bestRecord = priorityRecord 

bestFlagKeys = priorityFlagKeys 

 

1731 ↛ 1736line 1731 didn't jump to line 1736, because the condition on line 1731 was never false if bestRecord is not None and bestFlagKeys is not None: 

outputRecord = mergedCatalog.addNew() 

outputRecord.assign(bestRecord, self.schemaMapper) 

outputRecord.set(bestFlagKeys.output, True) 

else: # if we didn't find any records 

raise ValueError("Error in inputs to MergeCoaddMeasurements: no valid reference for %s" % 

inputRecord.getId()) 

 

# more checking for sane inputs, since zip silently iterates over the smallest sequence 

for inputCatalog in orderedCatalogs: 

1741 ↛ 1742line 1741 didn't jump to line 1742, because the condition on line 1741 was never true if len(mergedCatalog) != len(inputCatalog): 

raise ValueError("Mismatch between catalog sizes: %s != %s" % 

(len(mergedCatalog), len(orderedCatalogs))) 

 

return mergedCatalog