1""" 

2This module defines methods which implement the astrophysical 

3variability models used by CatSim. InstanceCatalogs apply 

4variability by calling the applyVariability() method in 

5the Variability class. To add a new variability model to 

6this framework, users should define a method which 

7returns the delta magnitude of the variable source 

8in the LSST bands and accepts as arguments: 

9 

10 valid_dexes -- the output of numpy.where() indicating 

11 which astrophysical objects actually depend on the 

12 variability model. 

13 

14 params -- a dict whose keys are the names of parameters 

15 required by the variability model and whose values 

16 are lists of the parameters required for the 

17 variability model for all astrophysical objects 

18 in the CatSim database (even those objects that do 

19 not depend on the model; these objects can have 

20 None in the parameter lists). 

21 

22 expmjd -- the MJD of the observation. This must be 

23 able to accept a float or a numpy array. 

24 

25If expmjd is a float, the variability model should 

26return a 2-D numpy array in which the first index 

27varies over the band and the second index varies 

28over the object, i.e. 

29 

30 out_dmag[0][2] is the delta magnitude of the 2nd object 

31 in the u band 

32 

33 out_dmag[3][15] is the delta magnitude of the 15th object 

34 in the i band. 

35 

36If expmjd is a numpy array, the variability should 

37return a 3-D numpy array in which the first index 

38varies over the band, the second index varies over 

39the object, and the third index varies over the 

40time step, i.e. 

41 

42 out_dmag[0][2][15] is the delta magnitude of the 2nd 

43 object in the u band at the 15th value of expmjd 

44 

45 out_dmag[3][11][2] is the delta magnitude of the 

46 11th object in the i band at the 2nd value of 

47 expmjd 

48 

49The method implementing the variability model should be 

50marked with the decorator @register_method(key) where key 

51is a string uniquely identifying the variability model. 

52applyVariability() will call the variability method 

53by reading in the json-ized dict varParamStr from the 

54CatSim database. varParamStr should look like 

55 

56{'m':method_name, 'p':{'p1': val1, 'p2': val2,...}} 

57 

58method_name is the register_method() key referring 

59to the variabilty model. p1, p2, etc. are the parameters 

60expected by the variability model. 

61""" 

from builtins import range
from builtins import object
import numpy as np
import linecache
import math
import os
import gzip
import copy
import numbers
import multiprocessing
import json
from lsst.utils import getPackageDir
from lsst.sims.catalogs.decorators import register_method, compound
from lsst.sims.photUtils import Sed, BandpassDict
from lsst.sims.utils.CodeUtilities import sims_clean_up
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.interpolate import UnivariateSpline
from scipy.interpolate import interp1d

import time

__all__ = ["Variability", "VariabilityStars", "VariabilityGalaxies",
           "VariabilityAGN", "StellarVariabilityModels",
           "ExtraGalacticVariabilityModels", "MLTflaringMixin",
           "ParametrizedLightCurveMixin",
           "create_variability_cache"]


def create_variability_cache():
    """
    Create a blank variability cache
    """
    cache = {'parallelizable': False,

             '_MLT_LC_NPZ': None,  # this will be loaded from a .npz file
                                   # (.npz files are the result of numpy.savez())

             '_MLT_LC_NPZ_NAME': None,  # the name of the .npz file to be loaded

             '_MLT_LC_TIME_CACHE': {},  # a dict for storing loaded time grids

             '_MLT_LC_DURATION_CACHE': {},  # a dict for storing the simulated length
                                            # of the time grids

             '_MLT_LC_MAX_TIME_CACHE': {},  # a dict for storing the t_max of a light curve

             '_MLT_LC_FLUX_CACHE': {},  # a dict for storing loaded flux grids

             '_PARAMETRIZED_LC_MODELS': {},  # a dict for storing the parametrized light curve models

             '_PARAMETRIZED_MODELS_LOADED': []  # a list of all of the files from which models were loaded
             }

    return cache


_GLOBAL_VARIABILITY_CACHE = create_variability_cache()


class Variability(object):
    """
    Variability class for adding temporal variation to the magnitudes of
    objects in the base catalog.

    This class provides methods that all variability models rely on.
    Actual implementations of variability models will be provided by
    the *VariabilityModels classes.
    """

    _survey_start = 59580.0  # start time of the LSST survey being simulated (MJD)

    variabilityInitialized = False

    def num_variable_obj(self, params):
        """
        Return the total number of objects in the catalog

        Parameters
        ----------
        params is the dict of parameter arrays passed to a variability method

        Returns
        -------
        The number of objects in the catalog
        """
        params_keys = list(params.keys())
        if len(params_keys) == 0:
            return 0

        return len(params[params_keys[0]])

    def initializeVariability(self, doCache=False):
        """
        It will only be called from applyVariability, and only
        if self.variabilityInitialized == False (which this method then
        sets to True)

        @param [in] doCache controls whether or not the code caches calculated
        light curves for future use
        """
        # Docstring is a best approximation of what this method does.
        # This is older code.

        self.variabilityInitialized = True
        # below are variables to cache the light curves of variability models
        self.variabilityLcCache = {}
        self.variabilityCache = doCache
        self.variabilityDataDir = os.environ.get("SIMS_SED_LIBRARY_DIR")
        if self.variabilityDataDir is None:
            # os.environ.get() returns None (it does not raise) when the
            # variable is unset, so test for None explicitly
            raise RuntimeError("sims_sed_library must be setup to compute variability "
                               "because it contains the lightcurves")

    def applyVariability(self, varParams_arr, expmjd=None,
                         variability_cache=None):
        """
        Read in an array/list of varParamStr objects taken from the CatSim
        database. For each varParamStr, call the appropriate variability
        model to calculate magnitude offsets that need to be applied to
        the corresponding astrophysical objects. Return a 2-D numpy
        array of magnitude offsets in which each row is an LSST band
        in ugrizy order and each column is an astrophysical object from
        the CatSim database.

        variability_cache is a cache of data as initialized by the
        create_variability_cache() method (optional; if None, the
        method will just use a global cache)
        """
        t_start = time.time()
        if not hasattr(self, '_total_t_apply_var'):
            self._total_t_apply_var = 0.0

        # construct a registry of all of the variability models
        # available to the InstanceCatalog
        if not hasattr(self, '_methodRegistry'):
            self._methodRegistry = {}
            self._method_name_to_int = {}
            next_int = 0
            for methodname in dir(self):
                method = getattr(self, methodname)
                if hasattr(method, '_registryKey'):
                    if method._registryKey not in self._methodRegistry:
                        self._methodRegistry[method._registryKey] = method
                        self._method_name_to_int[method._registryKey] = next_int
                        next_int += 1

        if not self.variabilityInitialized:
            self.initializeVariability(doCache=True)

        if isinstance(expmjd, numbers.Number) or expmjd is None:
            # A numpy array of magnitude offsets. Each row is
            # an LSST band in ugrizy order. Each column is an
            # astrophysical object from the CatSim database.
            deltaMag = np.zeros((6, len(varParams_arr)))
        else:
            # the last dimension varies over time
            deltaMag = np.zeros((6, len(varParams_arr), len(expmjd)))

        # When the InstanceCatalog calls all of its getters
        # with an empty chunk to check column dependencies,
        # call all of the variability models in the
        # _methodRegistry to make sure that all of the column
        # dependencies of the variability models are detected.
        if len(varParams_arr) == 0:
            for method_name in self._methodRegistry:
                self._methodRegistry[method_name]([], {}, 0)

        # Keep a list of all of the specific variability models
        # that need to be called. There is one entry for each
        # astrophysical object in the CatSim database. We will
        # ultimately run np.where on method_name_arr to determine
        # which objects need to be passed through which
        # variability methods.
        method_name_arr = []

        # also keep an array listing the methods to use
        # by the integers mapped with self._method_name_to_int;
        # this is for faster application of np.where when
        # figuring out which objects go with which method
        method_int_arr = -1*np.ones(len(varParams_arr), dtype=int)

        # Keep a dict keyed on all of the method names in
        # method_name_arr. params[method_name] will be another
        # dict keyed on the names of the parameters required by
        # the method method_name. The values of this dict will
        # be lists of parameter values for all astrophysical
        # objects in the CatSim database. Even objects that
        # do not call method_name will have entries in these
        # lists (they will be set to None).
        params = {}

        for ix, varCmd in enumerate(varParams_arr):
            if str(varCmd) == 'None':
                continue

            varCmd = json.loads(varCmd)

            # find the key associated with the name of
            # the specific variability model to be applied
            if 'varMethodName' in varCmd:
                meth_key = 'varMethodName'
            else:
                meth_key = 'm'

            # find the key associated with the list of
            # parameters to be supplied to the variability
            # model
            if 'pars' in varCmd:
                par_key = 'pars'
            else:
                par_key = 'p'

            # if we have discovered a new variability model
            # that needs to be called, initialize its entries
            # in the params dict
            if varCmd[meth_key] not in method_name_arr:
                params[varCmd[meth_key]] = {}
                for p_name in varCmd[par_key]:
                    params[varCmd[meth_key]][p_name] = [None]*len(varParams_arr)

            method_name_arr.append(varCmd[meth_key])
            if varCmd[meth_key] != 'None':
                try:
                    method_int_arr[ix] = self._method_name_to_int[varCmd[meth_key]]
                except KeyError:
                    raise RuntimeError("Your InstanceCatalog does not contain "
                                       "a variability method corresponding to '%s'"
                                       % varCmd[meth_key])

            for p_name in varCmd[par_key]:
                params[varCmd[meth_key]][p_name][ix] = varCmd[par_key][p_name]

        method_name_arr = np.array(method_name_arr)
        for method_name in params:
            for p_name in params[method_name]:
                params[method_name][p_name] = np.array(params[method_name][p_name])

        # Loop over all of the variability models that need to be called.
        # Call each variability model on the astrophysical objects that
        # require the model. Add the result to deltaMag.
        for method_name in np.unique(method_name_arr):
            if method_name != 'None':

                if expmjd is None:
                    expmjd = self.obs_metadata.mjd.TAI

                deltaMag += self._methodRegistry[method_name](np.where(method_int_arr == self._method_name_to_int[method_name]),
                                                              params[method_name],
                                                              expmjd,
                                                              variability_cache=variability_cache)

        self._total_t_apply_var += time.time() - t_start
        return deltaMag
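The bookkeeping above (filling `method_int_arr` and the per-model `params` lists, then recovering the relevant objects with `np.where`) can be seen in miniature in this self-contained sketch; the file names and parameter values are invented.

```python
import json

import numpy as np

# two objects use 'applyRRly'; one row is 'None' (no variability)
var_params_arr = [
    json.dumps({'m': 'applyRRly', 'p': {'filename': 'lc_a.txt', 'tStartMjd': 59580.0}}),
    'None',
    json.dumps({'m': 'applyRRly', 'p': {'filename': 'lc_b.txt', 'tStartMjd': 59581.0}}),
]

# -1 marks objects with no variability model
method_int_arr = -1*np.ones(len(var_params_arr), dtype=int)
params = {'filename': [None]*len(var_params_arr),
          'tStartMjd': [None]*len(var_params_arr)}

for ix, var_cmd in enumerate(var_params_arr):
    if str(var_cmd) == 'None':
        continue
    var_cmd = json.loads(var_cmd)
    method_int_arr[ix] = 0  # the integer assigned to 'applyRRly'
    for p_name in var_cmd['p']:
        params[p_name][ix] = var_cmd['p'][p_name]

# np.where recovers which objects use the model; non-users keep None
valid_dexes = np.where(method_int_arr == 0)
```

The `(array_of_indices,)` tuple returned by `np.where` is exactly the `valid_dexes` argument that each variability method receives.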

    def applyStdPeriodic(self, valid_dexes, params, keymap, expmjd,
                         inDays=True, interpFactory=None):

        """
        Applies a specified variability method.

        The params for the method are provided in the dict params{}

        The keys for those parameters are in the dict keymap{}

        This is because the syntax used here is not necessarily the syntax
        used in the databases.

        The method will return a 2-D numpy array of magnitude offsets
        (see magoff below).

        @param [in] valid_dexes is the result of numpy.where() indicating
        which astrophysical objects from the CatSim database actually use
        this variability model.

        @param [in] params is a dict of parameters for the variability model.
        The dict is keyed to the names of parameters. The values are arrays
        of parameter values.

        @param [in] keymap is a dict mapping from the parameter naming convention
        used by the database to the parameter naming convention used by the
        variability methods below.

        @param [in] expmjd is the mjd of the observation

        @param [in] inDays controls whether or not the time grid
        of the light curve is renormalized by the period

        @param [in] interpFactory is the method used for interpolating
        the light curve

        @param [out] magoff is a 2D numpy array of magnitude offsets. Each
        row is an LSST band in ugrizy order. Each column is a different
        astrophysical object from the CatSim database.
        """
        if isinstance(expmjd, numbers.Number):
            magoff = np.zeros((6, self.num_variable_obj(params)))
        else:
            magoff = np.zeros((6, self.num_variable_obj(params), len(expmjd)))
        expmjd = np.asarray(expmjd)
        for ix in valid_dexes[0]:
            filename = params[keymap['filename']][ix]
            toff = params[keymap['t0']][ix]

            inPeriod = None
            if 'period' in params:
                inPeriod = params['period'][ix]

            epoch = expmjd - toff
            if filename in self.variabilityLcCache:
                splines = self.variabilityLcCache[filename]['splines']
                period = self.variabilityLcCache[filename]['period']
            else:
                lc = np.loadtxt(os.path.join(self.variabilityDataDir, filename), unpack=True, comments='#')
                if inPeriod is None:
                    dt = lc[0][1] - lc[0][0]
                    period = lc[0][-1] + dt
                else:
                    period = inPeriod

                if inDays:
                    lc[0] /= period

                splines = {}

                if interpFactory is not None:
                    splines['u'] = interpFactory(lc[0], lc[1])
                    splines['g'] = interpFactory(lc[0], lc[2])
                    splines['r'] = interpFactory(lc[0], lc[3])
                    splines['i'] = interpFactory(lc[0], lc[4])
                    splines['z'] = interpFactory(lc[0], lc[5])
                    splines['y'] = interpFactory(lc[0], lc[6])
                else:
                    splines['u'] = interp1d(lc[0], lc[1])
                    splines['g'] = interp1d(lc[0], lc[2])
                    splines['r'] = interp1d(lc[0], lc[3])
                    splines['i'] = interp1d(lc[0], lc[4])
                    splines['z'] = interp1d(lc[0], lc[5])
                    splines['y'] = interp1d(lc[0], lc[6])

                if self.variabilityCache:
                    self.variabilityLcCache[filename] = {'splines': splines, 'period': period}

            phase = epoch/period - epoch//period
            magoff[0][ix] = splines['u'](phase)
            magoff[1][ix] = splines['g'](phase)
            magoff[2][ix] = splines['r'](phase)
            magoff[3][ix] = splines['i'](phase)
            magoff[4][ix] = splines['z'](phase)
            magoff[5][ix] = splines['y'](phase)

        return magoff
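The phase-folding line above, `phase = epoch/period - epoch//period`, reduces an epoch to its fractional phase in [0, 1), including for epochs before the reference time. A small check, independent of CatSim:

```python
import numpy as np

epoch = np.array([0.25, 1.25, 3.75, -0.25])
period = 1.0
# fractional part of epoch/period; negative epochs fold correctly
# because // is floor division (-0.25//1.0 == -1.0)
phase = epoch/period - epoch//period
```

This folded phase is the quantity fed to the per-band splines.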


class StellarVariabilityModels(Variability):
    """
    A mixin providing standard stellar variability models.
    """

    @register_method('applyRRly')
    def applyRRly(self, valid_dexes, params, expmjd,
                  variability_cache=None):

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        keymap = {'filename': 'filename', 't0': 'tStartMjd'}
        return self.applyStdPeriodic(valid_dexes, params, keymap, expmjd,
                                     interpFactory=InterpolatedUnivariateSpline)

    @register_method('applyCepheid')
    def applyCepheid(self, valid_dexes, params, expmjd,
                     variability_cache=None):

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        keymap = {'filename': 'lcfile', 't0': 't0'}
        return self.applyStdPeriodic(valid_dexes, params, keymap, expmjd, inDays=False,
                                     interpFactory=InterpolatedUnivariateSpline)

    @register_method('applyEb')
    def applyEb(self, valid_dexes, params, expmjd,
                variability_cache=None):

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        keymap = {'filename': 'lcfile', 't0': 't0'}
        d_fluxes = self.applyStdPeriodic(valid_dexes, params, keymap, expmjd,
                                         inDays=False,
                                         interpFactory=InterpolatedUnivariateSpline)
        if len(d_fluxes) > 0:
            if d_fluxes.min() < 0.0:
                raise RuntimeError("Negative delta flux in applyEb")
        if isinstance(expmjd, numbers.Number):
            dMags = np.zeros((6, self.num_variable_obj(params)))
        else:
            dMags = np.zeros((6, self.num_variable_obj(params), len(expmjd)))

        with np.errstate(divide='ignore', invalid='ignore'):
            dmag_vals = -2.5*np.log10(d_fluxes)
            dMags += np.where(np.logical_not(np.logical_or(np.isnan(dmag_vals),
                                                           np.isinf(dmag_vals))),
                              dmag_vals, 0.0)
        return dMags

    @register_method('applyMicrolensing')
    def applyMicrolensing(self, valid_dexes, params, expmjd_in,
                          variability_cache=None):
        return self.applyMicrolens(valid_dexes, params, expmjd_in)

    @register_method('applyMicrolens')
    def applyMicrolens(self, valid_dexes, params, expmjd_in,
                       variability_cache=None):
        # I believe this is the correct method based on
        # http://www.physics.fsu.edu/Courses/spring98/AST3033/Micro/lensing.htm
        #
        # 21 October 2014
        # This method assumes that the parameters for microlensing variability
        # are stored in a varParamStr column in the database. Actually, the
        # current microlensing event tables in the database store each
        # variability parameter as its own database column.
        # At some point, either this method or the microlensing tables in the
        # database will need to be changed.

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        expmjd = np.asarray(expmjd_in, dtype=float)
        if isinstance(expmjd_in, numbers.Number):
            dMags = np.zeros((6, self.num_variable_obj(params)))
            epochs = expmjd - params['t0'][valid_dexes].astype(float)
            umin = params['umin'].astype(float)[valid_dexes]
            that = params['that'].astype(float)[valid_dexes]
        else:
            dMags = np.zeros((6, self.num_variable_obj(params), len(expmjd)))
            # cast epochs, umin, that into 2-D numpy arrays; the first index will iterate
            # over objects; the second index will iterate over times in expmjd
            epochs = np.array([expmjd - t0 for t0 in params['t0'][valid_dexes].astype(float)])
            umin = np.array([[uu]*len(expmjd) for uu in params['umin'].astype(float)[valid_dexes]])
            that = np.array([[tt]*len(expmjd) for tt in params['that'].astype(float)[valid_dexes]])

        u = np.sqrt(umin**2 + ((2.0*epochs/that)**2))
        magnification = (u**2 + 2.0)/(u*np.sqrt(u**2 + 4.0))
        dmag = -2.5*np.log10(magnification)
        for ix in range(6):
            dMags[ix][valid_dexes] += dmag
        return dMags
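The two lines computing `u` and `magnification` implement the standard point-lens formula A(u) = (u² + 2)/(u√(u² + 4)), with u(t) growing away from the time of closest approach. Isolated as a function for clarity (the function and argument names here are mine, not CatSim's; `t_e` plays the role of the `that` parameter):

```python
import numpy as np


def microlens_dmag(epoch, umin, t_e):
    # u(t): impact parameter as a function of time since closest approach
    u = np.sqrt(umin**2 + (2.0*epoch/t_e)**2)
    # point-lens magnification A(u); A -> 1 as u -> infinity
    magnification = (u**2 + 2.0)/(u*np.sqrt(u**2 + 4.0))
    # brighter source means a negative magnitude offset
    return -2.5*np.log10(magnification)
```

At the peak (epoch = 0) the source is brightest; far from the event the magnification tends to 1 and the delta magnitude to 0.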

    @register_method('applyAmcvn')
    def applyAmcvn(self, valid_dexes, params, expmjd_in,
                   variability_cache=None):
        # 21 October 2014
        # This method assumes that the parameters for Amcvn variability
        # are stored in a varParamStr column in the database. Actually, the
        # current Amcvn event tables in the database store each
        # variability parameter as its own database column.
        # At some point, either this method or the Amcvn tables in the
        # database will need to be changed.

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        maxyears = 10.
        if isinstance(expmjd_in, numbers.Number):
            dMag = np.zeros((6, self.num_variable_obj(params)))
            amplitude = params['amplitude'].astype(float)[valid_dexes]
            t0_arr = params['t0'].astype(float)[valid_dexes]
            period = params['period'].astype(float)[valid_dexes]
            epoch_arr = expmjd_in
        else:
            dMag = np.zeros((6, self.num_variable_obj(params), len(expmjd_in)))
            n_time = len(expmjd_in)
            t0_arr = np.array([[tt]*n_time for tt in params['t0'].astype(float)[valid_dexes]])
            amplitude = np.array([[aa]*n_time for aa in params['amplitude'].astype(float)[valid_dexes]])
            period = np.array([[pp]*n_time for pp in params['period'].astype(float)[valid_dexes]])
            epoch_arr = np.array([expmjd_in]*len(valid_dexes[0]))

        epoch = expmjd_in

        t0 = params['t0'].astype(float)[valid_dexes]
        burst_freq = params['burst_freq'].astype(float)[valid_dexes]
        burst_scale = params['burst_scale'].astype(float)[valid_dexes]
        amp_burst = params['amp_burst'].astype(float)[valid_dexes]
        color_excess = params['color_excess_during_burst'].astype(float)[valid_dexes]
        does_burst = params['does_burst'][valid_dexes]

        # get the light curve of the typical variability
        uLc = amplitude*np.cos((epoch_arr - t0_arr)/period)
        gLc = copy.deepcopy(uLc)
        rLc = copy.deepcopy(uLc)
        iLc = copy.deepcopy(uLc)
        zLc = copy.deepcopy(uLc)
        yLc = copy.deepcopy(uLc)

        # add in the flux from any bursting
        local_bursting_dexes = np.where(does_burst == 1)
        for i_burst in local_bursting_dexes[0]:
            adds = 0.0
            for o in np.linspace(t0[i_burst] + burst_freq[i_burst],
                                 t0[i_burst] + maxyears*365.25,
                                 np.ceil(maxyears*365.25/burst_freq[i_burst]).astype(np.int64)):
                tmp = np.exp(-1*(epoch - o)/burst_scale[i_burst])/np.exp(-1.)
                adds -= amp_burst[i_burst]*tmp*(tmp < 1.0)  # kill the contribution
            # add some blue excess during the outburst
            uLc[i_burst] += adds + 2.0*color_excess[i_burst]
            gLc[i_burst] += adds + color_excess[i_burst]
            rLc[i_burst] += adds + 0.5*color_excess[i_burst]
            iLc[i_burst] += adds
            zLc[i_burst] += adds
            yLc[i_burst] += adds

        dMag[0][valid_dexes] += uLc
        dMag[1][valid_dexes] += gLc
        dMag[2][valid_dexes] += rLc
        dMag[3][valid_dexes] += iLc
        dMag[4][valid_dexes] += zLc
        dMag[5][valid_dexes] += yLc
        return dMag
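The burst term inside the loop above is an exponential-decay kernel, normalized so that `tmp == 1` exactly one `burst_scale` after the burst time `o`; the `(tmp < 1.0)` mask zeroes the contribution before that point. A stand-alone version of that single term (the function name is mine, for illustration):

```python
import numpy as np


def burst_term(epoch, o, burst_scale, amp_burst):
    # exp(-(t - o)/scale)/exp(-1): equals 1 at t = o + scale and decays
    # afterwards; before that point tmp > 1 and the mask kills the term
    tmp = np.exp(-1.0*(epoch - o)/burst_scale)/np.exp(-1.0)
    # bursts brighten the source, so the magnitude offset is negative
    return -amp_burst*tmp*(tmp < 1.0)
```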

    @register_method('applyBHMicrolens')
    def applyBHMicrolens(self, valid_dexes, params, expmjd_in,
                         variability_cache=None):
        # 21 October 2014
        # This method assumes that the parameters for BHMicrolensing variability
        # are stored in a varParamStr column in the database. Actually, the
        # current BHMicrolensing event tables in the database store each
        # variability parameter as its own database column.
        # At some point, either this method or the BHMicrolensing tables in the
        # database will need to be changed.

        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        if isinstance(expmjd_in, numbers.Number):
            magoff = np.zeros((6, self.num_variable_obj(params)))
        else:
            magoff = np.zeros((6, self.num_variable_obj(params), len(expmjd_in)))
        expmjd = np.asarray(expmjd_in, dtype=float)
        filename_arr = params['filename']
        toff_arr = params['t0'].astype(float)
        for ix in valid_dexes[0]:
            toff = toff_arr[ix]
            filename = filename_arr[ix]
            epoch = expmjd - toff
            lc = np.loadtxt(os.path.join(self.variabilityDataDir, filename), unpack=True, comments='#')
            dt = lc[0][1] - lc[0][0]
            period = lc[0][-1]
            # BH lightcurves are in years
            lc[0] *= 365.
            minage = lc[0][0]
            maxage = lc[0][-1]
            # I'm assuming that these are all single point sources lensed by a
            # black hole. These also can be used to simulate binary systems.
            # Should be 8kpc away at least.
            magnification = InterpolatedUnivariateSpline(lc[0], lc[1])
            mag_val = magnification(epoch)
            # If we are interpolating out of the light curve's domain, set
            # the magnification equal to 1
            mag_val = np.where(np.isnan(mag_val), 1.0, mag_val)
            # magnitudes use the base-10 logarithm, not the natural log
            moff = -2.5*np.log10(mag_val)
            for ii in range(6):
                magoff[ii][ix] = moff

        return magoff


class MLTflaringMixin(Variability):
    """
    A mixin providing the model for cool dwarf stellar flares.
    """

    # the file wherein light curves for MLT dwarf flares are stored
    _mlt_lc_file = os.path.join(getPackageDir('sims_data'),
                                'catUtilsData', 'mlt_shortened_lc_171012.npz')

    def load_MLT_light_curves(self, mlt_lc_file, variability_cache):
        """
        Load MLT light curves specified by the file mlt_lc_file into
        the variability_cache
        """

        self._mlt_to_int = {}
        self._mlt_to_int['None'] = -1
        self._current_mlt_dex = 0

        if not os.path.exists(mlt_lc_file):
            catutils_scripts = os.path.join(getPackageDir('sims_catUtils'), 'support_scripts')
            raise RuntimeError("The MLT flaring light curve file:\n"
                               + "\n%s\n" % mlt_lc_file
                               + "\ndoes not exist."
                               + "\n\n"
                               + "Go into %s " % catutils_scripts
                               + "and run get_mdwarf_flares.sh "
                               + "to get the data")

        variability_cache['_MLT_LC_NPZ'] = np.load(mlt_lc_file)

        global _GLOBAL_VARIABILITY_CACHE
        if variability_cache is _GLOBAL_VARIABILITY_CACHE:
            sims_clean_up.targets.append(variability_cache['_MLT_LC_NPZ'])

        variability_cache['_MLT_LC_NPZ_NAME'] = mlt_lc_file

        if variability_cache['parallelizable']:
            # a multiprocessing.Manager provides dicts that can be
            # shared across processes
            mgr = multiprocessing.Manager()
            variability_cache['_MLT_LC_TIME_CACHE'] = mgr.dict()
            variability_cache['_MLT_LC_DURATION_CACHE'] = mgr.dict()
            variability_cache['_MLT_LC_MAX_TIME_CACHE'] = mgr.dict()
            variability_cache['_MLT_LC_FLUX_CACHE'] = mgr.dict()
        else:
            variability_cache['_MLT_LC_TIME_CACHE'] = {}
            variability_cache['_MLT_LC_DURATION_CACHE'] = {}
            variability_cache['_MLT_LC_MAX_TIME_CACHE'] = {}
            variability_cache['_MLT_LC_FLUX_CACHE'] = {}
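For reference, the .npz file loaded above behaves like a lazy, dict-like collection of named arrays produced by numpy.savez. A sketch with invented array names (the actual light curve names stored in mlt_shortened_lc_171012.npz are not shown here):

```python
import io

import numpy as np

buf = io.BytesIO()
# numpy.savez stores each keyword argument as a named array
np.savez(buf,
         lc_0_time=np.array([0.0, 1.0, 2.0]),
         lc_0_g=np.array([0.0, 1.0e27, 0.0]))
buf.seek(0)
npz = np.load(buf)  # arrays are read lazily, on access
```

This lazy access is why the caches above store time and flux grids separately: each named array is only decompressed the first time it is requested.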

    def _process_mlt_class(self, lc_name_raw, lc_name_arr, lc_dex_arr, expmjd, params, time_arr, max_time, dt,
                           flux_arr_dict, flux_factor, ebv, mlt_dust_lookup, base_fluxes,
                           base_mags, mag_name_tuple, output_dict, do_mags):

        ss = Sed()

        lc_name = lc_name_raw.replace('.txt', '')

        lc_dex_target = self._mlt_to_int[lc_name]

        use_this_lc = np.where(lc_dex_arr == lc_dex_target)[0]

        if isinstance(expmjd, numbers.Number):
            t_interp = (expmjd + params['t0'][use_this_lc]).astype(float)
        else:
            n_obj = len(use_this_lc)
            n_time = len(expmjd)
            t_interp = np.ones(shape=(n_obj, n_time))*expmjd
            t0_arr = params['t0'][use_this_lc].astype(float)
            for i_obj in range(n_obj):
                t_interp[i_obj, :] += t0_arr[i_obj]

        bad_dexes = np.where(t_interp > max_time)
        while len(bad_dexes[0]) > 0:
            t_interp[bad_dexes] -= dt
            bad_dexes = np.where(t_interp > max_time)

        local_output_dict = {}
        for i_mag, mag_name in enumerate(mag_name_tuple):
            if mag_name in flux_arr_dict:

                flux_arr = flux_arr_dict[mag_name]

                t_pre_interp = time.time()
                dflux = np.interp(t_interp, time_arr, flux_arr)
                self.t_spent_interp += time.time() - t_pre_interp

                if isinstance(expmjd, numbers.Number):
                    dflux *= flux_factor[use_this_lc]
                else:
                    for i_obj in range(n_obj):
                        dflux[i_obj, :] *= flux_factor[use_this_lc[i_obj]]

                dust_factor = np.interp(ebv[use_this_lc],
                                        mlt_dust_lookup['ebv'],
                                        mlt_dust_lookup[mag_name])

                if not isinstance(expmjd, numbers.Number):
                    for i_obj in range(n_obj):
                        dflux[i_obj, :] *= dust_factor[i_obj]
                else:
                    dflux *= dust_factor

                if do_mags:
                    if isinstance(expmjd, numbers.Number):
                        local_base_fluxes = base_fluxes[mag_name][use_this_lc]
                        local_base_mags = base_mags[mag_name][use_this_lc]
                    else:
                        local_base_fluxes = np.array([base_fluxes[mag_name][use_this_lc]]*n_time).transpose()
                        local_base_mags = np.array([base_mags[mag_name][use_this_lc]]*n_time).transpose()

                    dmag = ss.magFromFlux(local_base_fluxes + dflux) - local_base_mags

                    local_output_dict[i_mag] = dmag
                else:
                    local_output_dict[i_mag] = dflux

        output_dict[lc_name_raw] = {'dex': use_this_lc, 'dmag': local_output_dict}

    @register_method('MLT')
    def applyMLTflaring(self, valid_dexes, params, expmjd,
                        parallax=None, ebv=None, quiescent_mags=None,
                        variability_cache=None, do_mags=True,
                        mag_name_tuple=('u', 'g', 'r', 'i', 'z', 'y')):
        """
        parallax, ebv, and quiescent_mags are optional kwargs for use if you are
        calling this method outside the context of an InstanceCatalog (presumably
        with a numpy array of expmjd)

        parallax is the parallax of your objects in radians

        ebv is the E(B-V) value for your objects

        quiescent_mags is a dict keyed on ('u', 'g', 'r', 'i', 'z', 'y')
        with the quiescent magnitudes of the objects

        do_mags is a boolean; if True, return delta_magnitude;
        if False, return delta_flux

        mag_name_tuple is a tuple indicating which magnitudes should actually
        be simulated
        """
        self.t_spent_interp = 0.0
        t_start = time.time()
        if not hasattr(self, '_total_t_MLT'):
            self._total_t_MLT = 0.0

        if parallax is None:
            parallax = self.column_by_name('parallax')
        if ebv is None:
            ebv = self.column_by_name('ebv')

        if variability_cache is None:
            global _GLOBAL_VARIABILITY_CACHE
            variability_cache = _GLOBAL_VARIABILITY_CACHE

        # this needs to occur before loading the MLT light curve cache,
        # just in case the user wants to override the light curve cache
        # file by hand before generating the catalog
        if len(params) == 0:
            return np.array([[], [], [], [], [], []])

        if quiescent_mags is None:
            quiescent_mags = {}
            for mag_name in ('u', 'g', 'r', 'i', 'z', 'y'):
                if ('lsst_%s' % mag_name in self._actually_calculated_columns or
                        'delta_lsst_%s' % mag_name in self._actually_calculated_columns):

                    quiescent_mags[mag_name] = self.column_by_name('quiescent_lsst_%s' % mag_name)

        if not hasattr(self, 'photParams'):
            raise RuntimeError("To apply MLT dwarf flaring, your "
                               "InstanceCatalog must have a member variable "
                               "photParams which is an instantiation of the "
                               "class PhotometricParameters, which can be "
                               "imported from lsst.sims.photUtils. "
                               "This is so that your InstanceCatalog has "
                               "knowledge of the effective area of the LSST "
                               "mirror.")

        if (variability_cache['_MLT_LC_NPZ'] is None
                or variability_cache['_MLT_LC_NPZ_NAME'] != self._mlt_lc_file
                or variability_cache['_MLT_LC_NPZ'].fid is None):

            self.load_MLT_light_curves(self._mlt_lc_file, variability_cache)

        if not hasattr(self, '_mlt_dust_lookup'):
            # Construct a look-up table to determine the factor
            # by which to multiply the flares' flux to account for
            # dust as a function of E(B-V). Recall that we are
            # modeling all MLT flares as 9000K blackbodies.

            if not hasattr(self, 'lsstBandpassDict'):
                raise RuntimeError('You are asking for MLT dwarf flaring '
                                   'magnitudes in a catalog that has not '
                                   'defined lsstBandpassDict. The MLT '
                                   'flaring magnitudes model does not know '
                                   'how to apply dust extinction to the '
                                   'flares without the member variable '
                                   'lsstBandpassDict being defined.')

            ebv_grid = np.arange(0.0, 7.01, 0.01)
            bb_wavelen = np.arange(200.0, 1500.0, 0.1)
            hc_over_k = 1.4387e7  # nm*K
            temp = 9000.0  # black body temperature in Kelvin
            exp_arg = hc_over_k/(temp*bb_wavelen)
            exp_term = 1.0/(np.exp(exp_arg) - 1.0)
            ln_exp_term = np.log(exp_term)

            # Blackbody f_lambda function;
            # discard normalizing factors; we only care about finding the
            # ratio of fluxes between the case with dust extinction and
            # the case without dust extinction
            log_bb_flambda = -5.0*np.log(bb_wavelen) + ln_exp_term
            bb_flambda = np.exp(log_bb_flambda)
            bb_sed = Sed(wavelen=bb_wavelen, flambda=bb_flambda)

            base_fluxes = self.lsstBandpassDict.fluxListForSed(bb_sed)

            a_x, b_x = bb_sed.setupCCM_ab()
            self._mlt_dust_lookup = {}
            self._mlt_dust_lookup['ebv'] = ebv_grid
            list_of_bp = self.lsstBandpassDict.keys()
            for bp in list_of_bp:
                self._mlt_dust_lookup[bp] = np.zeros(len(ebv_grid))
            for iebv, ebv_val in enumerate(ebv_grid):
                wv, fl = bb_sed.addDust(a_x, b_x,
                                        ebv=ebv_val,
                                        wavelen=bb_wavelen,
                                        flambda=bb_flambda)

                dusty_bb = Sed(wavelen=wv, flambda=fl)
                dusty_fluxes = self.lsstBandpassDict.fluxListForSed(dusty_bb)
                for ibp, bp in enumerate(list_of_bp):
                    self._mlt_dust_lookup[bp][iebv] = dusty_fluxes[ibp]/base_fluxes[ibp]

        # get the distance to each star in parsecs
        _au_to_parsec = 1.0/206265.0
        dd = _au_to_parsec/parallax

        # get the area of the sphere through which the star's energy
        # is radiating to get to us (in cm^2)
        _cm_per_parsec = 3.08576e18
        sphere_area = 4.0*np.pi*np.power(dd*_cm_per_parsec, 2)

        flux_factor = 1.0/sphere_area

880 

881 n_mags = len(mag_name_tuple) 

882 if isinstance(expmjd, numbers.Number): 

883 dMags = np.zeros((n_mags, self.num_variable_obj(params))) 

884 else: 

885 dMags = np.zeros((n_mags, self.num_variable_obj(params), len(expmjd))) 

886 

887 base_fluxes = {} 

888 base_mags = {} 

889 ss = Sed() 

890 for mag_name in mag_name_tuple: 

891 if ('lsst_%s' % mag_name in self._actually_calculated_columns or 

892 'delta_lsst_%s' % mag_name in self._actually_calculated_columns): 

893 

894 mm = quiescent_mags[mag_name] 

895 base_mags[mag_name] = mm 

896 base_fluxes[mag_name] = ss.fluxFromMag(mm) 

897 

898 lc_name_arr = params['lc'].astype(str) 

899 lc_names_unique = np.sort(np.unique(lc_name_arr)) 

900 

901 t_work = 0.0 

902 

903 # load all of the necessary light curves 

904 # t_flux_dict = 0.0 

905 

906 if not hasattr(self, '_mlt_to_int'): 

907 self._mlt_to_int = {} 

908 self._mlt_to_int['None'] = -1 

909 self._current_mlt_dex = 0 

910 

911 for lc_name in lc_names_unique: 

912 if 'None' in lc_name: 

913 continue 

914 

915 if lc_name not in self._mlt_to_int: 

916 self._mlt_to_int[lc_name] = self._current_mlt_dex 

917 self._mlt_to_int[lc_name.replace('.txt','')] = self._current_mlt_dex 

918 self._current_mlt_dex += 1 

919 

920 

921 lc_name = lc_name.replace('.txt', '') 

922 

923 if 'late' in lc_name: 

924 lc_name = lc_name.replace('in', '') 

925 

926 if lc_name not in variability_cache['_MLT_LC_DURATION_CACHE']: 

927 time_arr = variability_cache['_MLT_LC_NPZ']['%s_time' % lc_name] + self._survey_start 

928 variability_cache['_MLT_LC_TIME_CACHE'][lc_name] = time_arr 

929 dt = time_arr.max() - time_arr.min() 

930 variability_cache['_MLT_LC_DURATION_CACHE'][lc_name] = dt 

931 max_time = time_arr.max() 

932 variability_cache['_MLT_LC_MAX_TIME_CACHE'][lc_name] = max_time 

933 

934 # t_before_flux = time.time() 

935 for mag_name in mag_name_tuple: 

936 if ('lsst_%s' % mag_name in self._actually_calculated_columns or 

937 'delta_lsst_%s' % mag_name in self._actually_calculated_columns): 

938 

939 flux_name = '%s_%s' % (lc_name, mag_name) 

940 if flux_name not in variability_cache['_MLT_LC_FLUX_CACHE']: 

941 

942 flux_arr = variability_cache['_MLT_LC_NPZ'][flux_name] 

943 variability_cache['_MLT_LC_FLUX_CACHE'][flux_name] = flux_arr 

944 # t_flux_dict += time.time()-t_before_flux 

945 

946 lc_dex_arr = np.array([self._mlt_to_int[name] for name in lc_name_arr]) 

947 

948 t_set_up = time.time()-t_start 

949 

950 dmag_master_dict = {} 

951 

952 for lc_name_raw in lc_names_unique: 

953 if 'None' in lc_name_raw: 

954 continue 

955 

956 lc_name = lc_name_raw.replace('.txt', '') 

957 

958 # 2017 May 1 

959 # There isn't supposed to be a 'late_inactive' light curve. 

960 # Unfortunately, I (Scott Daniel) assigned 'late_inactive' 

961 # light curves to some of the stars on our database. Rather 

962 # than fix the database table (which will take about a week of 

963 # compute time), I am going to fix the problem here by mapping 

964 # 'late_inactive' into 'late_active'. 

965 if 'late' in lc_name: 

966 lc_name = lc_name.replace('in', '') 

967 

968 time_arr = variability_cache['_MLT_LC_TIME_CACHE'][lc_name] 

969 dt = variability_cache['_MLT_LC_DURATION_CACHE'][lc_name] 

970 max_time = variability_cache['_MLT_LC_MAX_TIME_CACHE'][lc_name] 

971 flux_arr_dict = {} 

972 for mag_name in mag_name_tuple: 

973 if ('lsst_%s' % mag_name in self._actually_calculated_columns or 

974 'delta_lsst_%s' % mag_name in self._actually_calculated_columns): 

975 

976 flux_arr_dict[mag_name] = variability_cache['_MLT_LC_FLUX_CACHE']['%s_%s' % (lc_name, mag_name)] 

977 

978 t_before_work = time.time() 

979 

980 self._process_mlt_class(lc_name_raw, lc_name_arr, lc_dex_arr, expmjd, params, time_arr, max_time, dt, 

981 flux_arr_dict, flux_factor, ebv, self._mlt_dust_lookup, 

982 base_fluxes, base_mags, mag_name_tuple, dmag_master_dict, do_mags) 

983 

984 t_work += time.time() - t_before_work 

985 

986 for lc_name in dmag_master_dict: 

987 for i_mag in dmag_master_dict[lc_name]['dmag']: 

988 dMags[i_mag][dmag_master_dict[lc_name]['dex']] += dmag_master_dict[lc_name]['dmag'][i_mag] 

989 

990 t_mlt = time.time()-t_start 

991 self._total_t_MLT += t_mlt 

992 

993 return dMags 

994 

995 

996class ParametrizedLightCurveMixin(Variability): 

997 """ 

998 This mixin models variability using parametrized functions fit 

999 to light curves. 

1000 

1001 The parametrized light curves should be stored in an ASCII file 

1002 (or a gzipped ASCII file) whose columns are: 

1003 

1004 lc_name -- a string; the original name of the light curve 

1005 

1006 n_t_steps -- an int; the number of time steps in the original light curve 

1007 

1008 t_span -- a float; t_max - t_min from the original light curve 

1009 

1010 n_components -- an int; how many Fourier components were used 

1011 in the parametrization 

1012 

1013 chisquared -- this is a series of n_components columns; the nth 

1014 chisquared column is the chisquared of the parametrization after 

1015 n components (i.e. the 5th chisquared value is the chisquared of 

1016 the parametrized light curve with respect to the original light 

1017 curve if you only use the first 5 Fourier components). This is 

1018 not actually used by this class, but it is expected when parsing 

1019 the parameter file. It mainly exists if one wishes to perform 

1020 a cut in the parametrization (e.g. only keep as many components 

1021 as are needed to reach some threshold in chisquared/n_t_steps). 

1022 

1023 median -- a float; the median flux of the original light curve 

1024 

1025 aa -- a float (see below) 

1026 bb -- a float (see below) 

1027 cc -- a float (see below) 

1028 omega -- a float (see below) 

1029 tau -- a float (see below) 

1030 

1031 There will actually be n_components aa, bb, cc, omega, tau 

1032 columns ordered as 

1033 

1034 aa_0, bb_0, cc_0, omega_0, tau_0, aa_1, bb_1, cc_1, omega_1, tau_1, ... 

1035 

1036 The light curve is parametrized as 

1037 

1038 flux = median + \sum_i { aa_i*cos(omega_i*(t-tau_i)) + 

1039 bb_i*sin(omega_i*(t-tau_i)) + 

1040 cc_i } 

1041 """ 
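
A minimal numerical sketch of the parametrization described in the docstring above (the coefficient values are invented for illustration):

```python
import numpy as np

def parametrized_flux(t, median, aa, bb, cc, omega, tau):
    # flux(t) = median + sum_i [ aa_i*cos(omega_i*(t - tau_i))
    #                          + bb_i*sin(omega_i*(t - tau_i)) + cc_i ]
    arg = omega * (t - tau)
    return median + np.sum(aa * np.cos(arg) + bb * np.sin(arg) + cc)

# one cosine component of amplitude 1 about a median flux of 10
one = np.array([1.0])
zero = np.array([0.0])
flux_at_0 = parametrized_flux(0.0, 10.0, aa=one, bb=zero, cc=zero,
                              omega=np.array([2.0 * np.pi]), tau=zero)
```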

1042 

1043 def load_parametrized_light_curves(self, file_name=None, variability_cache=None): 

1044 """ 

1045 This method will load the parametrized light curve models 

1046 used by the ParametrizedLightCurveMixin and store them in 

1047 a global cache. It is enough to just run this method from 

1048 any instantiation of ParametrizedLightCurveMixin. 

1049 

1050 Parameters 

1051 ---------- 

1052 file_name is the absolute path to the file being loaded. 

1053 If None, it will load the default Kepler-based light curve model. 

1054 """ 

1055 using_global = False 

1056 if variability_cache is None: 

1057 global _GLOBAL_VARIABILITY_CACHE 

1058 variability_cache = _GLOBAL_VARIABILITY_CACHE 

1059 using_global = True 

1060 

1061 if file_name is None: 

1062 sims_data_dir = getPackageDir('sims_data') 

1063 lc_dir = os.path.join(sims_data_dir, 'catUtilsData') 

1064 file_name = os.path.join(lc_dir, 'kplr_lc_params.txt.gz') 

1065 

1066 if file_name in variability_cache['_PARAMETRIZED_MODELS_LOADED']: 

1067 return 

1068 

1069 if len(variability_cache['_PARAMETRIZED_LC_MODELS']) == 0 and using_global: 

1070 sims_clean_up.targets.append(variability_cache['_PARAMETRIZED_LC_MODELS']) 

1071 sims_clean_up.targets.append(variability_cache['_PARAMETRIZED_MODELS_LOADED']) 

1072 

1073 if file_name.endswith('.gz'): 

1074 open_fn = gzip.open 

1075 else: 

1076 open_fn = open 

1077 

1078 if not os.path.exists(file_name): 

1079 if file_name.endswith('kplr_lc_params.txt.gz'): 

1080 download_script = os.path.join(getPackageDir('sims_catUtils'), 'support_scripts', 

1081 'get_kepler_light_curves.sh') 

1082 raise RuntimeError('You have not yet downloaded\n%s\n\n' % file_name 

1083 + 'Try running the script\n%s' % download_script) 

1084 else: 

1085 raise RuntimeError('The file %s does not exist' % file_name) 

1086 

1087 with open_fn(file_name, 'r') as input_file: 

1088 for line in input_file: 

1089 if isinstance(line, bytes): 

1090 line = line.decode("utf-8") 

1091 if line[0] == '#': 

1092 continue 

1093 params = line.strip().split() 

1094 name = params[0] 

1095 tag = int(name.split('_')[0][4:]) 

1096 if tag in variability_cache['_PARAMETRIZED_LC_MODELS']: 

1097 # In case multiple sets of models have been loaded that 

1098 # duplicate identifying integers. 

1099 raise RuntimeError("You are trying to load a light curve with the " 

1100 "identifying tag %d, which has already been " % tag 

1101 + "loaded. Light curve tags must be unique.") 

1102 n_c = int(params[3]) 

1103 median = float(params[4+n_c]) 

1104 local_aa = [] 

1105 local_bb = [] 

1106 local_cc = [] 

1107 local_omega = [] 

1108 local_tau = [] 

1109 

1110 for i_c in range(n_c): 

1111 base_dex = 5+n_c+i_c*5 

1112 local_aa.append(float(params[base_dex])) 

1113 local_bb.append(float(params[base_dex+1])) 

1114 local_cc.append(float(params[base_dex+2])) 

1115 local_omega.append(float(params[base_dex+3])) 

1116 local_tau.append(float(params[base_dex+4])) 

1117 local_aa = np.array(local_aa) 

1118 local_bb = np.array(local_bb) 

1119 local_cc = np.array(local_cc) 

1120 local_omega = np.array(local_omega) 

1121 local_tau = np.array(local_tau) 

1122 

1123 local_params = {} 

1124 local_params['median'] = median 

1125 local_params['a'] = local_aa 

1126 local_params['b'] = local_bb 

1127 local_params['c'] = local_cc 

1128 local_params['omega'] = local_omega 

1129 local_params['tau'] = local_tau 

1130 variability_cache['_PARAMETRIZED_LC_MODELS'][tag] = local_params 

1131 

1132 variability_cache['_PARAMETRIZED_MODELS_LOADED'].append(file_name) 
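
The column layout documented in the class docstring can be exercised against a hand-built parameter line (all values below are invented): with `n_components = n_c`, the median sits at index `4+n_c` and component `i` starts at index `5+n_c+5*i`.

```python
# columns: name, n_t_steps, t_span, n_components,
#          chisquared_0..chisquared_{n-1}, median,
#          then (aa, bb, cc, omega, tau) per component
line = "kplr7_lc 100 90.0 1 0.01 0.5 1.0 2.0 0.0 3.14 0.25"
params = line.strip().split()
tag = int(params[0].split('_')[0][4:])     # 'kplr7' -> 7
n_c = int(params[3])
median = float(params[4 + n_c])
components = []
for i_c in range(n_c):
    base_dex = 5 + n_c + i_c * 5
    components.append(tuple(float(params[base_dex + k]) for k in range(5)))
```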

1133 

1134 def _calc_dflux(self, lc_id, expmjd, variability_cache=None): 

1135 """ 

1136 Parameters 

1137 ---------- 

1138 lc_id is an integer referring to the ID of the light curve in 

1139 the parametrized light curve model (these need to be unique 

1140 across all parametrized light curve catalogs loaded) 

1141 

1142 expmjd is either a number or an array referring to the MJD of the 

1143 observations 

1144 

1145 Returns 

1146 ------- 

1147 baseline_flux is a number indicating the quiescent flux 

1148 of the light curve 

1149 

1150 delta_flux is a number or an array of the flux above or below 

1151 the quiescent flux at each of expmjd 

1152 """ 
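
The angle-subtraction identities used in the body of this method let the tau-dependent factors be precomputed once, so that only cos(omega*t) and sin(omega*t) depend on the observation times. A quick numerical check of that rearrangement with invented coefficients:

```python
import numpy as np

rng = np.random.RandomState(42)
t = rng.uniform(0.0, 100.0, size=5)          # observation times
aa = rng.uniform(-1.0, 1.0, size=3)          # Fourier coefficients
bb = rng.uniform(-1.0, 1.0, size=3)
omega = rng.uniform(0.1, 2.0, size=3)
tau = rng.uniform(0.0, 10.0, size=3)

# direct evaluation: sum_i aa_i*cos(w_i*(t - tau_i)) + bb_i*sin(w_i*(t - tau_i))
direct = np.array([np.sum(aa * np.cos(omega * (ti - tau)) +
                          bb * np.sin(omega * (ti - tau))) for ti in t])

# rearranged form: tau-dependent factors computed once, reused for every time
cos_wtau = np.cos(omega * tau)
sin_wtau = np.sin(omega * tau)
omega_t = np.outer(t, omega)
rearranged = (np.dot(np.cos(omega_t), aa * cos_wtau - bb * sin_wtau) +
              np.dot(np.sin(omega_t), aa * sin_wtau + bb * cos_wtau))
```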

1153 

1154 if variability_cache is None: 

1155 global _GLOBAL_VARIABILITY_CACHE 

1156 variability_cache = _GLOBAL_VARIABILITY_CACHE 

1157 

1158 try: 

1159 model = variability_cache['_PARAMETRIZED_LC_MODELS'][lc_id] 

1160 except KeyError: 

1161 raise KeyError('A KeyError was raised on the light curve id %d. ' % lc_id 

1162 + 'You may not have loaded your parametrized light ' 

1163 + 'curve models, yet. ' 

1164 + 'See the load_parametrized_light_curves() method in the ' 

1165 + 'ParametrizedLightCurveMixin class') 

1166 

1167 tau = model['tau'] 

1168 omega = model['omega'] 

1169 aa = model['a'] 

1170 bb = model['b'] 

1171 cc = model['c'] 

1172 

1173 quiescent_flux = model['median'] + cc.sum() 

1174 

1175 omega_t = np.outer(expmjd, omega) 

1176 omega_tau = omega*tau 

1177 

1178 # use trig identities to calculate 

1179 # \sum_i a_i*cos(omega_i*(expmjd-tau_i)) + b_i*sin(omega_i*(expmjd-tau_i)) 

1180 cos_omega_tau = np.cos(omega_tau) 

1181 sin_omega_tau = np.sin(omega_tau) 

1182 a_cos_omega_tau = aa*cos_omega_tau 

1183 a_sin_omega_tau = aa*sin_omega_tau 

1184 b_cos_omega_tau = bb*cos_omega_tau 

1185 b_sin_omega_tau = bb*sin_omega_tau 

1186 

1187 cos_omega_t = np.cos(omega_t) 

1188 sin_omega_t = np.sin(omega_t) 

1189 

1190 #delta_flux = np.dot(cos_omega_t, a_cos_omega_tau) 

1191 #delta_flux += np.dot(sin_omega_t, a_sin_omega_tau) 

1192 #delta_flux += np.dot(sin_omega_t, b_cos_omega_tau) 

1193 #delta_flux -= np.dot(cos_omega_t, b_sin_omega_tau) 

1194 

1195 delta_flux = np.dot(cos_omega_t, a_cos_omega_tau-b_sin_omega_tau) 

1196 delta_flux += np.dot(sin_omega_t, a_sin_omega_tau+b_cos_omega_tau) 

1197 

1198 if len(delta_flux) == 1: 

1199 delta_flux = float(delta_flux[0]) 

1200 return quiescent_flux, delta_flux 

1201 

1202 def singleBandParametrizedLightCurve(self, valid_dexes, params, expmjd, 

1203 variability_cache=None): 

1204 """ 

1205 Apply the parametrized light curve model, but just return one 

1206 d_magnitude array. This works because the parametrized 

1207 light curve model does not cause colors to vary. 

1208 """ 
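
The body of this method converts each flux excursion into a magnitude excursion with the standard relation d_mag = -2.5*log10(1 + delta_flux/quiescent_flux); pulled out as a standalone sketch:

```python
import numpy as np

def dmag_from_dflux(d_flux, q_flux):
    # a flux excess above the quiescent level makes the magnitude more negative
    return -2.5 * np.log10(1.0 + d_flux / q_flux)

# doubling the flux brightens the source by about 0.753 mag
dmag_doubled = dmag_from_dflux(1.0, 1.0)
```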

1209 

1210 t_start = time.time() 

1211 if not hasattr(self, '_total_t_param_lc'): 

1212 self._total_t_param_lc = 0.0 

1213 

1214 n_obj = self.num_variable_obj(params) 

1215 

1216 if variability_cache is None: 

1217 global _GLOBAL_VARIABILITY_CACHE 

1218 variability_cache = _GLOBAL_VARIABILITY_CACHE 

1219 

1220 # t_before_cast = time.time() 

1221 lc_int_arr = -1*np.ones(len(params['lc']), dtype=int) 

1222 for ii in range(len(params['lc'])): 

1223 if params['lc'][ii] is not None: 

1224 lc_int_arr[ii] = params['lc'][ii] 

1225 # print('t_cast %.2e' % (time.time()-t_before_cast)) 

1226 

1227 good = np.where(lc_int_arr>=0) 

1228 unq_lc_int = np.unique(params['lc'][good]) 

1229 

1230 # print('applyParamLC %d obj; %d unique' % (n_obj, len(unq_lc_int))) 

1231 

1232 if isinstance(expmjd, numbers.Number): 

1233 mjd_is_number = True 

1234 n_t = 1 

1235 d_mag_out = np.zeros(n_obj, dtype=float) 

1236 lc_time = expmjd - params['t0'].astype(float) 

1237 else: 

1238 mjd_is_number = False 

1239 n_t = len(expmjd) 

1240 d_mag_out = np.zeros((n_obj, n_t), dtype=float) 

1241 t0_float = params['t0'].astype(float) 

1242 lc_time = np.zeros(n_t*n_obj) 

1243 i_start = 0 

1244 for i_obj in range(n_obj): 

1245 lc_time[i_start:i_start+n_t] = expmjd - t0_float[i_obj] 

1246 i_start += n_t 

1247 

1248 # print('initialized arrays in %e' % (time.time()-t_start)) 

1249 # t_assign = 0.0 

1250 # t_flux = 0.0 

1251 t_use_this = 0.0 

1252 

1253 not_none = 0 

1254 

1255 for lc_int in unq_lc_int: 

1256 if lc_int is None: 

1257 continue 

1258 if '_PARAMETRIZED_LC_DMAG_CUTOFF' in variability_cache: 

1259 if variability_cache['_PARAMETRIZED_LC_DMAG_LOOKUP'][lc_int] < 0.75*variability_cache['_PARAMETRIZED_LC_DMAG_CUTOFF']: 

1260 continue 

1261 # t_before = time.time() 

1262 if mjd_is_number: 

1263 use_this_lc = np.where(lc_int_arr == lc_int)[0] 

1264 not_none += len(use_this_lc) 

1265 else: 

1266 use_this_lc_unq = np.where(lc_int_arr == lc_int)[0] 

1267 not_none += len(use_this_lc_unq) 

1268 template_arange = np.arange(0, n_t, dtype=int) 

1269 use_this_lc = np.array([template_arange + i_lc*n_t 

1270 for i_lc in use_this_lc_unq]).flatten() 

1271 

1272 # t_use_this += time.time()-t_before 

1273 try: 

1274 assert len(use_this_lc) % n_t == 0 

1275 except AssertionError: 

1276 raise RuntimeError("Something went wrong in applyParametrizedLightCurve\n" 

1277 "len(use_this_lc) %d ; n_t %d" % (len(use_this_lc), n_t)) 

1278 

1279 # t_before = time.time() 

1280 q_flux, d_flux = self._calc_dflux(lc_int, lc_time[use_this_lc], 

1281 variability_cache=variability_cache) 

1282 

1283 d_mag = -2.5*np.log10(1.0+d_flux/q_flux) 

1284 # t_flux += time.time()-t_before 

1285 

1286 if isinstance(d_mag, numbers.Number) and not isinstance(expmjd, numbers.Number): 

1287 # in case you only passed in one expmjd value, 

1288 # in which case self._calc_dflux will return a scalar 

1289 d_mag = np.array([d_mag]) 

1290 

1291 # t_before = time.time() 

1292 if mjd_is_number: 

1293 d_mag_out[use_this_lc] = d_mag 

1294 else: 

1295 for i_obj in range(len(use_this_lc)//n_t): 

1296 i_start = i_obj*n_t 

1297 obj_dex = use_this_lc_unq[i_obj] 

1298 d_mag_out[obj_dex] = d_mag[i_start:i_start+n_t] 

1299 

1300 # t_assign += time.time()-t_before 

1301 

1302 # print('applyParametrized took %.2e\nassignment %.2e\nflux %.2e\nuse %.2e\n' % 

1303 # (time.time()-t_start,t_assign,t_flux,t_use_this)) 

1304 

1305 # print('applying Parametrized LC to %d' % not_none) 

1306 # print('per capita %.2e\n' % ((time.time()-t_start)/float(not_none))) 

1307 

1308 # print('param time %.2e use this %.2e' % (time.time()-t_start, t_use_this)) 

1309 self._total_t_param_lc += time.time()-t_start 

1310 

1311 return d_mag_out 

1312 

1313 @register_method('kplr') # this 'kplr' tag derives from the fact that default light curves come from Kepler 

1314 def applyParametrizedLightCurve(self, valid_dexes, params, expmjd, 

1315 variability_cache=None): 

1316 

1317 if len(params) == 0: 

1318 return np.array([[], [], [], [], [], []]) 

1319 

1320 n_obj = self.num_variable_obj(params) 

1321 if isinstance(expmjd, numbers.Number): 

1322 mjd_is_number = True 

1323 d_mag_out = np.zeros((6, n_obj), dtype=float) 

1324 else: 

1325 mjd_is_number = False 

1326 n_t = len(expmjd) 

1327 d_mag_out = np.zeros((6, n_obj, n_t), dtype=float) 

1328 

1329 d_mag = self.singleBandParametrizedLightCurve(valid_dexes, params, expmjd, 

1330 variability_cache=variability_cache) 

1331 

1332 for i_filter in range(6): 

1333 d_mag_out[i_filter] = np.copy(d_mag) 

1334 

1335 return d_mag_out 

1336 

1337 

1338class ExtraGalacticVariabilityModels(Variability): 

1339 """ 

1340 A mixin providing the model for AGN variability. 

1341 """ 

1342 

1343 _agn_walk_start_date = 58580.0 

1344 _agn_threads = 1 

1345 

1346 @register_method('applyAgn') 

1347 def applyAgn(self, valid_dexes, params, expmjd, 

1348 variability_cache=None, redshift=None): 

1349 

1350 if redshift is None: 

1351 redshift_arr = self.column_by_name('redshift') 

1352 else: 

1353 redshift_arr = redshift 

1354 

1355 if len(params) == 0: 

1356 return np.array([[],[],[],[],[],[]]) 

1357 

1358 if isinstance(expmjd, numbers.Number): 

1359 dMags = np.zeros((6, self.num_variable_obj(params))) 

1360 max_mjd = expmjd 

1361 min_mjd = expmjd 

1362 mjd_is_number = True 

1363 else: 

1364 dMags = np.zeros((6, self.num_variable_obj(params), len(expmjd))) 

1365 max_mjd = max(expmjd) 

1366 min_mjd = min(expmjd) 

1367 mjd_is_number = False 

1368 

1369 seed_arr = params['seed'] 

1370 tau_arr = params['agn_tau'].astype(float) 

1371 sfu_arr = params['agn_sfu'].astype(float) 

1372 sfg_arr = params['agn_sfg'].astype(float) 

1373 sfr_arr = params['agn_sfr'].astype(float) 

1374 sfi_arr = params['agn_sfi'].astype(float) 

1375 sfz_arr = params['agn_sfz'].astype(float) 

1376 sfy_arr = params['agn_sfy'].astype(float) 

1377 

1378 duration_observer_frame = max_mjd - self._agn_walk_start_date 

1379 

1380 if duration_observer_frame < 0 or min_mjd < self._agn_walk_start_date: 

1381 raise RuntimeError("The minimum requested epoch precedes the start " + 

1382 "of the AGN random walk; variability cannot be applied. " + 

1383 "expmjd: %e should be > start_date: %e " % (min_mjd, self._agn_walk_start_date) + 

1384 "in applyAgn variability method") 

1385 

1386 if self._agn_threads == 1 or len(valid_dexes[0])==1: 

1387 for i_obj in valid_dexes[0]: 

1388 seed = seed_arr[i_obj] 

1389 tau = tau_arr[i_obj] 

1390 time_dilation = 1.0+redshift_arr[i_obj] 

1391 sf_u = sfu_arr[i_obj] 

1392 dMags[0][i_obj] = self._simulate_agn(expmjd, tau, time_dilation, sf_u, seed) 

1393 else: 

1394 p_list = [] 

1395 

1396 mgr = multiprocessing.Manager() 

1397 if mjd_is_number: 

1398 out_struct = mgr.Array('d', [0]*len(valid_dexes[0])) 

1399 else: 

1400 out_struct = mgr.dict() 

1401 

1402 ################# 

1403 # Try to subdivide the AGN into batches such that the number 

1404 # of time steps simulated by each thread is close to equal 

1405 tot_steps = 0 

1406 n_steps = [] 

1407 for tt, zz in zip(tau_arr[valid_dexes], redshift_arr[valid_dexes]): 

1408 dilation = 1.0+zz 

1409 dt = tt/100.0 

1410 dur = (duration_observer_frame/dilation) 

1411 nt = dur/dt 

1412 tot_steps += nt 

1413 n_steps.append(nt) 

1414 

1415 batch_target = tot_steps/self._agn_threads 

1416 i_start_arr = [0] 

1417 i_end_arr = [] 

1418 current_batch = n_steps[0] 

1419 for ii in range(1,len(n_steps),1): 

1420 current_batch += n_steps[ii] 

1421 if ii == len(n_steps)-1: 

1422 i_end_arr.append(len(n_steps)) 

1423 elif len(i_start_arr)<self._agn_threads: 

1424 if current_batch>=batch_target: 

1425 i_end_arr.append(ii) 

1426 i_start_arr.append(ii) 

1427 current_batch = n_steps[ii] 

1428 

1429 if len(i_start_arr) != len(i_end_arr): 

1430 raise RuntimeError('len i_start %d len i_end %d; dexes %d' % 

1431 (len(i_start_arr), 

1432 len(i_end_arr), 

1433 len(valid_dexes[0]))) 

1434 assert len(i_start_arr) <= self._agn_threads 

1435 ############ 

1436 

1437 # Actually simulate the AGN on the number of threads allotted 

1438 for i_start, i_end in zip(i_start_arr, i_end_arr): 

1439 dexes = valid_dexes[0][i_start:i_end] 

1440 if mjd_is_number: 

1441 out_dexes = range(i_start,i_end,1) 

1442 else: 

1443 out_dexes = dexes 

1444 p = multiprocessing.Process(target=self._threaded_simulate_agn, 

1445 args=(expmjd, tau_arr[dexes], 

1446 1.0+redshift_arr[dexes], 

1447 sfu_arr[dexes], 

1448 seed_arr[dexes], 

1449 out_dexes, 

1450 out_struct)) 

1451 p.start() 

1452 p_list.append(p) 

1453 for p in p_list: 

1454 p.join() 

1455 

1456 if mjd_is_number: 

1457 dMags[0][valid_dexes] = out_struct[:] 

1458 else: 

1459 for i_obj in out_struct.keys(): 

1460 dMags[0][i_obj] = out_struct[i_obj] 

1461 

1462 for i_filter, filter_name in enumerate(('g', 'r', 'i', 'z', 'y')): 

1463 for i_obj in valid_dexes[0]: 

1464 dMags[i_filter+1][i_obj] = dMags[0][i_obj]*params['agn_sf%s' % filter_name][i_obj]/params['agn_sfu'][i_obj] 

1465 

1466 return dMags 
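
The final loop above rescales the simulated u-band walk into the other bands by the ratio of structure functions, so all six bands share a single random walk. A minimal sketch with invented structure-function values:

```python
import numpy as np

# invented structure-function values; in applyAgn these come from
# params['agn_sfu'] ... params['agn_sfy']
sf = {'u': 0.5, 'g': 0.4, 'r': 0.3, 'i': 0.25, 'z': 0.2, 'y': 0.15}
dmag_u = np.array([0.1, -0.05, 0.2])    # simulated u-band excursions

dmags = {'u': dmag_u}
for band in ('g', 'r', 'i', 'z', 'y'):
    dmags[band] = dmag_u * sf[band] / sf['u']
```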

1467 

1468 def _threaded_simulate_agn(self, expmjd, tau_arr, 

1469 time_dilation_arr, sf_u_arr, 

1470 seed_arr, dex_arr, out_struct): 

1471 

1472 if isinstance(expmjd, numbers.Number): 

1473 mjd_is_number = True 

1474 else: 

1475 mjd_is_number = False 

1476 

1477 for tau, time_dilation, sf_u, seed, dex in \ 

1478 zip(tau_arr, time_dilation_arr, sf_u_arr, seed_arr, dex_arr): 

1479 out_struct[dex] = self._simulate_agn(expmjd, tau, time_dilation, 

1480 sf_u, seed) 

1481 

1482 def _simulate_agn(self, expmjd, tau, time_dilation, sf_u, seed): 

1483 """ 

1484 Simulate the u-band light curve for a single AGN 

1485 

1486 Parameters 

1487 ---------- 

1488 expmjd -- a number or numpy array of dates for the light curve 

1489 

1490 tau -- the characteristic timescale of the AGN in days 

1491 

1492 time_dilation -- (1+z) for the AGN 

1493 

1494 sf_u -- the u-band structure function of the AGN 

1495 

1496 seed -- the seed for the random number generator 

1497 

1498 Returns 

1499 ------- 

1500 a numpy array (or number) of delta_magnitude in the u-band at expmjd 

1501 """ 
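
The body of this method integrates a damped random walk (an Ornstein-Uhlenbeck process) on a rest-frame time grid with dt = tau/100, then interpolates onto the requested epochs. The core update, pulled out as a standalone sketch:

```python
import numpy as np

def damped_random_walk(n_steps, sf_u, seed):
    """One realization of the update used in _simulate_agn:
    dx_{n+1} = dx_n*(1 - dt/tau) + sf_u*N(0,1)*sqrt(dt/tau), with dt = tau/100."""
    rng = np.random.RandomState(seed)
    dt_over_tau = 0.01                       # dt = tau/100
    es = rng.normal(0.0, 1.0, n_steps) * np.sqrt(dt_over_tau)
    dx = np.zeros(n_steps + 1)
    for i in range(n_steps):
        dx[i + 1] = dx[i] * (1.0 - dt_over_tau) + sf_u * es[i]
    return dx

walk = damped_random_walk(20000, sf_u=0.2, seed=99)
```

The stationary scatter of this walk is roughly sf_u/sqrt(2), which is why the comment in the loop below notes the sqrt(2) difference from the structure-function convention.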

1502 

1503 if not isinstance(expmjd, numbers.Number): 

1504 d_m_out = np.zeros(len(expmjd)) 

1505 duration_observer_frame = max(expmjd) - self._agn_walk_start_date 

1506 else: 

1507 duration_observer_frame = expmjd - self._agn_walk_start_date 

1508 

1509 

1510 rng = np.random.RandomState(seed) 

1511 dt = tau/100. 

1512 duration_rest_frame = duration_observer_frame/time_dilation 

1513 nbins = int(math.ceil(duration_rest_frame/dt))+1 

1514 

1515 time_dexes = np.round((expmjd-self._agn_walk_start_date)/(time_dilation*dt)).astype(int) 

1516 time_dex_map = {} 

1517 ct_dex = 0 

1518 if not isinstance(time_dexes, numbers.Number): 

1519 for i_t_dex, t_dex in enumerate(time_dexes): 

1520 if t_dex in time_dex_map: 

1521 time_dex_map[t_dex].append(i_t_dex) 

1522 else: 

1523 time_dex_map[t_dex] = [i_t_dex] 

1524 time_dexes = set(time_dexes) 

1525 else: 

1526 time_dex_map[time_dexes] = [0] 

1527 time_dexes = set([time_dexes]) 

1528 

1529 dx2 = 0.0 

1530 x1 = 0.0 

1531 x2 = 0.0 

1532 

1533 dt_over_tau = dt/tau 

1534 es = rng.normal(0., 1., nbins)*math.sqrt(dt_over_tau) 

1535 for i_time in range(nbins): 

1536 #The second term differs from Zeljko's equation by sqrt(2.) 

1537 #because he assumes stdev = sf_u/sqrt(2) 

1538 dx1 = dx2 

1539 dx2 = -dx1*dt_over_tau + sf_u*es[i_time] + dx1 

1540 x1 = x2 

1541 x2 += dt 

1542 

1543 if i_time in time_dexes: 

1544 if isinstance(expmjd, numbers.Number): 

1545 dm_val = ((expmjd-self._agn_walk_start_date)*(dx1-dx2)/time_dilation+dx2*x1-dx1*x2)/(x1-x2) 

1546 d_m_out = dm_val 

1547 else: 

1548 for i_time_out in time_dex_map[i_time]: 

1549 local_end = (expmjd[i_time_out]-self._agn_walk_start_date)/time_dilation 

1550 dm_val = (local_end*(dx1-dx2)+dx2*x1-dx1*x2)/(x1-x2) 

1551 d_m_out[i_time_out] = dm_val 

1552 

1553 return d_m_out 

1554 

1555 

1556class _VariabilityPointSources(object): 

1557 

1558 @compound('delta_lsst_u', 'delta_lsst_g', 'delta_lsst_r', 

1559 'delta_lsst_i', 'delta_lsst_z', 'delta_lsst_y') 

1560 def get_stellar_variability(self): 

1561 """ 

1562 Getter for the change in magnitudes due to stellar 

1563 variability. The PhotometryStars mixin is clever enough 

1564 to automatically add this to the baseline magnitude. 

1565 """ 

1566 

1567 varParams = self.column_by_name('varParamStr') 

1568 dmag = self.applyVariability(varParams) 

1569 if dmag.shape != (6, len(varParams)): 

1570 raise RuntimeError("applyVariability is returning " 

1571 "an array of shape %s\n" % dmag.shape 

1572 + "should be (6, %d)" % len(varParams)) 

1573 return dmag 

1574 

1575 

1576class VariabilityStars(_VariabilityPointSources, StellarVariabilityModels, 

1577 MLTflaringMixin, ParametrizedLightCurveMixin): 

1578 """ 

1579 This is a mixin which wraps the methods from the class 

1580 StellarVariabilityModels into getters for InstanceCatalogs 

1581 (specifically, InstanceCatalogs of stars). Getters in 

1582 this mixin should define columns named like 

1583 

1584 delta_columnName 

1585 

1586 where columnName is the name of the baseline (non-varying) magnitude 

1587 column to which delta_columnName will be added. The getters in the 

1588 photometry mixins will know to find these columns and add them to 

1589 columnName, provided that the columns here follow this naming convention. 

1590 

1591 Thus: merely including VariabilityStars in the inheritance tree of 

1592 an InstanceCatalog daughter class will activate variability for any column 

1593 for which delta_columnName is defined. 

1594 """ 

1595 pass 

1596 

1597 

1598class VariabilityAGN(_VariabilityPointSources, ExtraGalacticVariabilityModels): 

1599 """ 

1600 This is a mixin which wraps the methods from the class 

1601 ExtraGalacticVariabilityModels into getters for InstanceCatalogs 

1602 of AGN. Getters in this mixin should define columns named like 

1603 

1604 delta_columnName 

1605 

1606 where columnName is the name of the baseline (non-varying) magnitude 

1607 column to which delta_columnName will be added. The getters in the 

1608 photometry mixins will know to find these columns and add them to 

1609 columnName, provided that the columns here follow this naming convention. 

1610 

1611 Thus: merely including VariabilityAGN in the inheritance tree of 

1612 an InstanceCatalog daughter class will activate variability for any column 

1613 for which delta_columnName is defined. 

1614 """ 

1615 pass 

1616 

1617 

1618class VariabilityGalaxies(ExtraGalacticVariabilityModels): 

1619 """ 

1620 This is a mixin which wraps the methods from the class 

1621 ExtraGalacticVariabilityModels into getters for InstanceCatalogs 

1622 (specifically, InstanceCatalogs of galaxies). Getters in this 

1623 mixin should define columns named like 

1624 

1625 delta_columnName 

1626 

1627 where columnName is the name of the baseline (non-varying) magnitude 

1628 column to which delta_columnName will be added. The getters in the 

1629 photometry mixins will know to find these columns and add them to 

1630 columnName, provided that the columns here follow this naming convention. 

1631 

1632 Thus: merely including VariabilityGalaxies in the inheritance tree of 

1633 an InstanceCatalog daughter class will activate variability for any column 

1634 for which delta_columnName is defined. 

1635 """ 

1636 

1637 @compound('delta_uAgn', 'delta_gAgn', 'delta_rAgn', 

1638 'delta_iAgn', 'delta_zAgn', 'delta_yAgn') 

1639 def get_galaxy_variability_total(self): 

1640 

1641 """ 

1642 Getter for the change in magnitude due to AGN 

1643 variability. The PhotometryGalaxies mixin is 

1644 clever enough to automatically add this to 

1645 the baseline magnitude. 

1646 """ 

1647 varParams = self.column_by_name("varParamStr") 

1648 dmag = self.applyVariability(varParams) 

1649 if dmag.shape != (6, len(varParams)): 

1650 raise RuntimeError("applyVariability is returning " 

1651 "an array of shape %s\n" % dmag.shape 

1652 + "should be (6, %d)" % len(varParams)) 

1653 return dmag