lsst.sims.maf.metrics package

Submodules

lsst.sims.maf.metrics.baseMetric module

class lsst.sims.maf.metrics.baseMetric.MetricRegistry(name, bases, dict)[source]

Bases: type

Meta class for metrics, to build a registry of metric classes.

getClass(metricname)[source]
help(doc=False)[source]
help_metric(metricname)[source]
class lsst.sims.maf.metrics.baseMetric.BaseMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: object

Base class for the metrics. Sets up some basic functionality for the MAF framework: after __init__ every metric will record the columns (and stackers) it requires into the column registry, and the metricName, metricDtype, and units for the metric will be set.

Parameters:
  • col (str or list) – Names of the data columns that the metric will use. The columns required for each metric are tracked in the ColRegistry, and used to retrieve data from the opsim database. Can be a single string or a list.
  • metricName (str) – Name to use for the metric (optional - if not set, will be derived).
  • maps (list of lsst.sims.maf.maps objects) – The maps that the metric will need (passed from the slicer).
  • units (str) – The units for the value returned by the metric (optional - if not set, will be derived from the ColInfo).
  • metricDtype (str) – The type of value returned by the metric - ‘int’, ‘float’, ‘object’. If not set, will be derived by introspection.
  • badval (float) – The value indicating “bad” values calculated by the metric.
colInfo
colRegistry
registry
run(dataSlice, slicePoint=None)[source]

Calculate metric values.

Parameters:
  • dataSlice (numpy.ndarray) – Values passed to the metric by the slicer, which the metric will use to calculate metric values at each slicePoint.
  • slicePoint (Dict) – Dictionary of slicePoint metadata passed to each metric. E.g. the ra/dec of the healpix pixel or opsim fieldId.
Returns:

The metric value at each slicePoint.

Return type:

int, float or object
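
A minimal sketch of how a user-defined metric plugs into this contract (not part of the library; the ‘airmass’ column name assumes the standard opsim schema and MaxAirmassMetric is a hypothetical example):

import numpy as np
from lsst.sims.maf.metrics import BaseMetric

class MaxAirmassMetric(BaseMetric):
    """Hypothetical example metric: the maximum airmass of the visits in a dataSlice."""
    def __init__(self, airmassCol='airmass', metricName='MaxAirmass', **kwargs):
        self.airmassCol = airmassCol
        # Passing 'col' registers the required opsim column in the ColRegistry.
        super(MaxAirmassMetric, self).__init__(col=airmassCol, metricName=metricName,
                                               units='airmass', metricDtype='float', **kwargs)

    def run(self, dataSlice, slicePoint=None):
        # dataSlice is the numpy structured array of visits at this slicePoint.
        return np.max(dataSlice[self.airmassCol])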

lsst.sims.maf.metrics.cadenceMetrics module

class lsst.sims.maf.metrics.cadenceMetrics.SupernovaMetric(metricName=’SupernovaMetric’, mjdCol=’observationStartMJD’, filterCol=’filter’, m5Col=’fiveSigmaDepth’, units=’‘, redshift=0.0, Tmin=-20.0, Tmax=60.0, Nbetween=7, Nfilt=2, Tless=-5.0, Nless=1, Tmore=30.0, Nmore=1, peakGap=15.0, snrCut=10.0, singleDepthLimit=23.0, resolution=5.0, uniqueBlocks=False, badval=-666, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Measure how many time series meet a given time and filter distribution requirement.

Parameters:
  • redshift (float, optional) – Redshift of the SN. Used to scale observing dates to SN restframe. Default 0.
  • Tmin (float, optional) – The minimum day to consider the SN. Default -20.
  • Tmax (float, optional) – The maximum day to consider. Default 60.
  • Nbetween (int, optional) – The number of observations to demand between Tmin and Tmax. Default 7.
  • Nfilt (int, optional) – Number of unique filters that must observe the SN above the snrCut. Default 2.
  • Tless (float, optional) – Minimum time to consider ‘near peak’. Default -5.
  • Nless (int, optional) – Number of observations to demand before Tless. Default 1.
  • Tmore (float, optional) – Max time to consider ‘near peak’. Default 30.
  • Nmore (int, optional) – Number of observations to demand after Tmore. Default 1.
  • peakGap (float, optional) – Maximum gap allowed between observations in the ‘near peak’ time. Default 15.
  • snrCut (float, optional) – Require snr above this limit when counting Nfilt. Default 10. NOTE THIS IS NOT YET USED/IMPLEMENTED IN THE METRIC.
  • singleDepthLimit (float, optional) – Require observations in Nfilt different filters to be this deep near the peak. This is a rough approximation for the Science Book requirements for a SNR cut. Ideally, one would import a time-variable SN SED, redshift it, and make a filter-keyed dictionary of interpolation objects so the magnitude of the SN could be calculated at each observation and then use the m5col to compute a SNR. Default 23.
  • resolution (float, optional) – Time step (days) to consider when calculating observing windows. Default 5.
  • uniqueBlocks (bool) – Should the code count the number of unique sequences that meet the requirements (True), or should all sequences that meet the conditions be counted (False).

The filter centers are shifted to the SN restframe and only observations with filters between 300 < lam_rest < 900 nm are included. In the science book, the metric demands Nfilt observations above a SNR cut; here, we demand Nfilt observations near the peak with a given singleDepthLimit.
reduceMedianMaxGap(data)[source]

The median maximum gap near the peak of the light curve

reduceMedianNobs(data)[source]

Median number of observations covering the entire light curve

reduceNsequences(data)[source]

The number of sequences that met the requirements

run(dataSlice, slicePoint=None)[source]

Calculate parameters regarding the detection of supernovae.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

Dict containing [‘result’, ‘maxGap’, ‘Nobs’]: ‘result’ is the number of SN sequences detected ‘maxGap’ is the maximum gap within each sequence ‘Nobs’ is the number of observations in each sequence

Return type:

dict

class lsst.sims.maf.metrics.cadenceMetrics.TemplateExistsMetric(seeingCol=’seeingFwhmGeom’, observationStartMJDCol=’observationStartMJD’, metricName=’TemplateExistsMetric’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the fraction of images with a previous template image of desired quality.

run(dataSlice, slicePoint=None)[source]

Calculate the fraction of images with a previous template image of desired quality.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The fraction of images with a ‘good’ previous template image.

Return type:

float

class lsst.sims.maf.metrics.cadenceMetrics.UniformityMetric(observationStartMJDCol=’observationStartMJD’, units=’‘, surveyLength=10.0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate how uniformly the observations are spaced in time. Returns a value between 0 and 1. A value of zero means the observations are perfectly uniform.

Parameters:surveyLength (float, optional) – The overall duration of the survey. Default 10.
run(dataSlice, slicePoint=None)[source]

Calculate the survey uniformity.

This is based on how a KS-test works: look at the cumulative distribution of observation dates, and compare to a perfectly uniform cumulative distribution. Perfectly uniform observations = 0, perfectly non-uniform = 1.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

Uniformity of ‘observationStartMJDCol’.

Return type:

float
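
A standalone sketch of this KS-style calculation, using a hypothetical helper rather than the library code:

import numpy as np

def uniformity(times_mjd, survey_length_years=10.0):
    # Normalize the observation dates to [0, 1] over the survey length.
    dates = np.sort(times_mjd - times_mjd.min()) / (survey_length_years * 365.25)
    # Empirical cumulative distribution of the observation dates.
    n_cum = np.arange(1, dates.size + 1) / float(dates.size)
    # Maximum deviation from a perfectly uniform cumulative distribution.
    return np.max(np.abs(n_cum - dates))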

class lsst.sims.maf.metrics.cadenceMetrics.RapidRevisitMetric(timeCol=’observationStartMJD’, minNvisits=100, dTmin=0.0004629629629629629, dTmax=0.020833333333333332, metricName=’RapidRevisit’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate uniformity of time between consecutive visits on short timescales (for RAV1).

Parameters:
  • timeCol (str, optional) – The column containing the ‘time’ value. Default observationStartMJD.
  • minNvisits (int, optional) – The minimum number of visits required within the time interval (dTmin to dTmax). Default 100.
  • dTmin (float, optional) – The minimum dTime to consider (in days). Default 40 seconds.
  • dTmax (float, optional) – The maximum dTime to consider (in days). Default 30 minutes.
run(dataSlice, slicePoint=None)[source]

Calculate the uniformity of visits within dTmin to dTmax.

Uses the same ‘uniformity’ calculation as the UniformityMetric, based on the KS-test. A value of 0 is perfectly uniform; a value of 1 is purely non-uniform.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The uniformity measurement of the visits within time interval dTmin to dTmax.

Return type:

float

class lsst.sims.maf.metrics.cadenceMetrics.NRevisitsMetric(timeCol=’observationStartMJD’, dT=30.0, normed=False, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the number of (consecutive) visits with time differences less than dT.

Parameters:
  • dT (float, optional) – The time interval to consider (in minutes). Default 30.
  • normed (bool, optional) – Flag to indicate whether to return the total number of consecutive visits with time differences less than dT (False), or the fraction of overall visits (True).
run(dataSlice, slicePoint=None)[source]

Count the number of consecutive visits occurring within time intervals dT.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

Either the total number of consecutive visits within dT or the fraction compared to overall visits.

Return type:

float
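
A standalone sketch of the counting described above (hypothetical helper, with dT given in minutes as in the parameter list):

import numpy as np

def n_revisits(times_mjd, dT_minutes=30.0, normed=False):
    # Time differences between consecutive visits, in days.
    dt = np.diff(np.sort(times_mjd))
    n_fast = np.sum(dt < dT_minutes / 60.0 / 24.0)
    if normed:
        # Fraction relative to the total number of visits.
        return n_fast / float(times_mjd.size)
    return n_fast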

class lsst.sims.maf.metrics.cadenceMetrics.IntraNightGapsMetric(timeCol=’observationStartMJD’, nightCol=’night’, reduceFunc=<function median>, metricName=’Median Intra-Night Gap’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the gap between consecutive observations within a night, in hours.

Parameters:reduceFunc (function, optional) – Function that can operate on array-like structures. Typically numpy function. Default np.median.
run(dataSlice, slicePoint=None)[source]

Calculate the (reduceFunc) of the gap between consecutive observations within a night.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The (reduceFunc) value of the gap, in hours.

Return type:

float
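
A standalone sketch of the per-night gap calculation (hypothetical helper; the real metric reads the time and night columns from the dataSlice):

import numpy as np

def intra_night_gap(times_mjd, nights, reduce_func=np.median):
    order = np.argsort(times_mjd)
    times, nights = times_mjd[order], nights[order]
    gaps = []
    for night in np.unique(nights):
        t = times[nights == night]
        if t.size > 1:
            gaps.append(np.diff(t) * 24.0)  # days -> hours
    if len(gaps) == 0:
        return None  # the real metric would return its badval here
    return reduce_func(np.concatenate(gaps))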

class lsst.sims.maf.metrics.cadenceMetrics.InterNightGapsMetric(timeCol=’observationStartMJD’, nightCol=’night’, reduceFunc=<function median>, metricName=’Median Inter-Night Gap’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the gap between consecutive observations between nights, in days.

Parameters:reduceFunc (function, optional) – Function that can operate on array-like structures. Typically numpy function. Default np.median.
run(dataSlice, slicePoint=None)[source]

Calculate the (reduceFunc) of the gap between consecutive nights of observations.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The (reduceFunc) of the gap between consecutive nights of observations, in days.

Return type:

float

class lsst.sims.maf.metrics.cadenceMetrics.AveGapMetric(timeCol=’observationStartMJD’, nightCol=’night’, reduceFunc=<function median>, metricName=’AveGap’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the gap between consecutive observations, in hours.

Parameters:reduceFunc (function, optional) – Function that can operate on array-like structures. Typically numpy function. Default np.median.
run(dataSlice, slicePoint=None)[source]

Calculate the (reduceFunc) of the gap between consecutive observations.

Unlike the inter-night and intra-night gaps, this simply considers all of the times between consecutive observations (not the time between nights or the time within a night).

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The (reduceFunc) of the time between consecutive observations, in hours.

Return type:

float

lsst.sims.maf.metrics.calibrationMetrics module

class lsst.sims.maf.metrics.calibrationMetrics.ParallaxMetric(metricName=’parallax’, m5Col=’fiveSigmaDepth’, units=’mas’, filterCol=’filter’, seeingCol=’seeingFwhmGeom’, rmag=20.0, SedTemplate=’flat’, badval=-666, atm_err=0.01, normalize=False, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the uncertainty in a parallax measurement given a series of observations.

run(dataslice, slicePoint=None)[source]
class lsst.sims.maf.metrics.calibrationMetrics.ProperMotionMetric(metricName=’properMotion’, m5Col=’fiveSigmaDepth’, mjdCol=’observationStartMJD’, units=’mas/yr’, filterCol=’filter’, seeingCol=’seeingFwhmGeom’, rmag=20.0, SedTemplate=’flat’, badval=-666, atm_err=0.01, normalize=False, baseline=10.0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the uncertainty in the returned proper motion. Assuming Gaussian errors.

run(dataslice, slicePoint=None)[source]
class lsst.sims.maf.metrics.calibrationMetrics.RadiusObsMetric(metricName=’radiusObs’, raCol=’fieldRA’, decCol=’fieldDec’, units=’radians’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Find the radius of each observation in the focal plane. Returns values in degrees.

reduceFullRange(distances)[source]
reduceMean(distances)[source]
reduceRMS(distances)[source]
run(dataSlice, slicePoint)[source]
class lsst.sims.maf.metrics.calibrationMetrics.ParallaxCoverageMetric(metricName=’ParallaxCoverageMetric’, m5Col=’fiveSigmaDepth’, mjdCol=’observationStartMJD’, filterCol=’filter’, seeingCol=’seeingFwhmGeom’, rmag=20.0, SedTemplate=’flat’, badval=-666, atm_err=0.01, thetaRange=0.0, snrLimit=5, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Check how well the parallax factor is distributed. Subtracts the weighted mean position of the parallax offsets, then computes the weighted mean radius of the points. If points are well distributed, the mean radius will be near 1. If phase coverage is bad, radius will be close to zero.

For points on the Ecliptic, uniform sampling should result in a metric value of ~0.5. At the poles, uniform sampling would result in a metric value of ~1. Conceptually, it is helpful to remember that the parallax motion of a star at the pole is a (nearly circular) ellipse while the motion of a star on the ecliptic is a straight line. Thus, any pair of observations separated by 6 months will give the full parallax range for a star on the pole but only observations on very specific dates will give the full range for a star on the ecliptic.

Optionally also demand that there are observations above the snrLimit kwarg spanning thetaRange radians.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.calibrationMetrics.ParallaxDcrDegenMetric(metricName=’ParallaxDcrDegenMetric’, seeingCol=’seeingFwhmGeom’, m5Col=’fiveSigmaDepth’, atm_err=0.01, rmag=20.0, SedTemplate=’flat’, filterCol=’filter’, tol=0.05, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Use the full parallax and DCR displacement vectors to find if they are degenerate.

Parameters:
  • metricName (str) – Default ‘ParallaxDcrDegenMetric’.
  • seeingCol (str) – Default ‘seeingFwhmGeom’
  • m5Col (str) – Default ‘fiveSigmaDepth’
  • filterCol (str) – Default ‘filter’
  • atm_err (float) – Minimum error in photometry centroids introduced by the atmosphere (arcseconds). Default 0.01.
  • rmag (float) – r-band magnitude of the fiducial star that is being used (mag).
  • SedTemplate (str) – The SED template to use for fiducial star colors, passed to lsst.sims.utils.stellarMags. Default ‘flat’
  • tol (float) – Tolerance for how well curve_fit needs to work before believing the covariance result. Default 0.05.
Returns:

metricValue – returns the correlation coefficient between the best-fit parallax amplitude and DCR amplitude. The RA and Dec offsets are fit simultaneously. Values close to zero are good, values close to +/- 1 are bad. Experience with fitting Monte Carlo simulations suggests the astrometric fits start becoming poor around a correlation of 0.7.

Return type:

float

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.chipVendorMetric module

class lsst.sims.maf.metrics.chipVendorMetric.ChipVendorMetric(cols=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

See what happens if we have chips from different vendors

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.crowdingMetric module

class lsst.sims.maf.metrics.crowdingMetric.CrowdingMetric(crowding_error=0.1, seeingCol=’finSeeing’, fiveSigCol=’fiveSigmaDepth’, units=’mag’, maps=[‘StellarDensityMap’], metricName=’Crowding To Precision’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate whether the coadded depth in r has exceeded the confusion limit

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.crowdingMetric.CrowdingMagUncertMetric(rmag=20.0, seeingCol=’finSeeing’, fiveSigCol=’fiveSigmaDepth’, maps=[‘StellarDensityMap’], units=’mag’, metricName=’CrowdingMagUncert’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.crowdingMetric.CrowdingMetric

Given a stellar magnitude, calculate the mean uncertainty on the magnitude from crowding.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.exgalM5 module

class lsst.sims.maf.metrics.exgalM5.ExgalM5(m5Col=’fiveSigmaDepth’, units=’mag’, maps=[‘DustMap’], lsstFilter=’r’, wavelen_min=None, wavelen_max=None, wavelen_step=1.0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate co-added five-sigma limiting depth after dust extinction.

Uses photUtils

run(dataSlice, slicePoint=None)[source]

Compute the co-added m5 depth and then apply extinction to that magnitude.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The dust-attenuated co-added m5 depth.

Return type:

float

lsst.sims.maf.metrics.fftMetric module

class lsst.sims.maf.metrics.fftMetric.FftMetric(timesCol=’expmjd’, metricName=’Fft’, nCoeffs=100, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate a truncated FFT of the exposure times.

reducePeak(fftCoeff)[source]
run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.hourglassMetric module

class lsst.sims.maf.metrics.hourglassMetric.HourglassMetric(telescope=’LSST’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Plot the filters used as a function of time. Must be used with the Hourglass Slicer.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.longGapAGNMetric module

class lsst.sims.maf.metrics.longGapAGNMetric.LongGapAGNMetric(metricName=’longGapAGNMetric’, mjdcol=’observationStartMJD’, units=’days’, xgaps=10, badval=-666, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the max delta-t and the average of the top-10 longest gaps.

reduceAverageLongestXGaps(metricval)[source]
reduceMaxGap(metricval)[source]
run(dataslice, slicePoint=None)[source]

lsst.sims.maf.metrics.moMetrics module

class lsst.sims.maf.metrics.moMetrics.BaseMoMetric(cols=None, metricName=None, units=’#’, badval=0, comment=None, childMetrics=None, appMagCol=’appMag’, appMagVCol=’appMagV’, m5Col=’fiveSigmaDepth’, nightCol=’night’, expMJDCol=’expMJD’, snrCol=’SNR’, visCol=’vis’, raCol=’ra’, decCol=’dec’, seeingCol=’FWHMgeom’, expTimeCol=’visitExpTime’, filterCol=’filter’)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Base class for the moving object metrics.

run(ssoObs, orb, Hval)[source]

Calculate the metric value.

Parameters:
  • ssoObs (np.ndarray) – The input data to the metric (same as the parent metric).
  • orb (np.ndarray) – The information about the orbit for which the metric is being calculated.
  • Hval (float) – The H value for which the metric is being calculated.
Returns:

Return type:

float or np.ndarray or dict

class lsst.sims.maf.metrics.moMetrics.NObsMetric(snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the total number of observations where an object was ‘visible’.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.NObsNoSinglesMetric(snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of observations for an object, but don’t include any observations where it was a single observation on a night.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.NNightsMetric(snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of distinct nights an object is observed.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.ObsArcMetric(snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Calculate the difference between the first and last observation of an object.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.DiscoveryMetric(nObsPerNight=2, tMin=0.003472222222222222, tMax=0.0625, nNightsPerWindow=3, tWindow=15, snrLimit=None, badval=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Identify the discovery opportunities for an object.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.Discovery_N_ChancesMetric(parentDiscoveryMetric, nightStart=None, nightEnd=None, badval=0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Child metric to be used with DiscoveryMetric. Calculates total number of discovery opportunities in a window between nightStart / nightEnd.

run(ssoObs, orb, Hval, metricValues)[source]

Return the number of different discovery chances we had for each object/H combination.

class lsst.sims.maf.metrics.moMetrics.Discovery_N_ObsMetric(parentDiscoveryMetric, i=0, badval=0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Calculates the number of observations in the i-th discovery track.

run(ssoObs, orb, Hval, metricValues)[source]
class lsst.sims.maf.metrics.moMetrics.Discovery_TimeMetric(parentDiscoveryMetric, i=0, tStart=None, badval=-999, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Returns the time of the i-th discovery opportunity.

run(ssoObs, orb, Hval, metricValues)[source]
class lsst.sims.maf.metrics.moMetrics.Discovery_RADecMetric(parentDiscoveryMetric, i=0, badval=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Returns the RA/Dec of the i-th discovery opportunity.

run(ssoObs, orb, Hval, metricValues)[source]
class lsst.sims.maf.metrics.moMetrics.Discovery_EcLonLatMetric(parentDiscoveryMetric, i=0, badval=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Returns the ecliptic lon/lat and solar elongation (in degrees) of the i-th discovery opportunity.

run(ssoObs, orb, Hval, metricValues)[source]
class lsst.sims.maf.metrics.moMetrics.Discovery_VelocityMetric(parentDiscoveryMetric, i=0, badval=-999, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseChildMetric

Returns the sky velocity of the i-th discovery opportunity.

run(ssoObs, orb, Hval, metricValues)[source]
class lsst.sims.maf.metrics.moMetrics.ActivityOverTimeMetric(window, snrLimit=5, surveyYears=10.0, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the time periods where we would have a chance to detect activity on a moving object. Splits observations into time periods set by ‘window’, then looks for observations within each window, and reports what fraction of the total windows receive ‘nObs’ visits.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.ActivityOverPeriodMetric(binsize, snrLimit=5, qCol=’q’, eCol=’e’, tPeriCol=’tPeri’, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the fraction of the orbit (when split into nBins) that receive observations, in order to have a chance to detect activity.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.DiscoveryChancesMetric(nObsPerNight=2, tNight=0.0625, nNightsPerWindow=3, tWindow=15, snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of discovery opportunities for an object.

Superseded by the DiscoveryMetric + NChances child metric.

run(ssoObs, orb, Hval)[source]

SsoObs = Dataframe, orb=Dataframe, Hval=single number.

class lsst.sims.maf.metrics.moMetrics.MagicDiscoveryMetric(nObs=6, tWindow=60, snrLimit=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of discovery opportunities with very good software.

run(ssoObs, orb, Hval)[source]

SsoObs = Dataframe, orb=Dataframe, Hval=single number.

class lsst.sims.maf.metrics.moMetrics.HighVelocityMetric(psfFactor=2.0, snrLimit=None, velocityCol=’velocity’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of times an asteroid is observed with a velocity high enough to make it appear trailed by a factor of (psfFactor)*PSF - i.e. velocity >= psfFactor * seeing / visitExpTime. Simply counts the total number of observations with high velocity.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.HighVelocityNightsMetric(psfFactor=2.0, nObsPerNight=2, snrLimit=None, velocityCol=’velocity’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Count the number of times an asteroid is observed with a velocity high enough to make it appear trailed by a factor of (psfFactor)*PSF - i.e. velocity >= psfFactor * seeing / visitExpTime, where we require nObsPerNight observations within a given night. Counts the total number of nights with enough high-velocity observations.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.LightcurveInversionMetric(nObs=100, snrLimit=20.0, nDays=1825, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Identify objects which would have observations suitable to do lightcurve inversion.

This is roughly defined as objects which have more than nObs observations with SNR greater than snrLimit, within nDays.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.ColorDeterminationMetric(nPairs=1, snrLimit=10, nHours=2.0, bOne=’g’, bTwo=’r’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Identify objects which could have observations suitable to determine colors.

This is roughly defined as objects which have more than nPairs pairs of observations with SNR greater than snrLimit, in bands bandOne and bandTwo, within nHours.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.PeakVMagMetric(**kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Pull out the peak V magnitude of all observations of the object.

run(ssoObs, orb, Hval)[source]
class lsst.sims.maf.metrics.moMetrics.KnownObjectsMetric(elongThresh=60.0, vMagThresh1=20.0, vMagThresh2=22.0, tSwitch=57023, elongCol=’Elongation’, expMJDCol=’MJD(UTC)’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Identify objects which could be classified as ‘previously known’ based on their peak V magnitude, returning the time at which each first reached that peak V magnitude.

Parameters:
  • elongThresh (float, opt) – The cutoff in solar elongation to consider an object ‘visible’. Default 60 deg.
  • vMagThresh1 (float, opt) – The magnitude threshold for previously known objects. Default 20.0. This is calibrated using NEOs discovered in the last 15 years and assuming a current 25% completeness.
  • vMagThresh2 (float, opt) – The magnitude threshold for previously known objects. Default 22.0. This is based on assuming PS and other surveys will be efficient down to V=22.
  • tSwitch (float, opt) – The time to switch from evaluating against vMagThresh1 to vMagThresh2. Default 57023 (start of 2015).
run(ssoObs, orb, Hval)[source]

lsst.sims.maf.metrics.moSummaryMetrics module

lsst.sims.maf.metrics.moSummaryMetrics.integrateOverH(Mvalues, Hvalues, Hindex=0.3)[source]

Function to calculate a metric value integrated over an Hrange, assuming a power-law distribution.

Parameters:
  • Mvalues (numpy.ndarray) – The metric values at each H value.
  • Hvalues (numpy.ndarray) – The H values corresponding to each Mvalue (must be the same length).
  • Hindex (float, opt) – The power-law index expected for the H value distribution. Default is 0.3 (dN/dH = 10^(Hindex * H) ).
Returns:

The integrated or cumulative metric values.

Return type:

numpy.ndarray
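
A standalone sketch of this integration (hypothetical helper, assuming a weighted cumulative average under the Hindex power law described above):

import numpy as np

def integrate_over_h(Mvalues, Hvalues, Hindex=0.3):
    # Relative number of objects expected at each H, for dN/dH ~ 10^(Hindex * H).
    dndh = np.power(10.0, Hindex * (Hvalues - Hvalues.min()))
    # Weighted running average of the metric values, cumulative over H.
    return np.cumsum(Mvalues * dndh) / np.cumsum(dndh)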

class lsst.sims.maf.metrics.moSummaryMetrics.ValueAtHMetric(Hmark=22, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Return the metric value at a given H value.

Requires the metric values to be one-dimensional (typically, completeness values).

Parameters:Hmark (float, opt) – The H value at which to look up the metric value. Default = 22.
run(metricVals, Hvals)[source]
class lsst.sims.maf.metrics.moSummaryMetrics.MeanValueAtHMetric(Hmark=22, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Return the mean value of a metric at a given H.

Allows the metric values to be multi-dimensional (i.e. use a cloned H distribution).

Parameters:Hmark (float, opt) – The H value at which to look up the metric value. Default = 22.
run(metricVals, Hvals)[source]
class lsst.sims.maf.metrics.moSummaryMetrics.MoCompletenessMetric(requiredChances=1, nbins=20, minHrange=1.0, cumulative=True, Hindex=0.3, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Calculate the completeness (relative to the entire population), given the counts of discovery chances.

Input values of the number of discovery chances can come from the DiscoveryChances metric or the Discovery_N_Chances (child) metric.

Parameters:
  • requiredChances (int, opt) – Require at least this many discovery opportunities before counting the object as ‘found’. Default = 1.
  • nbins (int, opt) – If the H values for the metric are not a cloned distribution, then split up H into this many bins. Default 20.
  • minHrange (float, opt) – If the H values for the metric are not a cloned distribution, then split up H into at least this range (otherwise just use the min/max of the H values). Default 1.0
  • cumulative (bool, opt) – If True, calculate the cumulative completeness (completeness <= H). If False, calculate the differential completeness (completeness @ H). Default True.
  • Hindex (float, opt) – Use Hindex as the power law to integrate over H, if cumulative is True. Default 0.3.
run(discoveryChances, Hvals)[source]
class lsst.sims.maf.metrics.moSummaryMetrics.MoCompletenessAtTimeMetric(times, Hval=22, cumulative=True, Hindex=0.3, **kwargs)[source]

Bases: lsst.sims.maf.metrics.moMetrics.BaseMoMetric

Calculate the completeness (relative to the entire population) <= a given H as a function of time, given the times of each discovery.

Input values of the discovery times can come from the Discovery_Time (child) metric or the KnownObjects metric.

Parameters:
  • times (numpy.ndarray like) – The bins to distribute the discovery times into. Same units as the discovery time (typically MJD).
  • Hval (float, opt) – The value of H to count completeness at (or cumulative completeness to). Default H=22.
  • cumulative (bool, opt) – If True, calculate the cumulative completeness (completeness <= H). If False, calculate the differential completeness (completeness @ H). Default True.
  • Hindex (float, opt) – Use Hindex as the power law to integrate over H, if cumulative is True. Default 0.3.
run(discoveryTimes, Hvals)[source]

lsst.sims.maf.metrics.nightPointingMetric module

class lsst.sims.maf.metrics.nightPointingMetric.NightPointingMetric(altCol=’altitude’, azCol=’azimuth’, filterCol=’filter’, mjdCol=’expMJD’, metricName=’NightPointing’, telescope=’LSST’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Gather relevant information for a night to plot.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.optimalM5Metric module

class lsst.sims.maf.metrics.optimalM5Metric.OptimalM5Metric(m5Col=’fiveSigmaDepth’, optM5Col=’m5Optimal’, filterCol=’filter’, magDiff=False, normalize=False, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compare the co-added depth of the survey to one where all the observations were taken on the meridian.

Parameters:
  • m5Col (str ('fiveSigmaDepth')) – Column name that contains the five-sigma limiting depth of each observation
  • optM5Col (str ('m5Optimal')) – The column name of the five-sigma-limiting depth if the observation had been taken on the meridian.
  • normalize (bool (False)) – If False, metric returns how many more observations would need to be taken to reach the optimal depth. If True, the number is normalized by the total number of observations already taken at that position.
  • magDiff (bool (False)) – If True, metric returns the magnitude difference between the achieved coadded depth and the optimal coadded depth.
Returns:

If magDiff is True, the magnitude difference between the optimal and actual coadded depth. If normalize is False (default), the number of additional observations (taken at the median depth) the survey needs to catch up to optimal. If normalize is True, the result is divided by the number of observations already taken; so if a 10-year survey returns 20%, it would need to run for 12 years to reach the same depth as a 10-year meridian survey.

Return type:

numpy.array

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.pairMetric module

class lsst.sims.maf.metrics.pairMetric.PairMetric(mjdCol=’expMJD’, metricName=’Pairs’, match_min=20.0, match_max=40.0, binsize=5.0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the number of pairs that could be used for Solar System object detection

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.phaseGapMetric module

class lsst.sims.maf.metrics.phaseGapMetric.PhaseGapMetric(col=’observationStartMJD’, nPeriods=5, periodMin=3.0, periodMax=35.0, nVisitsMin=3, metricName=’Phase Gap’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Measure the maximum gap in phase coverage for observations of periodic variables.

reduceLargestGap(metricVal)[source]

At each slicepoint, return the largest phase gap value.

reduceMeanGap(metricVal)[source]

At each slicepoint, return the mean gap value.

reduceMedianGap(metricVal)[source]

At each slicepoint, return the median gap value.

reduceWorstPeriod(metricVal)[source]

At each slicepoint, return the period with the largest phase gap.

run(dataSlice, slicePoint=None)[source]

Run the PhaseGapMetric.

Parameters:
  • dataSlice (numpy.array) – Data for this slice.
  • slicePoint (dict, optional) – Metadata for the slice (not used here).
Returns:

A dictionary of the periods used here and the corresponding largest gaps.

Return type:

dict

lsst.sims.maf.metrics.simpleMetrics module

class lsst.sims.maf.metrics.simpleMetrics.PassMetric(cols=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Just pass the entire array through

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.Coaddm5Metric(m5Col=’fiveSigmaDepth’, metricName=’CoaddM5’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the coadded m5 value at this gridpoint.

run(dataSlice, slicePoint=None)[source]
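
The coadded depth follows the standard flux-stacking relation m5_coadd = 1.25 * log10(sum(10^(0.8 * m5_i))); a standalone sketch with a worked example (hypothetical helper, not the library code):

import numpy as np

def coadd_m5(single_visit_m5):
    # Sum the visits in flux (5-sigma point-source limit) and convert back to magnitudes.
    return 1.25 * np.log10(np.sum(10.0 ** (0.8 * np.asarray(single_visit_m5))))

print(coadd_m5([24.0, 24.0]))  # two identical visits gain ~0.38 mag: ~24.38
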
class lsst.sims.maf.metrics.simpleMetrics.MaxMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the maximum of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.AbsMaxMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the max of the absolute value of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.MeanMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the mean of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.AbsMeanMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the mean of the absolute value of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.MedianMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the median of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.AbsMedianMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the median of the absolute value of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.MinMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the minimum of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.FullRangeMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the range of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.RmsMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the standard deviation of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.SumMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the sum of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.CountUniqueMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the number of unique values.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.CountMetric(col=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the length of a simData column slice.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.CountRatioMetric(col=None, normVal=1.0, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the length of a simData column slice, then divide by ‘normVal’.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.CountSubsetMetric(col=None, subset=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the length of a simData column slice which matches ‘subset’.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.RobustRmsMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Use the inter-quartile range of the data to estimate the RMS. Robust since this calculation does not include outliers in the distribution.

run(dataSlice, slicePoint=None)[source]
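
A standalone sketch of the IQR-based estimate (hypothetical helper):

import numpy as np

def robust_rms(values):
    # For a Gaussian, IQR = 1.349 * sigma, so IQR / 1.349 recovers sigma
    # while ignoring outliers in the tails.
    iqr = np.percentile(values, 75) - np.percentile(values, 25)
    return iqr / 1.349
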
class lsst.sims.maf.metrics.simpleMetrics.MaxPercentMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the percent of the data which has the maximum value.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.AbsMaxPercentMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the percent of the data which has the maximum absolute value of the data.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.BinaryMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return 1 if there is data.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.FracAboveMetric(col=None, cutoff=0.5, scale=1, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Find the fraction of data values above a given value.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.FracBelowMetric(col=None, cutoff=0.5, scale=1, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Find the fraction of data values below a given value.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.PercentileMetric(col=None, percentile=90, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Find the value of a column at a given percentile.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.NoutliersNsigmaMetric(col=None, nSigma=3.0, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the # of visits less than nSigma below the mean (nSigma<0) or more than nSigma above the mean of ‘col’.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.UniqueRatioMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the number of unique values divided by the total number of values.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.MeanAngleMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the mean of an angular (degree) simData column slice.

‘MeanAngle’ differs from ‘Mean’ in that it accounts for wraparound at 2pi.

run(dataSlice, slicePoint=None)[source]

Calculate mean angle via unit vectors. If unit vector ‘strength’ is less than 0.1, then just set mean to 180 degrees (as this indicates nearly uniformly distributed angles).
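
A standalone sketch of the unit-vector averaging described above (hypothetical helper, angles in degrees):

import numpy as np

def mean_angle_deg(angles_deg):
    # Average the angles as unit vectors.
    x = np.mean(np.cos(np.radians(angles_deg)))
    y = np.mean(np.sin(np.radians(angles_deg)))
    # 'Strength' of the mean vector; near zero means the angles are ~uniformly spread.
    if np.hypot(x, y) < 0.1:
        return 180.0
    return np.degrees(np.arctan2(y, x)) % 360.0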

class lsst.sims.maf.metrics.simpleMetrics.RmsAngleMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the standard deviation of an angular (degrees) simData column slice.

‘RmsAngle’ differs from ‘Rms’ in that it accounts for wraparound at 2pi.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.simpleMetrics.FullRangeAngleMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the full range of an angular (degrees) simData column slice.

‘FullRangeAngle’ differs from ‘FullRange’ in that it accounts for wraparound at 2pi.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.slewMetrics module

class lsst.sims.maf.metrics.slewMetrics.SlewContributionMetric(col=’actDelay’, activity=None, activeCol=’activity’, inCritCol=’inCriticalPath’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.slewMetrics.AveSlewFracMetric(col=’actDelay’, activity=None, activeCol=’activity’, idCol=’SlewHistory_slewCount’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.starDensity module

class lsst.sims.maf.metrics.starDensity.StarDensityMetric(rmagLimit=25.0, units=’stars/sq arcsec’, maps=[‘StellarDensityMap’], **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Interpolate the stellar luminosity function to return the number of stars per square arcsecond brighter than the rmagLimit. Note that the map is built from CatSim stars in the range 20 < r < 28.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.summaryMetrics module

class lsst.sims.maf.metrics.summaryMetrics.fOArea(col=’metricdata’, Asky=18000.0, Nvisit=825, metricName=’fOArea’, nside=128, norm=True, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Metric to calculate the FO Area.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.summaryMetrics.fONv(col=’metricdata’, Asky=18000.0, metricName=’fONv’, Nvisit=825, nside=128, norm=True, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Metric to calculate the FO_Nv.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.summaryMetrics.TableFractionMetric(col=’metricdata’, nbins=10)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the completeness (for many fields) and summarize how many fields have given completeness levels (within a series of bins). Works with completenessMetric only.

This metric is meant to be used as a summary statistic on something like the completeness metric. The output is DIFFERENT FROM SSTAR and is:

  element   matching values
  0         P == 0
  1         0 < P < 0.1
  2         0.1 <= P < 0.2
  3         0.2 <= P < 0.3
  …         …
  10        0.9 <= P < 1
  11        P == 1
  12        P > 1

Note the 1st and last elements do NOT obey the numpy histogram conventions.

run(dataSlice, slicePoint=None)[source]
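
A standalone sketch of this binning (hypothetical helper; the special first and last elements are handled outside the histogram, as noted above):

import numpy as np

def table_fraction(completeness, nbins=10):
    completeness = np.asarray(completeness)
    counts = np.zeros(nbins + 3, dtype=int)
    counts[0] = np.sum(completeness == 0)          # element 0: P == 0
    counts[nbins + 1] = np.sum(completeness == 1)  # element 11: P == 1
    counts[nbins + 2] = np.sum(completeness > 1)   # element 12: P > 1
    # Elements 1..10: histogram of 0 < P < 1 in nbins equal-width bins.
    inside = completeness[(completeness > 0) & (completeness < 1)]
    hist, _ = np.histogram(inside, bins=nbins, range=(0.0, 1.0))
    counts[1:nbins + 1] = hist
    return counts
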
class lsst.sims.maf.metrics.summaryMetrics.IdentityMetric(col=None, metricName=None, maps=None, units=None, metricDtype=None, badval=-666)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the metric value itself; this is primarily useful as a summary statistic for UniSlicer metrics.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.summaryMetrics.NormalizeMetric(col=’metricdata’, normVal=1, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the metric values divided by ‘normVal’. Useful for turning summary statistics into fractions.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.summaryMetrics.ZeropointMetric(col=’metricdata’, zp=0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Return the metric values with the addition of ‘zp’. Useful for altering the zeropoint for summary statistics.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.summaryMetrics.TotalPowerMetric(col=’metricdata’, lmin=100.0, lmax=300.0, removeDipole=True, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate the total power in the angular power spectrum between lmin/lmax.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.technicalMetrics module

class lsst.sims.maf.metrics.technicalMetrics.NChangesMetric(col=’filter’, orderBy=’observationStartMJD’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the number of times a column value changes. (useful for filter changes in particular).

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.technicalMetrics.MinTimeBetweenStatesMetric(changeCol=’filter’, timeCol=’observationStartMJD’, metricName=None, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the minimum time between changes of state in a column value. (useful for calculating fastest time between filter changes in particular). Returns delta time in minutes!

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.technicalMetrics.NStateChangesFasterThanMetric(changeCol=’filter’, timeCol=’observationStartMJD’, metricName=None, cutoff=20, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the number of changes of state that happen faster than ‘cutoff’. (useful for calculating time between filter changes in particular). ‘cutoff’ should be in minutes.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.technicalMetrics.MaxStateChangesWithinMetric(changeCol=’filter’, timeCol=’observationStartMJD’, metricName=None, timespan=20, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the maximum number of changes of state that occur within a given timespan. (useful for calculating time between filter changes in particular). ‘timespan’ should be in minutes.

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.technicalMetrics.TeffMetric(m5Col=’fiveSigmaDepth’, filterCol=’filter’, metricName=’tEff’, fiducialDepth=None, teffBase=30.0, normed=False, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Effective time equivalent for a given set of visits.

run(dataSlice, slicePoint=None)[source]
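
A standalone sketch of the effective-time idea, assuming a single filter and a user-supplied fiducial single-visit depth (hypothetical helper; the actual metric works per filter via filterCol):

import numpy as np

def t_eff(m5, fiducial_depth, teff_base=30.0, normed=False):
    m5 = np.asarray(m5)
    # Each visit contributes teff_base seconds, scaled by its depth relative to the fiducial depth.
    teff = teff_base * np.sum(10.0 ** (0.8 * (m5 - fiducial_depth)))
    if normed:
        # Fraction of the nominal open-shutter time actually achieved.
        teff = teff / (teff_base * m5.size)
    return teff
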
class lsst.sims.maf.metrics.technicalMetrics.OpenShutterFractionMetric(metricName=’OpenShutterFraction’, slewTimeCol=’slewTime’, expTimeCol=’visitExposureTime’, visitTimeCol=’visitTime’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the fraction of time the shutter is open compared to the total time spent observing.

run(dataSlice, slicePoint=None)[source]
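
A standalone sketch of the ratio (hypothetical helper, using the three time columns named in the signature):

import numpy as np

def open_shutter_fraction(exp_time, visit_time, slew_time):
    # Total exposure time divided by total wall-clock time (visits plus slews).
    return np.sum(exp_time) / np.sum(np.asarray(visit_time) + np.asarray(slew_time))
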
class lsst.sims.maf.metrics.technicalMetrics.CompletenessMetric(filterColName=’filter’, metricName=’Completeness’, u=0, g=0, r=0, i=0, z=0, y=0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Compute the completeness and joint completeness

reduceJoint(completeness)[source]

The joint completeness is just the minimum completeness for a point/field.

reduceg(completeness)[source]
reducei(completeness)[source]
reducer(completeness)[source]
reduceu(completeness)[source]
reducey(completeness)[source]
reducez(completeness)[source]
run(dataSlice, slicePoint=None)[source]

Compute the completeness for each filter, and then the minimum (joint) completeness for each slice.

class lsst.sims.maf.metrics.technicalMetrics.FilterColorsMetric(rRGB=’rRGB’, gRGB=’gRGB’, bRGB=’bRGB’, timeCol=’observationStartMJD’, t0=None, tStep=0.0004629629629629629, metricName=’FilterColors’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate an RGBA value that accounts for the filters used up to time t0.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.tgaps module

class lsst.sims.maf.metrics.tgaps.TgapsMetric(timesCol=’observationStartMJD’, allGaps=False, bins=numpy.arange(0.5, 60.0, 0.5), units=’days’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Histogram all of the time gaps.

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.transientMetrics module

class lsst.sims.maf.metrics.transientMetrics.TransientMetric(metricName=’TransientDetectMetric’, mjdCol=’observationStartMJD’, m5Col=’fiveSigmaDepth’, filterCol=’filter’, transDuration=10.0, peakTime=5.0, riseSlope=0.0, declineSlope=0.0, surveyDuration=10.0, surveyStart=None, detectM5Plus=0.0, uPeak=20, gPeak=20, rPeak=20, iPeak=20, zPeak=20, yPeak=20, nPrePeak=0, nPerLC=1, nFilters=1, nPhaseCheck=1, countMethod=’full’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Calculate what fraction of the transients would be detected. Best paired with a spatial slicer. We are assuming simple light curves with no color evolution.

Parameters:
  • transDuration (float, optional) – How long the transient lasts (days). Default 10.
  • peakTime (float, optional) – How long it takes to reach the peak magnitude (days). Default 5.
  • riseSlope (float, optional) – Slope of the light curve before peak time (mags/day). This should be negative since mags are backwards (magnitudes decrease towards brighter fluxes). Default 0.
  • declineSlope (float, optional) – Slope of the light curve after peak time (mags/day). This should be positive since mags are backwards. Default 0.
  • uPeak (float, optional) – Peak magnitude in u band. Default 20.
  • gPeak (float, optional) – Peak magnitude in g band. Default 20.
  • rPeak (float, optional) – Peak magnitude in r band. Default 20.
  • iPeak (float, optional) – Peak magnitude in i band. Default 20.
  • zPeak (float, optional) – Peak magnitude in z band. Default 20.
  • yPeak (float, optional) – Peak magnitude in y band. Default 20.
  • surveyDuration (float, optional) – Length of survey (years). Default 10.
  • surveyStart (float, optional) – MJD for the survey start date. Default None (uses the time of the first observation).
  • detectM5Plus (float, optional) – An observation will be used if the light curve magnitude is brighter than m5+detectM5Plus. Default 0.
  • nPrePeak (int, optional) – Number of observations (in any filter(s)) to demand before peakTime, before saying a transient has been detected. Default 0.
  • nPerLC (int, optional) – Number of sections of the light curve that must be sampled above the detectM5Plus threshold (in a single filter) for the light curve to be counted. For example, setting nPerLC = 2 means a light curve is only considered detected if there is at least 1 observation in the first half of the LC, and at least one in the second half of the LC. nPerLC = 4 means each quarter of the light curve must be detected to count. Default 1.
  • nFilters (int, optional) – Number of filters that need to be observed for an object to be counted as detected. Default 1.
  • nPhaseCheck (int, optional) – Sets the number of phases that should be checked. One can imagine pathological cadences where many objects pass the detection criteria, but would not if the observations were offset by a phase-shift. Default 1.
  • countMethod ({‘full’, ‘partialLC’}, defaults to ‘full’) – Sets the method of counting the max number of transients. If ‘full’, only light curves that fit entirely within the survey duration are counted. If ‘partialLC’, the max number of possible transients is taken to be the integer floor of the survey length divided by the transient duration.
lightCurve(time, filters)[source]

Calculate the magnitude of the object at each time, in each filter.

Parameters:
  • time (numpy.ndarray) – The times of the observations.
  • filters (numpy.ndarray) – The filters of the observations.
Returns:

The magnitudes of the object at each time, in each filter.

Return type:

numpy.ndarray
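
A standalone sketch of this simple rise/decline light-curve model (hypothetical helper; ‘peaks’ stands in for the per-filter uPeak, gPeak, etc. parameters):

import numpy as np

def light_curve(time, filters, peaks, peak_time=5.0, rise_slope=0.0, decline_slope=0.0):
    # 'time' is days since the transient started; 'peaks' maps filter name -> peak magnitude.
    mags = np.zeros(time.size, dtype=float)
    rising = time <= peak_time
    # Linear rise to peak and linear decline after peak (mags/day).
    mags[rising] = rise_slope * (time[rising] - peak_time)
    mags[~rising] = decline_slope * (time[~rising] - peak_time)
    for f in np.unique(filters):
        mags[filters == f] += peaks[f]
    return mags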

run(dataSlice, slicePoint=None)[source]

Calculate the detectability of a transient with the specified lightcurve.

Parameters:
  • dataSlice (numpy.array) – Numpy structured array containing the data related to the visits provided by the slicer.
  • slicePoint (dict, optional) – Dictionary containing information about the slicepoint currently active in the slicer.
Returns:

The total number of transients that could be detected.

Return type:

float

lsst.sims.maf.metrics.vectorMetrics module

class lsst.sims.maf.metrics.vectorMetrics.HistogramMetric(bins=None, binCol=’night’, col=’night’, units=’Count’, statistic=’count’, metricDtype=<class ‘float’>, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.VectorMetric

A wrapper to stats.binned_statistic

run(dataSlice, slicePoint=None)[source]
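
A short usage sketch of scipy.stats.binned_statistic, the underlying call (toy data, counting visits per night):

import numpy as np
from scipy import stats

night = np.array([1, 1, 2, 3, 3, 3])
bins = np.arange(0.5, 4.5, 1.0)
counts, bin_edges, _ = stats.binned_statistic(night, night, statistic='count', bins=bins)
print(counts)  # [2. 1. 3.]
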
class lsst.sims.maf.metrics.vectorMetrics.AccumulateMetric(col=’night’, bins=None, binCol=’night’, function=<ufunc ‘add’>, metricDtype=<class ‘float’>, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.VectorMetric

Calculate the accumulated stat

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.vectorMetrics.AccumulateCountMetric(col=’night’, bins=None, binCol=’night’, function=<ufunc ‘add’>, metricDtype=<class ‘float’>, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.AccumulateMetric

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.vectorMetrics.HistogramM5Metric(bins=None, binCol=’night’, m5Col=’fiveSigmaDepth’, units=’mag’, metricName=’HistogramM5Metric’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.HistogramMetric

Calculate the coadded depth for each bin (e.g., per night).

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.vectorMetrics.AccumulateM5Metric(bins=None, binCol=’night’, m5Col=’fiveSigmaDepth’, metricName=’AccumulateM5Metric’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.AccumulateMetric

run(dataSlice, slicePoint=None)[source]
class lsst.sims.maf.metrics.vectorMetrics.AccumulateUniformityMetric(bins=None, binCol=’night’, expMJDCol=’observationStartMJD’, metricName=’AccumulateUniformityMetric’, surveyLength=10.0, units=’Fraction’, **kwargs)[source]

Bases: lsst.sims.maf.metrics.vectorMetrics.AccumulateMetric

Make a 2D version of UniformityMetric

run(dataSlice, slicePoint=None)[source]

lsst.sims.maf.metrics.visitGroupsMetric module

class lsst.sims.maf.metrics.visitGroupsMetric.VisitGroupsMetric(timesCol=’observationStartMJD’, nightsCol=’night’, metricName=’VisitGroups’, deltaTmin=0.010416666666666666, deltaTmax=0.0625, minNVisits=2, window=30, minNNights=3, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

Count the number of visits per night within deltaTmin and deltaTmax.

reduceMaxSeqLunations(metricval)[source]

Count the max number of sequential lunations (unique 30 day windows) that contain at least one ‘group’: a set of more than minNVisits per night, with more than minNNights of visits within ‘window’ time period.

reduceMedian(metricval)[source]

Reduce to median number of visits per night.

reduceNLunations(metricval)[source]

Reduce to number of lunations (unique 30 day windows) that contain at least one ‘group’: a set of more than minNVisits per night, with more than minNNights of visits within ‘window’ time period.

reduceNNightsInWindow(metricval)[source]

Reduce to max number of nights with more than minNVisits, within ‘window’ over all windows.

reduceNNightsWithNVisits(metricval)[source]

Reduce to total number of nights with more than ‘minNVisits’ visits.

reduceNVisitsInWindow(metricval)[source]

Reduce to max number of total visits on all nights with more than minNVisits, within any ‘window’ (default=30 nights).

run(dataSlice, slicePoint=None)[source]

Return a dictionary of: the number of visits within a night (within delta tmin/tmax of another visit), and the nights with visits > minNVisits. Count two visits which are within tmin of each other, but which have another visit within tmin/tmax interval, as one and a half (instead of two).

So for example: 4 visits, where 1, 2, 3 were all within deltaTMax of each other, and 4 was later but within deltaTmax of visit 3 – would give you 4 visits. If visit 1 and 2 were closer together than deltaTmin, the two would be counted as 1.5 visits together (if only 1 and 2 existed, then there would be 0 visits as none would be within the qualifying time interval).

class lsst.sims.maf.metrics.visitGroupsMetric.PairFractionMetric(timesCol=’observationStartMJD’, metricName=’PairFraction’, minGap=15.0, maxGap=90.0, **kwargs)[source]

Bases: lsst.sims.maf.metrics.baseMetric.BaseMetric

What fraction of observations are part of a pair.

Note, an observation can be a member of more than one “pair”. For example, with t=[0, 5, 30], all observations would be considered part of a pair because they all have an observation within the given window to pair with (the observation at t=30 pairs twice).

run(dataSlice, slicePoint=None)[source]
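
A standalone sketch of the pairing logic (hypothetical helper; minGap/maxGap in minutes as in the signature):

import numpy as np

def pair_fraction(times_mjd, min_gap=15.0, max_gap=90.0):
    t = np.sort(times_mjd)
    lo, hi = min_gap / 60.0 / 24.0, max_gap / 60.0 / 24.0  # minutes -> days
    is_paired = np.zeros(t.size, dtype=bool)
    for i, ti in enumerate(t):
        dt = np.abs(t - ti)
        # Paired if any other observation falls within the [min_gap, max_gap] window.
        is_paired[i] = np.any((dt >= lo) & (dt <= hi))
    return np.sum(is_paired) / float(t.size)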

Module contents