id (int64, 0-5.38k) | issuekey (string, 4-16 chars) | created (string, 19 chars) | title (string, 5-252 chars) | description (string, 1-1.39M chars) | storypoint (float64, 0-100) |
---|---|---|---|---|---|
2,099 |
DM-10173
|
04/10/2017 13:34:45
|
Update bokeh to version 0.12.4
|
New version of bokeh released, updating from 0.12.3 to 0.12.4.
| 0.5 |
2,100 |
DM-10183
|
04/11/2017 09:58:55
|
Investigate why maxtasksperchild=1 causes mosaic.py to hang on pybind11 stack
|
{{mosaic.py}} uses {{multiprocessing.Pool}} to read catalogs with multiple cores. When this pool is initialized with {{maxtasksperchild=1}}, {{mosaic.py}} hangs indefinitely at a consistent point in the running---that is, running with the same arguments multiple times will freeze up in the same place. This is only a problem with the pybind11 version of the stack, as this behavior does not occur in the HSC stack, which is currently still wrapped with swig. The underlying cause of this should be investigated to make sure that there is not some deeper issue that might cause problems with parallelization elsewhere.
| 2 |
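The failure mode above can be reproduced in miniature; a minimal sketch of the pool configuration in question (the toy `read_catalog` worker is hypothetical, not the mosaic.py code):

```python
import multiprocessing


def read_catalog(index):
    # Stand-in for the per-CCD catalog read that mosaic.py farms out.
    return index * 2


if __name__ == "__main__":
    # maxtasksperchild=1 recycles the worker process after every task;
    # the ticket reports this hanging only with the pybind11-wrapped stack.
    with multiprocessing.Pool(processes=2, maxtasksperchild=1) as pool:
        results = pool.map(read_catalog, range(8))
    print(results)
```

With the swig-wrapped HSC stack this pattern completes normally, which is what makes the pybind11 hang suspicious.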
2,101 |
DM-10187
|
04/11/2017 11:45:49
|
Properly handle decimal.Decimal types in dbserv
|
The WISE dataset uses fixed precision decimals and dbserv doesn't properly handle them when serializing to JSON.
{code}
[2017-04-11 16:52:06,003] ERROR in app: Exception on /db/v0/tap/sync [POST]
Traceback (most recent call last):
  File "/lsst/stack/Linux64/flask/0.12/lib/python/Flask-0.12-py2.7.egg/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/lsst/stack/Linux64/flask/0.12/lib/python/Flask-0.12-py2.7.egg/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/lsst/stack/Linux64/flask/0.12/lib/python/Flask-0.12-py2.7.egg/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/lsst/stack/Linux64/flask/0.12/lib/python/Flask-0.12-py2.7.egg/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/lsst/stack/Linux64/flask/0.12/lib/python/Flask-0.12-py2.7.egg/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/lsst/stack/Linux64/dax_dbserv/12.1-2-gb29d23c+10/python/lsst/dax/dbserv/dbREST_v0.py", line 98, in sync_query
    return _response(response, status_code)
  File "/lsst/stack/Linux64/dax_dbserv/12.1-2-gb29d23c+10/python/lsst/dax/dbserv/dbREST_v0.py", line 138, in _response
    response = json.dumps(response)
  File "/lsst/stack/Linux64/miniconda2/4.2.12.lsst1/lib/python2.7/json/__init__.py", line 244, in dumps
    return _default_encoder.encode(obj)
  File "/lsst/stack/Linux64/miniconda2/4.2.12.lsst1/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/lsst/stack/Linux64/miniconda2/4.2.12.lsst1/lib/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
  File "/lsst/stack/Linux64/miniconda2/4.2.12.lsst1/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: Decimal('0.000') is not JSON serializable
{code}
| 1 |
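A common fix for the error above is to supply a `default` hook to `json.dumps`; a sketch, assuming float precision is acceptable for the serialized values (dbserv may have chosen a different representation, e.g. strings):

```python
import json
from decimal import Decimal


def decimal_default(obj):
    # Convert fixed-precision decimals to float for JSON output;
    # use str(obj) instead if exact precision must be preserved.
    if isinstance(obj, Decimal):
        return float(obj)
    raise TypeError(repr(obj) + " is not JSON serializable")


payload = {"flux": Decimal("0.000")}
print(json.dumps(payload, default=decimal_default))  # {"flux": 0.0}
```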
2,102 |
DM-10188
|
04/11/2017 13:57:42
|
Fix two drawing related issues with the new rotation scheme and check regions
|
Fix two drawing related issues with the new rotation scheme:
- marker resize handles are off when rotated
- footprint rotate handles are off when rotated
Check both with flip as well. Also check that the region code handles flip and rotate correctly. This ticket should be done as a pull request against branch dm-10065-canvas instead of dev.
| 8 |
2,103 |
DM-10195
|
04/11/2017 18:36:22
|
Improve comparison handling in Name and SpecificationSet classes of verify framework
|
I realized that {{__ne__}} is not automatically implemented when {{__eq__}} is implemented. This can cause unexpected behavior with the {{!=}} operator. This ticket implements {{__ne__}} in existing classes in verify. I also implement the {{__lt__}}, {{__le__}}, {{__gt__}}, and {{__ge__}} operators on the Name class to support Name sorting.
| 0.5 |
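The pattern described above can be sketched as follows (the `Name` class here is a hypothetical stand-in for the verify class, not its actual implementation):

```python
import functools


@functools.total_ordering
class Name:
    """Toy stand-in for the verify Name class."""

    def __init__(self, fqn):
        self.fqn = fqn

    def __eq__(self, other):
        return isinstance(other, Name) and self.fqn == other.fqn

    def __ne__(self, other):
        # Explicitly delegate to __eq__ so != always agrees with ==
        # (in Python 2, __ne__ is not derived from __eq__ automatically).
        return not self.__eq__(other)

    def __lt__(self, other):
        # total_ordering fills in __le__, __gt__, __ge__ from __eq__ and __lt__.
        return self.fqn < other.fqn


names = sorted([Name("b.y"), Name("a.x")])
print([n.fqn for n in names])  # ['a.x', 'b.y']
```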
2,104 |
DM-10197
|
04/12/2017 09:40:53
|
Clarify wording relating to flat-fielding in LDM-151
|
On reading LDM-151, I asked [~mfisherlevine] a question about how flat-fields were being applied. He's provided some updated text. On this ticket, we'll include it in the document.
| 1 |
2,105 |
DM-10206
|
04/12/2017 17:43:57
|
Fix obs_decam compatibility with 0-indexed HDUs
|
The recent implementation of DM-9952 caused the ingestCalibs config in obs_decam to break when ingesting defects (and possibly other calibration products too). A quick fix should adjust the default HDU indexing to begin at 0 instead of 1.
| 0.5 |
2,106 |
DM-10212
|
04/13/2017 02:54:16
|
Check memory locking in containers
|
Memory locking should be controlled inside containers
| 3 |
2,107 |
DM-10217
|
04/13/2017 17:42:34
|
Locate LSE-63 source and modernize
|
In order to annotate LSE-63 with requirements I need to locate the source from [~tony], add it to GitHub, and make it work with the new LaTeX look and feel.
| 1 |
2,108 |
DM-10220
|
04/14/2017 10:58:48
|
Miscellaneous corrections to C++ Doxygen guidelines
|
The following minor changes should be made to the C++ Documentation style guide: * The guide should note that multi-line brief descriptions must be prefixed by {{@brief}}. * The guidelines for covering multiple parameters in a single {{@tparam}} or {{@param}} tag claim that Doxygen can't handle parameters separated by both a comma and a space. Testing has shown that this is not the case; Doxygen correctly identifies what's a parameter and what's the first word of the description. * Likewise, parameters can be documented as {{\[in, out]}} rather than {{\[in,out]}}. * The guidelines for the {{@throws}} tag should say how to namespace-qualify exceptions: the rules given for {{@see}} work, but namespace abbreviations must not be used (Doxygen won't resolve the links correctly). This ticket *should not* be started before July 2017 to give time for more errors to be identified.
| 1 |
2,109 |
DM-10221
|
04/14/2017 11:23:07
|
Allow --id to use any key in the registry
|
As described in DM-5902 it is very useful to be able to specify arbitrary registry keys in the {{--id}} of e.g. {{constructBias.py}}. Unfortunately only keys that are returned by {{butler.getKeys}} for the appropriate dataset type are currently accepted, even if other keys would be sufficient to define the desired data. Please fix this.
| 1 |
2,110 |
DM-10225
|
04/14/2017 13:04:04
|
Ingest IMGTYPE along with other header keys
|
Ingest {{IMGTYPE}} along with other header keys. Add a translator so that these end up with useful/sane values.
| 1 |
2,111 |
DM-10229
|
04/14/2017 13:55:08
|
pipe_base tests try to write to obs_test
|
And then fail when they don't have permission. That is:
{code}
======================================================================
ERROR: testOutputs (__main__.ArgumentParserTestCase)
Test output directories, specified in different ways
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testArgumentParser.py", line 502, in testOutputs
    args = parser.parse_args(config=self.config, args=[DataPath, "--rerun", "foo"])
  File "/ssd/swinbank/pipe_base/python/lsst/pipe/base/argumentParser.py", line 509, in parse_args
    namespace.butler = dafPersist.Butler(inputs=inputs, outputs=outputs)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 351, in __init__
    self._createRepoDatas(inputs, outputs)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 572, in _createRepoDatas
    parents = self._getParentsList(inputs, outputs)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 538, in _getParentsList
    cfg = Storage.getRepositoryCfg(args.cfgRoot)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/storage/storageContinued.py", line 82, in getRepositoryCfg
    ret = Storage.storages[parseRes.scheme].getRepositoryCfg(uri)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/posixStorage.py", line 150, in getRepositoryCfg
    repositoryCfg = PosixStorage._getRepositoryCfg(uri)
  File "/ssd/lsstsw/stack_20170409/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/posixStorage.py", line 131, in _getRepositoryCfg
    with open(loc, 'r') as f:
IOError: [Errno 13] Permission denied: u'/ssd/lsstsw/stack_20170409/Linux64/obs_test/13.0-6-g7b63e3f/data/input/rerun/foo/repositoryCfg.yaml'
{code}
| 1 |
2,112 |
DM-10231
|
04/14/2017 14:25:56
|
FileForWriteOnceCompareSame does not respect umask
|
Viz:
{code}
$ umask
0022
$ python
Python 2.7.13 |Anaconda custom (64-bit)| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import lsst.daf.persistence.safeFileIo as safeFileIo
>>> with safeFileIo.FileForWriteOnceCompareSame("bar") as f:
...     f.write("foo")
...
>>>
$ ls -l bar
-rw------- 1 swinbank lsst_users 3 Apr 14 15:24 bar
{code}
This means in particular that {{repositoryCfg.yml}} files will be written with restrictive permissions. See also DM-10229.
| 1 |
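The usual cause of this class of bug is that the file is created as a securely-made temporary (mode 0600) and never re-chmodded. A sketch of umask-respecting creation with plain `os.open` (`write_respecting_umask` is illustrative, not the safeFileIo API):

```python
import os
import stat
import tempfile


def write_respecting_umask(path, data):
    # Create the file with mode 0o666 masked by the process umask,
    # instead of the 0o600 that mkstemp-style temporaries get.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o666)
    with os.fdopen(fd, "w") as f:
        f.write(data)


tmpdir = tempfile.mkdtemp()
target = os.path.join(tmpdir, "bar")
old_umask = os.umask(0o022)
try:
    write_respecting_umask(target, "foo")
finally:
    os.umask(old_umask)

mode = stat.S_IMODE(os.stat(target).st_mode)
print(oct(mode))  # 0o644 with a 022 umask
```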
2,113 |
DM-10233
|
04/14/2017 18:50:24
|
getInfoFromMetadata() throws away errors without warning.
|
getInfoFromMetadata() throws away errors without warning. This makes debugging writing translator functions very hard.
| 1 |
2,114 |
DM-10234
|
04/17/2017 06:24:26
|
Submit lsstDebug notes to Developer Guide
|
A while back, I wrote some (brief!) notes on {{lsstDebug}}, which I think are more helpful than https://lsst-web.ncsa.illinois.edu/doxygen/x_masterDoxyDoc/base_debug.html. Rather than having them get lost, let's see if we can add them to the Dev Guide. (Ultimately, they should be in a "framework" section of the pipelines docs, à la DMTN-030. But that doesn't exist yet.)
| 1 |
2,115 |
DM-10235
|
04/17/2017 07:23:13
|
Bug in coaddDriver when selecting images by PSF quality.
|
In DM-9855, the option to select input images based on PSF quality was added and made the default for assembleCoadd in obs_subaru. However, coaddDriver has its own function to select images and sets the one in assembleCoadd to null. The config in obs_subaru was overriding this, so the selection of images was being run twice, with strange results. The resolution is to set the image selector in coaddDriver to the PSF quality selector and set the assembleCoadd selector to null.
| 1 |
2,116 |
DM-10236
|
04/17/2017 07:29:57
|
Properly apply the meas_mosaic solution
|
As stated in DM-9862, meas_mosaic assumes the 0,0 for each CCD to be the lower left hand corner. LSST uses a different coordinate system so meas_mosaic rotates the wcs into the LSST frame when writing to disk. However the photometric correction is still in the HSC frame, so when applying the meas_mosaic correction we need to rotate the wcs back to the HSC frame. We can then create the photometric correction, rotate it into the LSST frame and apply it to the image. We then rotate the wcs back to the LSST frame.
| 2 |
2,117 |
DM-10241
|
04/17/2017 16:53:20
|
Need to fix the East arrow in the North/East compass when close to the polar region
|
In http://irsawebdev1.ipac.caltech.edu/firefly/;a=layout.showDropDown?visible=false, search for image: RA = 45, Dec = 89, select IRAS with 5 degree cutout; Add grid, add the North/East compass. The East arrow line should be perpendicular to the North arrow line.
| 1 |
2,118 |
DM-10242
|
04/17/2017 17:44:34
|
Stop using astrometry_net by default
|
There is code in {{pipe_tasks}} that uses {{astrometry_net}} to load catalogs and fit astrometric solutions. This ticket is to move to the new reference catalog format and newer astrometry fitter. Note that when this is done in {{pipe_tasks}} that we can remove {{meas_extensions_astrometryNet}} as a dependency. I suggest this wait on DM-2186, though that is not strictly necessary.
| 1 |
2,119 |
DM-10251
|
04/18/2017 12:12:26
|
weekly release w_2017_16 failed due to lsstsw@lsst-dev breakage
|
The jenkins node running as {{lsstsw@lsst-dev}} used for publishing eups distrib packages is unable to build from current master, thus preventing the weekly release process. Speculation is that some installed products were built from a mixture of the system gcc and devtoolset-3.
| 0.5 |
2,120 |
DM-10252
|
04/18/2017 13:34:08
|
getOutputId() assumes keys will exist, and doesn't use butler to retrieve them
|
In constructCalibs.py, getOutputId() assumes certain keys will exist within a dataId. Whilst this information will likely exist in the registry, it _shouldn't_ already exist in the dataId (that's not what dataIds are for), and these keys should be retrieved at this point using the butler.
| 1 |
2,121 |
DM-10265
|
04/19/2017 08:22:05
|
Include table persistence docs in Doxygen listing for afw
|
There's some documentation at https://github.com/lsst/afw/blob/1d3521f81d348fdc49b708342eba0be1caf5295b/doc/tablePersistence.dox which seems useful, but is basically impossible to find in Doxygen. Let's include it on the list at https://lsst-web.ncsa.illinois.edu/doxygen/x_masterDoxyDoc/afw.html.
| 1 |
2,122 |
DM-10267
|
04/19/2017 08:35:05
|
Port HSC support for PostgreSQL registries to LSST
|
At least some of the implementation was in $OBS_SUBARU_DIR/python/lsst/obs/subaru/ingest.py on the old HSC fork.
| 2 |
2,123 |
DM-10268
|
04/19/2017 09:20:33
|
Butler cannot read a repo using the realpath when it was created with a link
|
With the recent Butler changes (likely when the {{_parent}} link was removed), Butler can no longer read an output repo if the repo path was from a symlink. For example, on lsst-dev, I have a symlink {{/datasets/hsc/repo/rerun/private/hchiang2/}} -> {{/scratch/hchiang2/hscRerun/}} (therefore {{/datasets/hsc/repo/rerun/private/hchiang2/xxx}} == {{/scratch/hchiang2/hscRerun/xxx}}), so I processed some hsc data and the outputs went to my scratch space (e.g. {{processCcd.py /datasets/hsc/repo --id visit=18194 ccd=9 --rerun private/hchiang2/xxx}}). Then I tried to read the output repo. I can only read it through {{dafPersist.Butler("/datasets/hsc/repo/rerun/private/hchiang2/xxx/")}}, but not {{dafPersist.Butler("/scratch/hchiang2/hscRerun/xxx/")}}. The latter had errors finding its parent:
{code:java}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/software/lsstsw/stack/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 353, in __init__
    self._repos._buildLookupLists(inputs, outputs)
  File "/software/lsstsw/stack/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 219, in _buildLookupLists
    addRepoDataToLists(repoData, 'out')
  File "/software/lsstsw/stack/Linux64/daf_persistence/13.0-6-gf146911/python/lsst/daf/persistence/butler.py", line 211, in addRepoDataToLists
    addRepoDataToLists(self.byRepoRoot[parent], addParentAs)
KeyError: '/'
{code}
Above was with the {{w_2017_14}} stack. The {{repositoryCfg.yaml}} file in the repo has {{\_parents: \[../../../..\]}}
| 2 |
2,124 |
DM-10270
|
04/19/2017 09:38:50
|
isrTask does not provide config option for defects
|
Most things in isrTask are configurable, but whether defects are masked is not currently controlled by a config option, and furthermore, the default behaviour when a defect file isn't found is a failure. A config parameter should be added to allow skipping defect masking.
| 1 |
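The requested behavior can be sketched as a plain flag; `IsrConfig` and `run_isr` below are illustrative stand-ins, not the real isrTask/pex_config API:

```python
# A minimal sketch of guarding defect masking behind a config flag.


class IsrConfig:
    doDefect = True  # set False to skip defect masking entirely


def run_isr(config, defects):
    if not config.doDefect:
        return "defects skipped"
    if defects is None:
        # Without the flag, this hard failure was unavoidable.
        raise RuntimeError("no defect file found for this detector")
    return "defects masked"


config = IsrConfig()
config.doDefect = False
print(run_isr(config, defects=None))  # defects skipped
```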
2,125 |
DM-10271
|
04/19/2017 10:41:36
|
Fix order of operations when using temporary local backgrounds in detection
|
Our current implementation of the temporary local background approach to avoiding spurious detections near bright objects simply subtracts a local background from the full image before performing any detection steps. That can result in missed isolated-object detections and incorrect Footprints for large objects. Instead, we should:
1. Detect Footprints and Peaks.
2. Subtract the local background.
3. Detect Peaks within each Footprint again, and use the new set of Peaks instead of the old set if and only if there is at least one Peak in the new set.
This really ought to be fixed before the HSC internal release or major HSC processing at NCSA.
| 3 |
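The keep-only-if-nonempty rule in step 3 can be sketched on a toy 1-D "footprint" (`refine_peaks` and `is_local_max` are hypothetical helpers, not stack code):

```python
def is_local_max(values, i):
    left = values[i - 1] if i > 0 else float("-inf")
    right = values[i + 1] if i < len(values) - 1 else float("-inf")
    return values[i] > left and values[i] > right


def refine_peaks(pixels, old_peaks, local_background, threshold):
    # Step 2: subtract the local background inside the footprint.
    subtracted = [v - local_background for v in pixels]
    # Step 3: re-detect peaks on the subtracted image...
    new_peaks = [i for i, v in enumerate(subtracted)
                 if v > threshold and is_local_max(subtracted, i)]
    # ...and use them only if at least one peak survived.
    return new_peaks if new_peaks else old_peaks


pixels = [1, 9, 2, 6, 2]
old_peaks = [1, 3]
print(refine_peaks(pixels, old_peaks, local_background=5, threshold=2))   # [1]
print(refine_peaks(pixels, old_peaks, local_background=10, threshold=2))  # [1, 3] (old set kept)
```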
2,126 |
DM-10272
|
04/19/2017 10:57:14
|
scons build fails under miniconda3 on el6
|
We are unable to build on el6/py3 as the {{scons}} product's {{eupspkg.cfg.sh}} references {{python2.7}}. The system python interpreter on el6 is {{python2.6}}. If scons is still compatible with 2.6, and we want to support the combination of el6/py3 (???), this could be fixed by changing the interpreter to be {{python2}}.
{code:java}
[el6] ***** error: from /build/EupsBuildDir/Linux64/scons-2.5.0.lsst2+1/build.log:
[el6] ++ debug 'CC='\''cc'\'''
[el6] ++ [[ 3 -ge 2 ]]
[el6] ++ echo 'eupspkg.dumpvar (debug): CC='\''cc'\'''
[el6] eupspkg.dumpvar (debug): CC='cc'
[el6] + for _VAR in '"$@"'
[el6] + eval 'debug "CXX='\''$CXX'\''"'
[el6] ++ debug 'CXX='\''c++'\'''
[el6] ++ [[ 3 -ge 2 ]]
[el6] ++ echo 'eupspkg.dumpvar (debug): CXX='\''c++'\'''
[el6] eupspkg.dumpvar (debug): CXX='c++'
[el6] + for _VAR in '"$@"'
[el6] + eval 'debug "SCONSFLAGS='\''$SCONSFLAGS'\''"'
[el6] ++ debug 'SCONSFLAGS='\''opt=3'\'''
[el6] ++ [[ 3 -ge 2 ]]
[el6] ++ echo 'eupspkg.dumpvar (debug): SCONSFLAGS='\''opt=3'\'''
[el6] eupspkg.dumpvar (debug): SCONSFLAGS='opt=3'
[el6] + build
[el6] + python2.7 setup.py build
[el6] /build/EupsBuildDir/Linux64/scons-2.5.0.lsst2+1/scons-2.5.0.lsst2+1/ups/eupspkg.cfg.sh: line 5: python2.7: command not found
[el6] + exit -4
[el6] eups distrib: Failed to build scons-2.5.0.lsst2+1.eupspkg: Command:
[el6] source "/build/eups/bin/setups.sh"; export EUPS_PATH="/build"; (/build/EupsBuildDir/Linux64/scons-2.5.0.lsst2+1/build.sh) >> /build/EupsBuildDir/Linux64/scons-2.5.0.lsst2+1/build.log 2>&1 4>/build/EupsBuildDir/Linux64/scons-2.5.0.lsst2+1/build.msg
[el6] exited with code 252
[el6] Removing lockfile /build/.lockDir/exclusive-root.631
{code}
| 1 |
2,127 |
DM-10275
|
04/19/2017 11:23:04
|
Add save image button for Plotly charts
|
When a chart is a Plotly chart, an image download button should be present on its toolbar. Start with png and jpeg (which require no payment plan).
| 5 |
2,128 |
DM-10281
|
04/19/2017 14:54:41
|
compiler warnings in astshim
|
I'm seeing several variants of the following errors in afw builds against astshim:
{code}
include/astshim/Object.h:426:80: warning: format not a string literal and no format arguments [-Wformat-security]
    void set(std::string const & setting) { astSet(getRawPtr(), setting.c_str()); }
{code}
This happens virtually anywhere there's a call to an AST function with the results of {{.c_str()}}. I also see two instances of signed/unsigned comparison:
{code}
/home/jbosch/LSST/py2/stack/Linux64/astshim/master-gb637561bb9/include/astshim/detail/utils.h:43:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
    if (val1 != val2) {
{code}
This is with gcc 5.4.0 on Ubuntu 16.04.
| 2 |
2,129 |
DM-10287
|
04/20/2017 07:10:08
|
Add measurement plugin to store footprint area
|
This is a trivial feature request that would have also been very useful on DM-9962. In terms of effort it should be "in the noise" of everything else blocking DM-10266, so I'm going to try to get it in.
| 0.5 |
2,130 |
DM-10288
|
04/20/2017 08:03:33
|
afwImage.TanWcs.cast() not supported anymore in jointcalCoadd
|
The cast at https://github.com/lsst/jointcal/blob/master/python/lsst/jointcal/jointcalCoadd.py#L54 is no longer supported, and is probably unnecessary since the pybind11 migration.
| 1 |
2,131 |
DM-10289
|
04/20/2017 08:08:46
|
record.setValidPolygon(xxx) does not accept None as a valid input anymore
|
makeCoaddTempExp is crashing at https://github.com/lsst/pipe_tasks/blob/master/python/lsst/pipe/tasks/coaddInputRecorder.py#L148 because None is not accepted anymore as a valid input for record.setValidPolygon. Just skipping this instruction when there is no valid polygon set in the exposure seems to be a possible workaround
| 1 |
2,132 |
DM-10292
|
04/20/2017 11:56:22
|
The FrameSet returned by Transform.getFrameSet can change the contained FrameSet in Python
|
The {{ast::FrameSet}} returned by {{Transform.getFrameSet}} is not immutable in Python, and modifying it modifies the frame set owned by the transform. In C++ the returned FrameSet is immutable, but that is impossible to enforce in Python, so the obvious fix is for the Python wrapper to return a copy.
| 0.5 |
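The proposed fix can be sketched in pure Python (a toy `Transform`, not the afw/pybind11 wrapper):

```python
import copy


class Transform:
    """Toy stand-in for afw's Transform; not the real API."""

    def __init__(self, frame_set):
        self._frame_set = frame_set

    def getFrameSet(self):
        # Return a deep copy so Python callers cannot mutate the
        # frame set owned by the transform.
        return copy.deepcopy(self._frame_set)


t = Transform(frame_set={"frames": ["PIXELS", "SKY"]})
fs = t.getFrameSet()
fs["frames"].append("BOGUS")
print(t.getFrameSet()["frames"])  # ['PIXELS', 'SKY'] -- internal state untouched
```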
2,133 |
DM-10295
|
04/20/2017 14:51:29
|
Fix metaserv v0 table listing and deploy docker containers
|
metaserv table listing uses {{information_schema}}. It should utilize DBRepo and DDT_Table in metaserv.
| 1 |
2,134 |
DM-10302
|
04/20/2017 17:16:01
|
Rename "*_flux" fields to "*_instFlux" in SourceCatalogs
|
This is the implementation ticket for RFC-322. The implementation is as follows:
* Rename all of our {{*Flux_flux}}/{{*Flux_fluxSigma}} table fields to {{*Flux_instFlux}}/{{*Flux_instFluxSigma}} to hold the post-ISR counts.
* Add {{*Flux_flux}} and {{*Flux_mag}} fields for the post-calibrated flux (in Maggies) and magnitudes.
* Rename the {{InstFlux}} slot to {{GaussianFlux}}, and remove the {{InstMag}} slot.
* Add associated documentation to the above.
* Pass these changes on to the relevant database groups to update e.g. {{cat}}.
Implementing this will wait until Calib is removed from the stack and replaced by PhotoCalib (not yet scheduled, but likely within the next couple of months).
| 8 |
2,135 |
DM-10304
|
04/21/2017 08:41:07
|
Create tech note describing options for DM software releases
|
Following RFC-188 and discussion at the 2017-04-17 DMLT meeting, discuss options for the DM release process with stakeholders and create a technote summarising important considerations.
| 3 |
2,136 |
DM-10305
|
04/21/2017 10:31:28
|
Update firefly_widgets for changes in external dependencies
|
Update firefly_widgets to account for some changes in external dependencies: * Generalize the connection parameter, for working with Firefly servers now being deployed with {{suit}} or {{irsaviewer}} in the URL, as was done for firefly_client in DM-9843 * Make some minor syntax changes to work with ipywidgets 6.0 * Update example notebooks accordingly
| 2 |
2,137 |
DM-10329
|
04/24/2017 12:56:18
|
Write up Science Pipelines perspective on SuperTask interfaces and concepts
|
Describe (my) perception of the current state of the SuperTask interface from the Science Pipelines perspective to gather feedback from other Science Pipelines stakeholders and clarify any problems to the rest of the SuperTask working group.
| 2 |
2,138 |
DM-10332
|
04/24/2017 15:14:16
|
Test deblender with exact positions
|
[~pmelchior] and I believe that the main failure of the current deblender is due to the difference between a source's peak position and its integer position in the pixel grid, which causes the symmetry operator to fail. In DM-10189 we created a translation operator that can shift the position of the peak to a fractional position, and before we implement a solution to fit this position (DM-10310) it will be advantageous to use the exact position (known in a simulated image) and see if this resolves the problem. There is no reason to proceed with DM-10310 until we are sure that better positions and the translation operator will improve the deblender performance. Since this ticket requires re-writing the functions that calculate the likelihood, it will also include updating the intensity matrix ({{S}} in the NMF code) so that each source is in the center pixel. This suggestion by [~jbosch] allows each source to use the same monotonicity and symmetry operators, whose calculation was one of the more time-consuming parts of the new deblender. It should also make optimization easier once parts of the code are ported to C++.
| 5 |
2,139 |
DM-10333
|
04/24/2017 15:58:24
|
duplicate keys in obs_base/policy/datasets.yaml
|
For example, {{processCcd_config}} and {{characterizeImage_config}} both occur twice. Maybe others? Not a problem right now, since the values are the same for a given key (at least for the two examples above), but it could lead to errors in the future if, for example, the second entry overrides the first. We should probably use a yaml loader that raises an exception on duplicate keys, e.g., https://gist.github.com/pypt/94d747fe5180851196eb
| 1 |
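For PyYAML this needs a custom loader, as in the linked gist; the same pairs-hook idea can be demonstrated with the stdlib json module as a stand-in:

```python
import json


def reject_duplicates(pairs):
    # The pairs hook sees every key/value pair, including repeats,
    # before they are collapsed into a dict.
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError("duplicate key: %r" % key)
        seen[key] = value
    return seen


doc = '{"processCcd_config": 1, "processCcd_config": 2}'
try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as exc:
    print(exc)  # duplicate key: 'processCcd_config'
```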
2,140 |
DM-10336
|
04/24/2017 17:53:15
|
DM-10271 seems to have broken afw
|
As of DM-10271 I can no longer build afw on my Mac. I see errors such as:
{code}
In file included from python/lsst/afw/detection/peak.cc:33:
include/lsst/afw/table/python/catalog.h:170:13: error: no matching function for call to 'PySlice_GetIndicesEx'
    if (PySlice_GetIndicesEx((PySliceObject*)s.ptr(), self.size(), &start, &stop, &step, &length) != 0) {
        ^~~~~~~~~~~~~~~~~~~~
python/lsst/afw/detection/peak.cc:114:42: note: in instantiation of function template specialization 'lsst::afw::table::python::declareCatalog<lsst::afw::detection::PeakRecord>' requested here
    auto clsPeakCatalog = table::python::declareCatalog<PeakRecord>(mod, "Peak");
                                         ^
/Users/rowen/UW/LSST/lsstsw3/miniconda/include/python3.5m/sliceobject.h:43:17: note: candidate function not viable: no known conversion from 'PySliceObject *' to 'PyObject *' (aka '_object *') for 1st argument
PyAPI_FUNC(int) PySlice_GetIndicesEx(PyObject *r, Py_ssize_t length,
                ^
{code}
I am using a Python 3 lsstsw stack on macOS 10.12.4 with the current clang: Apple LLVM version 8.1.0 (clang-802.0.42)
| 0.5 |
2,141 |
DM-10338
|
04/24/2017 18:28:43
|
Mix of tabs and spaces breaks meas_base builds
|
Seems to have been introduced in DM-430; [~pgee], please make sure your editor is configured to use spaces instead of tabs.
| 0.5 |
2,142 |
DM-10343
|
04/25/2017 10:14:15
|
Update lsst-dev shared stacks to use devtoolset-6
|
Update both shared stacks on {{lsst-dev01}} to build against devtoolset-6. Since this won't be ABI compatible with the earlier (no-devtoolset) stacks, we'll need to create entirely new stacks, which should probably build over a weekend.
| 1 |
2,143 |
DM-10344
|
04/25/2017 12:17:38
|
The Validate failed to validate the integer range
|
In HistogramOption, the numBins field used the validator {{Validate.intRange.bind(null, 1, 500, 'numBins')}} to validate its value. However, when the empty string is entered into numBins, the validation still reports it as valid. The empty string is not in the range 1-500; by definition, it is not valid.
| 0.5 |
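The intended behavior can be sketched in Python (the real Firefly validator is the JavaScript {{Validate.intRange}}; `int_range` below is a hypothetical analogue):

```python
def int_range(min_val, max_val, field, value):
    """Return an error message, or None if value is a valid integer in range.

    A sketch of the intended behavior, not the Firefly implementation.
    """
    text = str(value).strip()
    # An empty string must be rejected outright, not treated as valid.
    if not text or not text.lstrip("+-").isdigit():
        return "%s must be an integer between %d and %d" % (field, min_val, max_val)
    n = int(text)
    if not (min_val <= n <= max_val):
        return "%s must be between %d and %d" % (field, min_val, max_val)
    return None


print(int_range(1, 500, "numBins", ""))    # error message (rejected)
print(int_range(1, 500, "numBins", "50"))  # None (valid)
```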
2,144 |
DM-10349
|
04/25/2017 22:20:14
|
Chart expression logarithm to be the same as other languages
|
In Firefly chart column expression, we use log() as 10-based logarithm, ln() as natural logarithm. I checked Python, Perl, C, C++, Java, the convention is to use log() as natural logarithm, and log10() as the 10-based logarithm. We need to make the change and document it.
| 2 |
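The convention being adopted matches Python's math module:

```python
import math

# In Python, log() is the natural logarithm and log10() is base 10,
# which is the convention Firefly is moving to.
assert math.isclose(math.log(math.e), 1.0)    # ln(e) = 1
assert math.isclose(math.log10(1000), 3.0)    # log10(1000) = 3
# Change of base: log10(x) = ln(x) / ln(10)
assert math.isclose(math.log10(100), math.log(100) / math.log(10))
print("ln(10) =", math.log(10))
```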
2,145 |
DM-10360
|
04/26/2017 14:46:09
|
Documentation for generating statistics for L2/catalog data
|
Statistics were generated at IN2P3 for correcting {{JOIN}} queries. The general process needs to be documented so we can optimize it in the future (see DM-9757 for reference).
| 2 |
2,146 |
DM-10370
|
04/26/2017 16:21:13
|
Refactor server side image code so that we cut memory size in half
|
Moving some of the image processing to the client side allows us to use less memory on the server. We currently keep two versions of the FITS data: a 2D array and a 1D array. Remove the need to hold on to the 2D array.
| 8 |
2,147 |
DM-10379
|
04/27/2017 10:44:53
|
After a catalog search with a globe coverage map, the catalog search doesn't work anymore.
|
In today's build, either http://irsawebdev1.ipac.caltech.edu/firefly/ or http://irsawebdev1.ipac.caltech.edu/irsaviewer/, a bug is found: Catalogs, WISE, m81, search; then Catalogs, WISE, m31, search. Now the coverage map is shown as a whole globe. Now select Catalogs: it won't work. If Images is selected, the search panel can neither proceed to search nor be cancelled. The error message:
{code}
CatalogSearchMethodType.jsx:179 Uncaught TypeError: Cannot read property 'x' of null
    at a (CatalogSearchMethodType.jsx:179)
    at Object.s [as reducerFunc] (CatalogSearchMethodType.jsx:395)
    at J (FieldGroupCntlr.js:384)
    at Z (FieldGroupCntlr.js:363)
    at p (FieldGroupCntlr.js:274)
    at combineReducers.js:132
    at d (createStore.js:179)
    at middleware.js:52
    at index.js:75
    at Object.dispatch (index.js:14)
{code}
| 2 |
2,148 |
DM-10381
|
04/27/2017 11:40:43
|
Enhance test for meas_deblender's clipFootprintToNonzeroImpl
|
In DM-10361, [~nlust] added a quick fix for a problem in this function which was breaking the stack demo. That fix has done the trick, but it would have been nice to be able to catch the problem before it propagated as far as it did. There's already a fine unit test for this function; extend it to catch the problematic case.
| 1 |
2,149 |
DM-10386
|
04/27/2017 15:54:53
|
Add Constructor documentation to Footprints
|
Somehow in my many rebasings I lost the doxygen documentation to the Footprint constructors. Add that documentation back in.
| 1 |
2,150 |
DM-10391
|
04/27/2017 23:54:05
|
jupyterlab technote
|
Work involved in SQR-018.
| 5 |
2,151 |
DM-10392
|
04/28/2017 06:27:45
|
Upgrade kubernetes/docker on cc-in2p3 cluster
|
Work performed in DM-10314, DM-10212 and DM-10042 has to be integrated on cc-IN2P3 nodes ccqserv100->124.
| 5 |
2,152 |
DM-10393
|
04/28/2017 09:59:14
|
correct variable name in sites.xml template
|
Need to fix a typo in a variable name.
| 0 |
2,153 |
DM-10398
|
04/28/2017 14:57:24
|
Analyze run metadata from validation runs
|
Analyze and make summary plots for the metadata collected in DM-12440. This may involve investigating further any failure modes that may have cropped up.
| 8 |
2,154 |
DM-10401
|
04/29/2017 20:16:52
|
getPackageDir raises RuntimeError instead of pex::exceptions::NotFoundError
|
The documentation for {{utils.getPackageDir}} claims {{@throw lsst::pex::exceptions::NotFoundError if desired version can't be found}}, but it actually raises {{RuntimeError}}:
{code}
In [1]: import lsst.utils

In [2]: lsst.utils.getPackageDir('dajfsfsa')
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-bac2a7aa8ca6> in <module>()
----> 1 lsst.utils.getPackageDir('dajfsfsa')

RuntimeError: Package dajfsfsa not found
{code}
We should either fix the docstring, or fix what is raised (likely at the pybind11 layer). We also need to fix the {{GetPackageDirTestCase}} unittest so that it tests against the correct exception being raised (it currently tests {{Exception}}, which is unhelpful).
| 1 |
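The unittest fix amounts to asserting the most specific promised exception; a sketch with stand-ins (`NotFoundError` and `get_package_dir` are toys, not the lsst.utils API):

```python
import unittest


class NotFoundError(RuntimeError):
    """Stand-in for lsst.pex.exceptions.NotFoundError."""


def get_package_dir(name):
    # Toy stand-in for lsst.utils.getPackageDir.
    raise NotFoundError("Package %s not found" % name)


class GetPackageDirTestCase(unittest.TestCase):
    def test_missing_package_raises_specific_error(self):
        # Asserting on Exception would pass even for the wrong error type;
        # assert the most specific exception the API promises instead.
        with self.assertRaises(NotFoundError):
            get_package_dir("dajfsfsa")


suite = unittest.defaultTestLoader.loadTestsFromTestCase(GetPackageDirTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```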
2,155 |
DM-10416
|
05/01/2017 17:14:15
|
Make lsst.afw.geom.Transform and SkyWcs pickleable
|
The new `lsst.afw.geom.Transform` classes and the subclass `SkyWcs` should be pickleable. One possibility is to pickle the string representation for the contained FrameSet.
| 0 |
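One way to sketch the string-serialization approach (a toy `SkyWcs`, not the afw class):

```python
import pickle


class SkyWcs:
    """Toy stand-in: pickle via the string serialization of the frame set."""

    def __init__(self, frameset_str):
        self.frameset_str = frameset_str

    def __reduce__(self):
        # Pickle the string representation and rebuild from it on load,
        # mirroring the suggestion to persist the contained FrameSet as text.
        return (SkyWcs, (self.frameset_str,))

    def __eq__(self, other):
        return self.frameset_str == other.frameset_str


wcs = SkyWcs("Begin FrameSet ... End FrameSet")
roundtripped = pickle.loads(pickle.dumps(wcs))
print(roundtripped == wcs)  # True
```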
2,156 |
DM-10421
|
05/01/2017 22:40:13
|
Publish LDM-294 draft to LSST the Docs
|
Publish [LDM-294|https://github.com/lsst/LDM-294] to LSST the Docs, with a landing page pointing to PDFs. This landing page is an MVP that will be improved and made generally available for all LaTeX-based DM documents.
| 1 |
2,157 |
DM-10426
|
05/02/2017 07:34:41
|
Identify stable version of kubernetes and docker on openstack
|
Kubernetes and Docker have provided quick updates which have broken both the OpenStack and CC-IN2P3 setups. A stable version and configuration have to be identified.
| 2 |
2,158 |
DM-10427
|
05/02/2017 10:05:22
|
Establish sha256 hashing framework for VO API and content signatures
|
The idea of introducing SHA-256 hashing in the VO context is the quick identification and verification of signatures for API method calls, as well as for returned content such as generated FITS images, which will come in handy for caching and asynchronous operation.
| 20 |
2,159 |
DM-10430
|
05/02/2017 10:51:18
|
Add time stamps to the standard outputs of BatchCmdLineTask
|
Add time stamps to the logs by default in {{ctrl_pool}} jobs. In slurm jobs, this changes the log format in the JOBNAME.oJOBID log files.
| 1 |
2,160 |
DM-10433
|
05/02/2017 11:51:47
|
Build date information at wrong place in small browser window
|
Usually the application build date is at the bottom of the browser window on the dark bar. But when the browser window is small, the build date information flows up to the bottom of the visible portion of the browser window. We need to fix it.
| 1 |
2,161 |
DM-10438
|
05/02/2017 13:20:08
|
Add DCR model data types
|
To use existing code in the stack for multi-band photometry, forced photometry, etc.. on the models generated with the DCR algorithm, many data types need to be added to obs_base. These will be added using the current name 'dcrModel' throughout, though it is expected that name might change following the eventual RFC to add the data type.
| 2 |
2,162 |
DM-10439
|
05/02/2017 13:43:26
|
bug in WCS match or compass overlay
|
Access PDAC http://lsst-sui-proxy01.ncsa.illinois.edu/suit, follow the steps and see the bug appear: # search CCD exposure images at position (0, 0) # filter on run = 5895 # turn on WCS match # turn on compass overlay # r-band image has east up while all other bands have east down # click on North up icon, r-band compass has North down Attachment is the portal after WCS match and north up actions.
| 2 |
2,163 |
DM-10441
|
05/02/2017 15:51:28
|
Image cutout is off the center when size specified in Ang degrees.
|
Following the update from DM-10364, David Shupe reported that cutout images (calexp) are off-center. Further diagnosis revealed this is only the case when the cutout size is specified in angular degrees, not in pixels.
| 2 |
2,164 |
DM-10452
|
05/03/2017 10:59:44
|
Create bboxFromIraf function in obs_base utils
|
Create (move) a utility function that creates a bbox from an IRAF box, from the obs package to obs_base so others can use it.
| 1 |
2,165 |
DM-10453
|
05/03/2017 12:16:49
|
Fix bugs in matchPessimisticB
|
Bugs have been found in the latest refactor of matchPessimisticB that affect the performance of the matcher. failedPatternList persists between runs of MatchPessimisticBTask when running a range of ccds in the same processCcd run. failedPatternList is not currently used properly in the matcher loop. The pair_id lookup table is not filled with the correct data for half of the matrix.
| 2 |
2,166 |
DM-10455
|
05/04/2017 10:50:08
|
Use pool_recycle=3600 for long-lived database connection pools
|
We get "mysql server has gone away" errors when dbserv hasn't been used in a while. This might not be a problem in the future, but it's a problem with pooled connections too. The fix is to use pool_recycle parameter when creating sqlalchemy: http://docs.sqlalchemy.org/en/latest/core/pooling.html 3600 seconds is probably appropriate.
| 1 |
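In sqlalchemy the fix is a single keyword, e.g. {{create_engine(url, pool_recycle=3600)}}. What that parameter does can be sketched in a few lines; {{RecyclingPool}} below is illustrative, not the sqlalchemy implementation:

```python
import time


class RecyclingPool:
    """Illustrative sketch of what pool_recycle does. A pooled connection
    older than `recycle` seconds is discarded and replaced, so the client
    never hands out a connection the server has already timed out
    ("mysql server has gone away")."""

    def __init__(self, connect, recycle=3600):
        self._connect = connect      # caller-supplied connection factory
        self._recycle = recycle
        self._conn = None
        self._created = 0.0

    def checkout(self):
        now = time.time()
        if self._conn is None or now - self._created > self._recycle:
            self._conn = self._connect()   # open a fresh connection
            self._created = now
        return self._conn


pool = RecyclingPool(connect=lambda: object(), recycle=3600)
first = pool.checkout()
second = pool.checkout()   # young connection: reused
pool._created -= 7200      # simulate the connection aging past the limit
third = pool.checkout()    # stale connection: replaced with a fresh one
```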
2,167 |
DM-10461
|
05/04/2017 14:58:01
|
WCS match can be a little off for plots that are nearly north
|
There is a roundoff error for computing rotation for plots that are nearly north
| 0.5 |
2,168 |
DM-10465
|
05/04/2017 17:55:03
|
Better error messages for position input
|
When a user inputs a position like (-34, 23) or (123, -91), Firefly treats it as an object name and gives the error message "Could not resolve Object: Enter valid object". It is hard for the user to understand exactly what was wrong with the input; in reality it is a position-out-of-range error, and Firefly should provide a better error message for this situation. Another issue: "12.3, 23.4" is valid input, but "12.3, 23.4 gal" is not, and we have an example of the latter in the input pane. Input "18.0, -23.0" has been interpreted as "18.0, -23.0 Equ J2000", but the latter is invalid input. "12 34 56.89 -23 45 16.56" has been interpreted as 12h00m00.00s, -23d00m00.0s Equ J2000.
| 8 |
2,169 |
DM-10477
|
05/05/2017 14:42:09
|
"WCS match" checkbox on/off behavior improvements
|
Current behavior: When there is more than one image displayed, turning the "WCS match" checkbox on will zoom and rotate all the images to the same pixel size and direction as the active image (selected, with orange color highlight); turning the checkbox off does nothing. Improvement: Turning off "WCS match" should reset the images to a zero rotation. Also include two bug fixes: - A rotation issue: clicking rotate north on north-up images flipped them south - When north is up, clicking north up makes the image rotate south.
| 2 |
2,170 |
DM-10483
|
05/08/2017 09:37:32
|
Create testdata_deblender Repo
|
Create a repository to store data for testing {{meas_deblender}}. At present this is the simulated data generated by [~rearmstr] in DM-9644 but later is likely to include public HSC data of particularly difficult blends.
| 1 |
2,171 |
DM-10486
|
05/08/2017 18:44:04
|
warpExposure and warpImage do not test correctly for dest = src
|
warpExposure and warpImage are supposed to throw an exception if destImage = srcImage. However, the unit test for this is mis-written: it checks for Exception being raised instead of a more specific exception, hiding other problems. The test case in question is {{testWarpIntoSelf}}. Worse, on my Mac, when I correct the test I find that it is possible to warp one image or exposure into itself, even though this results in altering the supposedly const input image and produces an incorrect destination image. Thus the C++ code that attempts to check for dest == src is not working.
| 3 |
2,172 |
DM-10490
|
05/09/2017 09:38:20
|
Cache camera in HscMapper
|
The lion's share of the time in instantiating a butler goes to building the camera object, because that involves parsing a {{Config}} with many lines, and a {{stat}} call for each line to get the traceback. Multiple butler instantiations (e.g., for multiprocessing or pipe_drivers) could benefit from caching the camera once it's built.
| 1 |
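The optimization described above can be sketched with a memoized builder; the function name and the dict standing in for the camera are assumptions, not the HscMapper API (the real body would parse the camera {{Config}}):

```python
from functools import lru_cache

build_count = {"n": 0}


@lru_cache(maxsize=1)
def get_camera():
    """Build the expensive camera object once per process and reuse it
    across butler/mapper instantiations."""
    build_count["n"] += 1                       # count expensive builds
    return {"name": "HSC", "nDetectors": 112}   # stand-in camera object


cam1 = get_camera()
cam2 = get_camera()   # cache hit: the same object, no second build
```

One caveat, which ties into the butler-caching ticket further down: callers share one object, so they must treat the cached camera as immutable.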
2,173 |
DM-10495
|
05/09/2017 13:14:03
|
turn on travis and flake8 protections in jointcal
|
The branch protection+travisCI+flake8 system will soon be made available for activation on LSST repositories. I've volunteered be a non-SQuaRE person to test the process, using the jointcal repo (which has a very small list of commit users right now). Once the "final draft" instructions are available, I'll follow them for jointcal and provide feedback to SQuaRE.
| 1 |
2,174 |
DM-10496
|
05/09/2017 15:24:58
|
test_chebyMap.py sometimes segfaults
|
test_chebyMap.py occasionally segfaults when run on my Mac (something on the order of 1 in 10 times). When this occurs it happens almost immediately and no output is printed. One thing to try is updating to the latest AST since a known bug in the Chebyshev handling of AstPolyTran has been fixed. If that doesn't solve the problem then more digging will be required (e.g. valgrind, but that would be much easier if I could write a pure C++ program that exhibited the behavior).
| 1 |
2,175 |
DM-10502
|
05/10/2017 09:27:47
|
Update NMF deblender to use new footprints
|
Development on the new deblender has been using an older version of the stack, before the new footprints were pushed. In order to complete DM-9784 it is useful to have access to the new {{Footprint}}/{{SpanSet}} API, so this ticket will update the user branch {{u/fred3m/deblender}} that holds the new deblender work to the current version of the stack.
| 1 |
2,176 |
DM-10505
|
05/10/2017 11:51:31
|
Robustify validate_drp fitting and catching errors.
|
First attempt to run `validate_drp` on HSC data failed with the following: {code} [...] 818 sources in ccd 0 visit 29352 1428 sources in ccd 1 visit 29352 1322 sources in ccd 2 visit 29352 1202 sources in ccd 3 visit 29352 Photometric scatter (median) - SNR > 100.0 : 18.6 mmag Traceback (most recent call last): File "/home/wmwv/local/lsst/validate_drp/bin/validateDrp.py", line 98, in <module> validate.run(args.repo, **kwargs) File "/home/wmwv/local/lsst/validate_drp/python/lsst/validate/drp/validate.py", line 104, in run **kwargs) File "/home/wmwv/local/lsst/validate_drp/python/lsst/validate/drp/validate.py", line 205, in runOneFilter photomModel = PhotometricErrorModel(matchedDataset) File "/home/wmwv/local/lsst/validate_drp/python/lsst/validate/drp/photerrmodel.py", line 235, in __init__ matchRef) File "/home/wmwv/local/lsst/validate_drp/python/lsst/validate/drp/photerrmodel.py", line 246, in _compute fit_params = fitPhotErrModel(mag[bright], magErr[bright]) File "/home/wmwv/local/lsst/validate_drp/python/lsst/validate/drp/photerrmodel.py", line 132, in fitPhotErrModel photErrModel, mag, mag_err, p0=p0) File "/ssd/lsstsw/stack_20170409/Linux64/miniconda2/4.2.12.lsst2/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 740, in curve_fit raise RuntimeError("Optimal parameters not found: " + errmsg) RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800. {code} 1. [X] Catch errors in computing things, recover, and do something reasonable. 2. [X] Make the fitting more robust.
| 2 |
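The first checklist item above can be sketched as a thin wrapper around the fit call. {{fitter}} stands in for {{scipy.optimize.curve_fit}}; the wrapper and the three-parameter NaN fallback are assumptions, not the validate_drp code:

```python
def fit_phot_err_model(mag, mag_err, fitter):
    """Attempt the photometric-error-model fit and, on failure, return
    NaN parameters instead of crashing the whole validate_drp run."""
    try:
        return fitter(mag, mag_err)
    except RuntimeError:
        # e.g. "Optimal parameters not found: ... maxfev = 800"
        return (float("nan"),) * 3


def failing_fitter(mag, mag_err):
    raise RuntimeError("Optimal parameters not found: maxfev = 800")


ok = fit_phot_err_model([20.0], [0.01], fitter=lambda m, e: (1.0, 2.0, 3.0))
bad = fit_phot_err_model([20.0], [0.01], fitter=failing_fitter)
```

Downstream code then checks for NaN parameters and reports the metric as unmeasured rather than aborting.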
2,177 |
DM-10508
|
05/10/2017 13:41:19
|
Remove writing of warped template added in DM-8145
|
I accidentally committed a line added for debugging, which saved the warped template in ImagePsfMatchTask. Delete this line.
| 1 |
2,178 |
DM-10511
|
05/10/2017 15:46:42
|
Apply Tim's comments to LDM-151
|
Apply comments
| 1 |
2,179 |
DM-10512
|
05/10/2017 16:34:47
|
Upgrade IPython on the shared stack to >=5.2
|
The current version of IPython on the shared stack on lsst-dev is 5.1.0 Unfortunately, 5.1 introduced a nasty bug that makes it ~impossible to quit ipdb in any remotely sane manner. All versions >=5.2 fix this. For details, see the first bullet point in the changelog for the 5.2 release on http://ipython.readthedocs.io/en/stable/whatsnew/version5.html#ipython-5-0
| 1 |
2,180 |
DM-10514
|
05/11/2017 10:03:41
|
Check qserv/qserv:dev works correctly
|
This container was broken, it will be re-generated and tested on Galactica and ccqservxxx.
| 2 |
2,181 |
DM-10516
|
05/11/2017 11:57:11
|
Default Null type for Avro schema may be incorrect
|
The current Avro schemas in lsst-dm/sample-avro-alert (and also in ZTF's schemas) can give a warning when decoded in Spark that says (it shows up, I think, for all nullable fields): {code} [WARNING] Avro: Invalid default for field cutoutScience: null not a [{"type":"record","name":"cutout","namespace":"ztf.alert","fields":[{"name":"fileName","type":"string"},{"name":"stampData","type":"bytes","doc":"jpeg"}]},"null"] [WARNING] Avro: Invalid default for field cutoutTemplate: null not a [{"type":"record","name":"cutout","namespace":"ztf.alert","fields":[{"name":"fileName","type":"string"},{"name":"stampData","type":"bytes","doc":"jpeg"}]},"null"] {code} This happens when I use the Spark package com.databricks:spark-avro_2.11:3.2.0, create a SparkSession, and read data this way: {code} data = (sparkSession .read .format("com.databricks.spark.avro") .load("ztf/with-schema/ztf_avro_packets")) {code} This might be because null needs to be in quotes and it isn't? I don't know. This story is to try quotes around the "null" type in the default type field and see if 1) the Spark warning goes away 2) data still serializes/deserializes with plain Python modules and nullable fields
| 1 |
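Besides trying quotes, one concrete thing worth checking: per the Avro specification, the default value of a union-typed field must match the *first* branch of the union, so a nullable field whose default is null normally lists "null" first; {{[record, "null"]}} with default null is exactly the shape Spark complains about. A sketch of the two orderings, with field and record names mirroring the ZTF example (the trimmed schemas are illustrative):

```python
import json

cutout = {"type": "record", "name": "cutout", "fields": [
    {"name": "fileName", "type": "string"},
    {"name": "stampData", "type": "bytes"},
]}

bad_field = {"name": "cutoutScience",
             "type": [cutout, "null"],   # null second: default null invalid
             "default": None}

good_field = {"name": "cutoutScience",
              "type": ["null", cutout],  # null first: default null valid
              "default": None}

schema_json = json.dumps(good_field)     # Python None serializes as null
```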
2,182 |
DM-10520
|
05/11/2017 14:43:12
|
"Restore to the defaults" icon doesn't work properly
|
Build firefly from today's dev (May 11, 2017; the last commit is dd74beba73c8ab99bc8751b329bcf7e36627e591). Open firefly, search image, target = m81, WISE and SDSS. Select the WISE image. Change to single image mode. Zoom out the WISE image. Click the "Restore to the defaults" button: the WISE image is restored to the original size, but right after that the SDSS image pops up. The restore button should only do the restore job, not change images.
| 3 |
2,183 |
DM-10521
|
05/11/2017 16:07:28
|
Create script to produce release performance table
|
Take the JSON output of a {{validate_drp}} run and produce a report suitable for a Characterization Metrics Report. See, e.g., https://pipelines.lsst.io/metrics/v13_0.html#metrics-v13-0 https://github.com/lsst/pipelines_lsst_io/blob/master/metrics/v13_0.rst and the attachment for an example to how to format things.
| 3 |
2,184 |
DM-10522
|
05/11/2017 16:32:28
|
Make dcrCoadds proper coadds
|
The dcrCoadd dataset type needs to be compatible with existing tools and functions that process coadds.
| 3 |
2,185 |
DM-10530
|
05/12/2017 14:29:28
|
don't set filter if the filter ID is not UNKNOWN (instead of testing if filter is None)
|
The filter is never None, but it can be unset; in that case the ID is Filter::UNKNOWN.
| 1 |
2,186 |
DM-10531
|
05/15/2017 11:08:16
|
Add DM-level policy on use of secure Web protocols to LDM-148
|
Implement the consensus decision on RFC-326 to state, as a DM policy, that all services facing the public Internet will use secure protocols (https:, in particular, for Web services), unless a specific technical justification for not doing so is accepted by the DM-CCB and the project ISO.
| 1 |
2,187 |
DM-10532
|
05/15/2017 11:15:54
|
Pursue project-level policy on secure external protocols
|
We would like to recommend that the DM-level policy adopted under RFC-326 be promoted to a project-level policy. This action is to begin work on that, which might take the form of submitting an LCR advocating adopting this as an OSS-level requirement, or as a separate policy document. The basic policy proposed is that all services facing the public Internet should use secure protocols, unless a specific, technically justified exception is adopted via formal change control and with sign-off from the ISO.
| 2 |
2,188 |
DM-10541
|
05/16/2017 09:36:29
|
Add properties to image classes
|
This is the most obvious and useful application of RFC-279: - Add {{image}}, {{mask}}, and {{variance}} properties to {{MaskedImage}} (and {{Exposure}}, for convenience). - Add a {{maskedImage}} property to {{Exposure}}. - Add {{array}} properties to {{Image}} and {{Mask}}. These will cut down on unnecessary characters and save us from having to make temporary variables in order to assign to an image component.
| 1 |
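The change proposed above amounts to forwarding existing getters through read-only properties. A toy sketch (the string plane contents are placeholders; the real classes wrap C++ objects via pybind11):

```python
class MaskedImage:
    def __init__(self):
        self._image, self._mask, self._variance = "img", "msk", "var"

    def getImage(self):
        return self._image

    def getMask(self):
        return self._mask

    def getVariance(self):
        return self._variance

    # One line each: the property just forwards to the getter, so
    # mi.image replaces mi.getImage() with no behavior change.
    image = property(getImage)
    mask = property(getMask)
    variance = property(getVariance)


mi = MaskedImage()
```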
2,189 |
DM-10542
|
05/16/2017 15:03:35
|
Replace XYTransform::linearizeTransform
|
Many clients of {{XYTransform}} use its methods {{linearizeForwardTransform}} and {{linearizeReverseTransform}}. We do not yet have analogous code for {{Transform}}. This functionality can be implemented as either a method on {{Transform}} or a separate function; the latter would provide better encapsulation.
| 2 |
2,190 |
DM-10546
|
05/16/2017 23:28:28
|
meas.mosaic.updateExposure.applyMosaicResultsExposure will not fail if there is no mosaic solution
|
Furusawa-san pointed out that coaddDriver can produce outputs even if mosaicking has not been run, despite {{doApplyUberCal=True}} (which is the default value for HSC). This appears to be due to allowing null values of the {{wcs}} and {{ffp}} in {{updateExposure.py}}. We should allow the functions in there to fail if the appropriate values are not available, or the user can be deceived as to what corrections have actually been applied.
| 1 |
2,191 |
DM-10549
|
05/17/2017 10:02:38
|
Use all simulated peaks
|
One of the shortcomings of the NMF deblender is that undetected sources wreak havoc on detected sources, where the likelihood causes detected sources to account for the extra flux with no local peak to attribute it to. [~pmelchior] and I have discussed methods to work around this problem but in the short term it will be easier to just use the exact locations of all of the objects from the simulated catalog, as opposed to only fitting the peaks detected by the pipeline. This ticket will give the user the option of using a set of simulated peaks instead of the detected peaks.
| 3 |
2,192 |
DM-10550
|
05/17/2017 10:10:24
|
Convert comparison text to plots
|
A lot of the data comparisons, like {{compareMeasToSim}}, display a lot of information as text that would be easier to digest (and more compact) in the form of plots, which need to be created.
| 2 |
2,193 |
DM-10551
|
05/17/2017 10:14:04
|
Display templates together
|
It will be useful to have a script that can compare peaks using several different deblender methods side-by-side.
| 2 |
2,194 |
DM-10552
|
05/17/2017 10:28:51
|
Upgrade display_firefly to work with more servers
|
Implement changes to display_firefly to enable it to work with more servers, and fix some issues found in testing. * Add a {{basedir}} parameter, to make use of the capability added in DM-9843, to allow the firefly backend to be used with the PDAC server and the public IRSA server. * Pass keyword arguments to the FireflyClient constructor. * Update the image and mask handling, in part for changes due to the pybind11 conversion, and to restore mask labeling. In afw.display: * Fix the \_\_getattr\_\_ method of the display interface * Reorder tests so that displays are not closed prematurely
| 2 |
2,195 |
DM-10553
|
05/17/2017 14:36:15
|
Correct simulation positions
|
There are two issues with the simulation positions, both of which are likely related. The images created in DM-9644 are split into 4 quadrants, each of which is generated separately. The intensity values in the simulated tables are only the size of the quadrants, not the entire images, so the x,y positions in the table do not correspond to the position of a source in its intensity image. The other issue is that the x,y positions in the catalog can be off by 1/2 pixel, which might depend on the quadrant chosen. In this case all of the positions will need to be adjusted accordingly. This ticket will use the position of the source to determine the quadrant it is located in to adjust the x,y positions to the correct positions in the quadrant, and make any adjustments needed to properly align the sources with the image.
| 1 |
2,196 |
DM-10557
|
05/18/2017 12:15:42
|
Use SingleFrameMeasurementTask to test deblender results
|
Because the simulated galaxies we are using to configure the deblender are extended with very large wings, comparing the deblended flux with the simulated flux is deceptively difficult. The problem is that a large percentage (as much as 50%) of the simulated flux can lie below the noise level, where it is truncated in the deblender, making it difficult to compare the total flux of each object. It was recommended by [~jbosch] that running {{SingleFrameMeasurementTask}} on both the deblender output and simulated images will allow us to make a better comparison of the results. This ticket will implement a set of tests using {{SingleFrameMeasurementTask}}.
| 2 |
2,197 |
DM-10558
|
05/18/2017 13:08:47
|
disable or remove butler caching
|
The reused-object aspect of caching (documented in the LDM-463 draft) is causing confusion because shared objects can be mutated, and this is not what some developers would expect. Disable it and/or remove the code for now. If it's wanted, we can have an RFC or RFD about how to handle mutable objects. Also update LDM-463 (remove the bit about caching).
| 1 |
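The hazard described above can be shown in a few lines: a cache that hands every caller the *same* object lets an in-place edit by one caller silently leak into all the others. {{get_cached}} is illustrative, not the Butler API:

```python
_cache = {}


def get_cached(dataset_id, load):
    """Return the cached object for dataset_id, loading it on first use."""
    if dataset_id not in _cache:
        _cache[dataset_id] = load()
    return _cache[dataset_id]        # same object handed to every caller


a = get_cached("calexp", load=lambda: {"wcs": "original"})
b = get_cached("calexp", load=lambda: {"wcs": "original"})
a["wcs"] = "mutated"                 # surprise: b now sees "mutated" too
```

Returning a deep copy on each get, or disabling the cache entirely as the ticket suggests, avoids the surprise at the cost of repeated loads.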
2,198 |
DM-10559
|
05/18/2017 15:05:49
|
afw.image.makeWcs() returns null pointer without warning
|
Line 49 of https://github.com/lsst/afw/blob/master/src/image/makeWcs.cc should log a warning before returning a None wcs; without it, problems are hard to track down.
| 1 |