id | issuekey | created | title | description | storypoint |
---|---|---|---|---|---|
1,899 | DM-9314 | 02/07/2017 11:34:39 | weekly tag/release build of w_2017_6 failed: flask | The new flask is not building on master and this is causing the weekly tag/release build to fail. It looks like this is being triggered by DM-9268. | 0.5 |
1,900 | DM-9317 | 02/07/2017 13:14:20 | Creates online help for SUIT | Migrate Firefly's onlinehelp system so that it can be used by SUIT. - created github repository: https://github.com/lsst/suit-onlinehelp/ - copy content from existing irsaviewer online help. - change build/config script for SUIT. - setup build/install environment for pdac To build and install online help on pdac: - log into sui-tomcat01 - switch to suiadmin - source /hydra/cm/env/env.sh - cd /hydra/cm/suit-onlinehelp - git pull - gradle install | 2 |
1,901 | DM-9326 | 02/07/2017 18:11:03 | Clean up time series viewer | Please, to be consistent with the new name of the Time Series viewer: - Rename {{lc.html}} to {{ts.html}} (Time Series replaces Light-Curve) - {{wise}} should be upper case Thanks. | 1 |
1,902 | DM-9327 | 02/07/2017 18:50:35 | Create IFE app for Time series viewer | Need a Time Series viewer app for IRSA, under the IFE repository. Please create the app with no logo (for now - later we will probably need a new icon). | 2 |
1,903 | DM-9334 | 02/08/2017 11:52:58 | monocamIngestImages.py is vestigial | obs_monocam provides a [{{monocamIngestImages.py}}|https://github.com/lsst/obs_monocam/blob/786219475ffd0e022c98234d5b00a5c30ca668c5/bin.src/monocamIngestImages.py] which attempts to import {{lsst.obs.monocam.ingest.MonocamIngestTask}}. {{MonocamIngestTask}} [was removed in {{7862194}}|https://github.com/lsst/obs_monocam/commit/786219475ffd0e022c98234d5b00a5c30ca668c5#diff-16d2f190c33eaa0d3ef05279458bd775L95]. I conclude that {{monocamIngestImages.py}} is a confusing remnant which should be removed. | 1 |
1,904 | DM-9336 | 02/08/2017 13:23:41 | Complete LCR-836 (typographic correction to LSE-69) | LCR-836 (already created) is for a long-pending typographic correction to LSE-69 to get it to match LSE-130's section headings. The work on this ticket is to prepare the new docgen and submit it to the CCB. | 2 |
1,905 | DM-9342 | 02/08/2017 15:36:46 | histogram option update (input dialog and server support) | For uniform binning (fixed bin size), there are two different ways to give the input: 1. number of bins (the current option only) OR 2. bin width. The label may need to be changed as well. [March-2-2017] To offer a choice between number of bins and bin width, two radio buttons and one text box are added. By default, the number of bins is selected. When a radio button is selected, a new value, either number of bins or bin width, should be entered and validated. Then the histogram will be updated accordingly. Since no data range information is available before sending the request, the bin width cannot be validated correctly; thus an exception is thrown if the bin width is larger than the data range. [March-16-17] Based on the modified requirements, the UI is: # Add two radio buttons and two input text boxes next to the two radio buttons, one of which is number of bins and the other bin width. # Add two input text boxes below the two radio/text boxes, one for min and the other for max. After several discussions and iterations, we added the following: # For a single column histogram, since the data min/max is stored in the column variable array, they can be used to pre-fill the min and max. # The bin width is calculated based on the number of bins, min, and max, and then the bin width is pre-filled. # The bin width is recalculated each time the number of bins, min, or max changes. # When number of bins is selected, the bin width text box is disabled; when bin width is selected, the number of bins text box is disabled. | 8 |
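The bin-width/bin-count relationship that DM-9342 validates is simple arithmetic; the following is a minimal sketch of that validation (function names are mine, not Firefly's), including the "bin width larger than the data range" error the ticket mentions:

```python
import math

def bin_width(nbins, vmin, vmax):
    """Bin width implied by a bin count over the data range [vmin, vmax]."""
    if nbins <= 0 or vmax <= vmin:
        raise ValueError("need nbins > 0 and vmax > vmin")
    return (vmax - vmin) / nbins

def num_bins(width, vmin, vmax):
    """Bin count implied by a fixed bin width; the width must not exceed the range."""
    if width <= 0 or width > (vmax - vmin):
        raise ValueError("bin width must be positive and no larger than the data range")
    return math.ceil((vmax - vmin) / width)
```

Recomputing `bin_width(...)` whenever the number of bins, min, or max changes gives the pre-fill behavior the ticket describes.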
1,906 | DM-9346 | 02/08/2017 16:31:38 | label change requests for Firefly/IRSA viewer | The following changes will be in Firefly: * histogram Y axis label: Number * Charts scatter/histogram option dialog, "search" button: "OK" * Choose columns dialog "Set column or expression" button: "OK" The following changes will be in IRSA Viewer: | 2 |
1,907 | DM-9348 | 02/08/2017 17:09:38 | LSST - time series viewer results (part 2, UI work) | This is the second part of changes to support LSST time series, which should handle band selection in the UI. We need to support mission specific input, from which the x and y columns are derived, and automatic filtering of the raw table depending on the input parameters (band selection). https://github.com/Caltech-IPAC/firefly/pull/313 Attached to this ticket, please find the sample LSST multi-band table for object id 2990683841366709. | 8 |
1,908 | DM-9349 | 02/08/2017 19:01:00 | Filtering selected row (ROWID filter) gives wrong results (dev version) | In the Firefly (+IRSAViewer) development version, when the user selects table rows and filters them out, the result table does not match the rows selected before applying the filter. OPS is fine though. Steps to reproduce: do a catalog search, e.g., 2mass, 100". From the result table, select a row and apply the filter. Then check that the resulting row is not the one you selected. TG 02/10/2017 The row returned is actually the previous row. The related issue is filtering a point from an image. A previous row is returned as a result. | 2 |
1,909 | DM-9350 | 02/08/2017 22:38:50 | Possible to provide a drawing of the Stripe 82 sky region for context? | I am wondering if it would be straightforward to provide a "footprint" outline of the Stripe 82 area on the sky for use in highly zoomed-out context images in PDAC. If this is an easy task, in what format would we need to specify the coordinates of the outline? I think we can do a calculation of the four corners of all the coadded images and produce a ds9 region file for the outline. ============================================== How to generate the region files attached here: 1) Get four-corners data for all the coadd FOVs for SDSS: curl -o sdssFourCorners.json -d 'query=SELECT+corner1Ra,corner1Decl,corner2Ra,corner2Decl,corner3Ra,corner3Decl,corner4Ra,corner4Decl+FROM+sdss_stripe82_00.DeepCoadd;' http://lsst-qserv-dax01.ncsa.illinois.edu:5000/db/v0/tap/sync 2) Convert the json file to csv using an online tool: http://www.convertcsv.com/json-to-csv.htm 3) Sort the csv file by corner1Ra 4) Remove "^M" 5) Convert the csv file to a region file by adding "polygon(" and ") #color=green" and a header "fk5" 6) To plot only 1% or 10% of FOVs: awk 'NR == 1 || NR % 100 == 0' input.reg > output_1pct.reg awk 'NR == 1 || NR % 10 == 0' input.reg > output_10pct.reg The region file can be loaded into Firefly. | 2 |
1,910 | DM-9353 | 02/09/2017 11:03:26 | Update configuration for HSC calib construction | DM-9186 changed how the ISR configuration is set, but the configuration for calib construction wasn't updated to match. | 0.5 |
1,911 | DM-9356 | 02/09/2017 12:08:42 | Summarize plans & questions for Calib Telescope work | Summarize the plans for processing data from the Calibration Telescope and any open questions that remain about the approach to be taken in bullet-point form. Provide them to [~mfisherlevine] for incorporation into LDM-151. | 2 |
1,912 | DM-9358 | 02/09/2017 14:35:34 | Fix setting of calib_psf_candidate flag to match docstring description | The docstring for {{calib_psf_candidate}} reads: {code} self.candidateKey = schema.addField( "calib_psf_candidate", type="Flag", doc=("Flag set if the source was a candidate for PSF determination, " "as determined by the star selector.") ) {code} However, if the reserve fraction is set to a non-zero value, an object that was selected as a PSF candidate by the star selector but then flagged as reserved will have the {{calib_psf_candidate}} flag set to *False*. This is inconsistent with the docstring (and erroneously implies the object was not deemed a suitable candidate). Please update the setting of the flag to reflect its docstring description. | 3 |
1,913 | DM-9359 | 02/09/2017 14:57:18 | Restoring coverage image after cropping it shows an empty tab entitled 'FITS data' | I've found something weird after doing a catalog search, and I think it is not expected. When doing a catalog search, the result tri-view shows image, table, and xy-plot. The image is called 'Coverage'. Using the selection tool and cropping part of the image, the result is a smaller image, which is correct. Then, when trying to go back and restore the default image (click the "Restore" button on the toolbar), the result is 2 tabs: one with the coverage restored correctly and another one called 'FITS data', empty and active. I think the second tab shouldn't be displayed. | 1 |
1,914 | DM-9361 | 02/09/2017 15:04:52 | Update Calib Telescope data processing section in LDM-151 | Based on the input provided by [~aguyonnet] in DM-9356, update LDM-151 to provide a complete description of the plans for processing data from the Calibration Telescope. | 2 |
1,915 | DM-9363 | 02/09/2017 15:06:43 | Construct obs package for 0.9m at CTIO | Data from the 0.9m telescope at CTIO is being used to prototype the processing pipeline for the Calibration Telescope. In order to ingest it into the stack, we'll need an appropriate obs package ("obs_ctio0m9"). Please create one. | 8 |
1,916 | DM-9364 | 02/09/2017 15:09:31 | wcs creation is mandatory | Line 1044 of cameraMapper.py tries to attach a wcs and crashes if the necessary keywords are not found in the metadata; this shouldn't be the case. One could replace this with a try block to give every exposure a dummy wcs, but this is probably not the correct course of action - on failure the wcs should _probably_ just not be set, allowing a None wcs to exist. Whatever the fix, this should log a warning. | 2 |
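The fix DM-9364 suggests — warn and leave the WCS unset rather than crash — can be sketched as below. `attach_wcs` and its arguments are hypothetical stand-ins for the real cameraMapper machinery, not its API:

```python
import logging

log = logging.getLogger("cameraMapper")

def attach_wcs(exposure, metadata, make_wcs):
    """Try to build a WCS from metadata; on failure, warn and leave it as None.

    exposure/make_wcs are illustrative stand-ins for the real mapper objects.
    """
    try:
        exposure.wcs = make_wcs(metadata)
    except Exception as err:
        # Missing keywords should not abort the read; a None wcs is allowed.
        log.warning("Could not create WCS from metadata; leaving WCS unset: %s", err)
        exposure.wcs = None
    return exposure
```

Callers then check `exposure.wcs is None` instead of catching a crash deep inside the mapper.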
1,917 | DM-9371 | 02/09/2017 17:13:10 | Display image cutout in LSST PDAC | We need to provide an option to display the image cutout, using the DAX API. | 2 |
1,918 | DM-9379 | 02/10/2017 07:26:31 | Changes to lsstsw to rename ctrl_events package to legacy-ctrl_events | This work removes: {code} ctrl_events {code} from {{etc/repos.yaml}} and moves the repo to {code} lsst-dm/legacy_ctrl_events {code} | 0.5 |
1,919 | DM-9381 | 02/10/2017 08:58:05 | Add ability to highlight data subsets in analysis plots | Highlighting objects based on a "third" parameter, e.g. a flag setting (other than spatial, for which we are already producing diagnostic figures), can be very useful in diagnosing pathologies (see, e.g. DM-9252). This ticket is to add this ability to the analysis plotting scripts. | 2 |
1,920 | DM-9382 | 02/10/2017 10:32:17 | Phase folded table content fix | Fix the phase folded table created for the time series viewer (DM-8670): - fix the time column value (mjd) in the expanded cycle part (the phase folded table is made to contain the data rows for two phase cycles): the value of the 'mjd' column in the expanded part is duplicated from the original raw table. - make the creation of the phase folded table derive from the full raw table, not just the partial raw table shown on the page. | 2 |
1,921 | DM-9383 | 02/10/2017 10:47:17 | Investigate propagation of visit flags for certain patches in HSC RC processing | For certain diagnostic plots, it is useful to select on specific subsets of the data. An example subset would be the objects that were actually used in the PSF modeling, i.e. those with flag *calib_psfUsed=True*. When plotting this subset for the coadds of the LSST stack processing of the HSC RC dataset (DM-6816), a large number of patches have no objects for which this flag is set. While the list is not expected to be identical to the list from the visit processing (the flag is propagated based on the fraction of visits overlapping that object and contributing to the coadd that had the flag set for said object), ending up with a list of zero objects used for PSF modeling should be very unlikely. The pattern is quite delineated, with sharp truncation lines beyond which no objects used in PSF modeling are found: !plot-t0-HSC-I-footNpix_calib_psfUsed-sky-stars_LSST.png|width=500! Of note, this does not happen for the same dataset processed through the HSC 4.0.5 stack (DM-9028): !plot-t0-HSC-I-footNpix_calib_psfUsed-sky-stars_HSC.png|width=500! | 3 |
1,922 | DM-9384 | 02/10/2017 10:49:59 | Fix wrapped constructor for StatisticsControl | The constructor for {{lsst::afw::math::StatisticsControl}} has some arguments with default values, but the pybind11 wrapper omits them and only provides a default constructor. This breaks some existing code. I propose to wrap the constructor as written (which will be quite pleasant to use with named arguments) rather than fix the existing Python code that relies on being able to specify arguments to the constructor. | 0.5 |
1,923 | DM-9386 | 02/10/2017 11:42:44 | IpacTableFromSource returns one row less if there is no terminating new line in the table | The IpacTableFromSource processor does not return the last row of the attached table. (This table does not have a newline after the last row.) To test, put the attached a.tbl into /hydra/workarea/firefly/temp_files and the attached test.html into /hydra/server/tomcat/webapps/firefly/demo. Use http://localhost:8080/firefly/demo/test.html to see the table. - Notice the number of rows is 157; the last row seems to be lost. - Sort on any column. Notice the number of rows is 158. We noticed this problem in the LC phase folded table (the one that is being uploaded to the server). The attached file is a copy of the uploaded phase-folded table. | 2 |
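A common cause of the off-by-one DM-9386 reports is a read loop that only emits a row when it sees a newline, silently dropping a final unterminated line. A sketch of a reader that flushes that last line (an illustration of the failure mode, not the actual IpacTableFromSource code):

```python
import io

def read_rows(stream):
    """Yield one row per line, including a final line with no trailing newline."""
    buf = ""
    while True:
        chunk = stream.read(4096)
        if not chunk:
            break
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            yield line
    if buf:  # flush the last, unterminated line
        yield buf

# The final row survives even though "c|3" has no trailing newline.
rows = list(read_rows(io.StringIO("a|1\nb|2\nc|3")))  # → ["a|1", "b|2", "c|3"]
```

Without the final `if buf:` flush, this reader would reproduce the 157-vs-158 row discrepancy described in the ticket.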
1,924 | DM-9387 | 02/10/2017 12:51:25 | lsst_build git fetch/clone retrying | We occasionally observe a "wave" of github fetch/clone failures from {{lsstsw / lsst_build}} in the jenkins env, where a wave is several random failures over the course of a day or two and then no failures for weeks. I am convinced that these are on the github end, as I have experienced clone failures when running {{lsstsw}} outside the jenkins env. I am loath to retry the entire jenkins build upon any failure, as this might result in a legitimate build failure unnecessarily tying up build slaves. Two solutions occur to me: 1) propagate errors up from {{lsst_build}} in such a way that the CI driver can determine the reason for failure and retry a set of failure modes 2) add git fetch/clone retrying support into {{lsst_build}} I am leaning towards #2 as the implementation is straightforward and contained within a single component. | 3 |
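Option #2 in DM-9387 — retrying inside {{lsst_build}} — amounts to wrapping the git invocation in a bounded retry loop with backoff. A sketch under assumed names (`run_with_retries` is not an existing lsst_build function); the runner is injectable so the policy can be exercised without touching git:

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=1.0, backoff=2.0, runner=subprocess.run):
    """Run `cmd`, retrying transient failures with exponential backoff.

    `runner` must return an object with a `returncode` attribute; by default
    it is `subprocess.run`, so e.g. cmd=["git", "fetch", "origin"] works.
    """
    for attempt in range(1, attempts + 1):
        result = runner(cmd)
        if result.returncode == 0:
            return result
        if attempt < attempts:
            time.sleep(delay)   # back off before the next try
            delay *= backoff
    raise RuntimeError("command failed after %d attempts: %r" % (attempts, cmd))
```

Retrying only fetch/clone keeps a legitimate build failure from being retried, which is exactly the concern the ticket raises about retrying the whole jenkins job.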
1,925 | DM-9394 | 02/10/2017 17:17:31 | Add meas_extensions_convolved to lsst_distrib | This package is in active use and should benefit from regular CI. Adding it to lsst_distrib will require an RFC. | 1 |
1,926 | DM-9400 | 02/13/2017 07:02:07 | Construct obs package for test stand 3 | Create an obs package that provides basic access to data from test stand 3 (ie, individual CCDs). This should include loading the data from disk and providing all the standard LSST functionality, but does not need to integrate with Camera Team systems (e.g. you don't need to ingest metadata directly from eTraveller, or similar). | 8 |
1,927 | DM-9416 | 02/13/2017 14:40:50 | Cleanup dead read/write StarList c++ code | While converting to use lsst::log in DM-8547, I started cleaning up all of the dead read/write code for StarLists. Since we're using LSST catalogs and refcats now, we shouldn't need it for anything, and we'll eventually be able to interrogate those lists from python (once we deal with DM-4043), if such functionality is desired. I'm filing this as a separate ticket so it doesn't clutter DM-8547 with a bunch of deleted lines. | 1 |
1,928 | DM-9423 | 02/14/2017 09:53:47 | Port HSC patch to allow multiple filters in mosaic | [HSC-1398|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1398] allows multiple filters to be used in the mosaic solution. This is helpful because there are multiple r-band and i-band filters, and it's helpful to combine them. I'll also update the {{MosaicTask}} to allow use of the new LSST format reference catalogs. | 0.5 |
1,929 | DM-9428 | 02/14/2017 11:26:46 | Update exceptions tutorial-level documentation for pybind11 | pex_exceptions contains a tutorial in Doxygen that includes Swig-specific instructions for making sure C++ exceptions are propagated correctly to Python. This should be updated. | 1 |
1,930 | DM-9429 | 02/14/2017 12:36:03 | Update lsst_dm_stack_demo for pybind11 | Update lsst_dm_stack_demo for pybind11 and fix anything needed in the dependent packages. | 0.5 |
1,931 | DM-9433 | 02/14/2017 13:42:54 | ds9.py error code not working as intended | The following code in afw's display/ds9.py appears to not function as intended: {code} except Exception as e: # No usable version of display_ds9. # Let's define a version of getDisplay() which will throw an exception. e.args = ["%s (is display_ds9 setup?)" % e] def getDisplay(*args, **kwargs): raise e class DisplayImpl(object): def __init__(self, *args, **kwargs): raise e {code} To see this, run examples/estimateBackground.py with display_ds9 not set up. What I see is: {code} Traceback (most recent call last): File "examples/estimateBackground.py", line 109, in <module> main() File "examples/estimateBackground.py", line 89, in main ds9.mtv(image, frame=0) File "/Users/rowen/UW/LSST/lsstsw3/build/afw/python/lsst/afw/display/ds9.py", line 84, in mtv return getDisplay(frame, create=True).mtv(data, title, wcs, *args, **kwargs) File "/Users/rowen/UW/LSST/lsstsw3/build/afw/python/lsst/afw/display/ds9.py", line 46, in getDisplay raise e NameError: name 'e' is not defined {code} I confess to some surprise. I am not an expert on Python's binding rules, but I expected the code to work. | 0.5 |
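The NameError in DM-9433 is standard Python 3 behavior: the name bound by `except ... as e` is deleted when the except block exits (PEP 3110), so any closure that refers to `e` afterwards raises NameError. Rebinding the exception to another name before defining the closure fixes it; a minimal sketch of that pattern:

```python
def make_raiser():
    """Return a stand-in getDisplay() that re-raises the original import error."""
    try:
        raise RuntimeError("display_ds9 is not set up")  # stand-in for the real failure
    except Exception as e:
        # In Python 3, `e` is unbound when this except block ends, so capture
        # it under a different name for the closure to use later.
        saved = e

        def getDisplay(*args, **kwargs):
            raise saved
        return getDisplay
```

Calling the returned function at any later time re-raises the saved exception instead of producing `NameError: name 'e' is not defined`.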
1,932 | DM-9434 | 02/14/2017 13:56:25 | Fix database creation error in testTimeFuncs.py | An error is occurring in testTimeFuncs.py: {noformat} Unable to execute query: CREATE DATABASE test_14871027723 - * Access denied for user 'srp'@'%' to database 'test_14871027723' {0} lsst::pex::exceptions::RuntimeError: 'Unable to execute query: CREATE DATABASE test_14871027723 - * Access denied for user 'srp'@'%' to database 'test_14871027723 {noformat} The error is occurring because users don't have database creation permissions to create {noformat}test_%{noformat} databases. This only occurs on machines in the ncsa.illinois.edu domain, because the test is skipped if not run on that domain. | 1 |
1,933 | DM-9437 | 02/14/2017 14:55:14 | automate jenkins workspace cleanup | All jenkins jobs that use {{lsstsw}} consume copious amounts of disk space and require periodic manual workspace purging. This should either be moved to a jenkins plugin or an automated script. | 3 |
1,934 | DM-9438 | 02/14/2017 16:44:48 | Switch default reference catalog for HSC to PS1 in LSST format | With the availability of PS1 and Gaia catalogs in the LSST format, we can remove our dependence on astrometry.net. This involves changing configuration files to use the PS1 catalog in LSST format and disable the use of astrometry.net. | 1 |
1,935 | DM-9439 | 02/14/2017 17:26:08 | Package version checking is non-deterministic | When a command-line task runs, it [writes the versions of packages currently set up to the repository|https://github.com/lsst/pipe_base/blob/54f122d8ff9696ff2b1de8f72d33eff08773e0a3/python/lsst/pipe/base/cmdLineTask.py#L285]. We're then [unable to run the same task again without matching versions|https://github.com/lsst/pipe_base/blob/54f122d8ff9696ff2b1de8f72d33eff08773e0a3/python/lsst/pipe/base/cmdLineTask.py#L614]. This helps with reproducibility. However, when I repeatedly try to run the same task with exactly the same packages set up, this version checking fails intermittently. The task exits, complaining: {code} Traceback (most recent call last): File "/Users/jds/Projects/Astronomy/LSST/src/pipe_drivers/bin/constructBias.py", line 4, in <module> BiasTask.parseAndSubmit() File "/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/ctrl_pool/12.1-7-gb57f33e/python/lsst/ctrl/pool/parallel.py", line 422, in parseAndSubmit if not cls.RunnerClass(cls, batchArgs.parent).precall(batchArgs.parent): # Write config, schema File "/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/pipe_base/12.1-5-g06c326c+6/python/lsst/pipe/base/cmdLineTask.py", line 303, in precall self._precallImpl(task, parsedCmd) File "/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/pipe_base/12.1-5-g06c326c+6/python/lsst/pipe/base/cmdLineTask.py", line 285, in _precallImpl task.writePackageVersions(parsedCmd.butler, clobber=parsedCmd.clobberVersions) File "/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/pipe_base/12.1-5-g06c326c+6/python/lsst/pipe/base/cmdLineTask.py", line 618, in writePackageVersions "); consider using --clobber-versions or --no-versions") lsst.pipe.base.task.TaskError: Version mismatch (meas_algorithms: 12.1-17-g13cfda1+6 with boost=1.60.lsst1+1 eigen=3.2.5.lsst2 vs 12.1-17-g13cfda1+6 with eigen=3.2.5.lsst2 boost=1.60.lsst1+1; coadd_utils: 12.1-1-g5961e7a+70 with boost=1.60.lsst1+1 eigen=3.2.5.lsst2 vs 12.1-1-g5961e7a+70 with eigen=3.2.5.lsst2 boost=1.60.lsst1+1; afw: 12.1-31-gb5bd9ab+1 with boost=1.60.lsst1+1 eigen=3.2.5.lsst2 vs 12.1-31-gb5bd9ab+1 with eigen=3.2.5.lsst2 boost=1.60.lsst1+1); consider using --clobber-versions or --no-versions {code} Note that the versions *are* the same, but are being reported in a different order ({{12.1-17-g13cfda1+6 with boost=1.60.lsst1+1 eigen=3.2.5.lsst2}} as compared to {{12.1-17-g13cfda1+6 with eigen=3.2.5.lsst2 boost=1.60.lsst1+1}}). Please fix this so that version checking works reliably. | 0.5 |
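The mismatch DM-9439 shows comes from the dependency list being serialized in an unstable order; comparing the two version strings order-insensitively (or sorting the dependencies before writing) would make the check deterministic. A hypothetical helper illustrating the idea, keyed to the `VERSION with dep=... dep=...` format visible in the error message:

```python
def versions_match(a, b):
    """Compare 'VERSION with dep=v dep=v ...' strings, ignoring dependency order."""
    def parse(s):
        version, _, deps = s.partition(" with ")
        return version, frozenset(deps.split())  # set comparison drops ordering
    return parse(a) == parse(b)
```

With this comparison, the two strings the traceback reports as a "Version mismatch" compare equal.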
1,936 | DM-9456 | 02/15/2017 12:00:03 | Rename FileLoader in test/util to UnitTestDataIO | While working on ImageHeaderTest, I found there are some methods that could be utility methods. After adding those methods there, the name FileLoader is no longer meaningful. I renamed it UnitTestDataIO, which contains all kinds of I/O, including file, json file, etc. | 0 |
1,937 | DM-9457 | 02/15/2017 13:23:20 | test failure due to esutil/numpy problem | I'm seeing a test failure in meas_algorithms due to what looks like a failure of {{esutil}} compiled code to import the NumPy C API: {code} ====================================================================== ERROR: setUpClass (__main__.HtmIndexTestCase) ---------------------------------------------------------------------- ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: Traceback (most recent call last): File "tests/testHtmIndex.py", line 125, in setUpClass cls.indexer = IndexerRegistry['HTM'](config) File "/home/jbosch/LSST/sw/build/meas_algorithms/python/lsst/meas/algorithms/indexerRegistry.py", line 44, in makeHtmIndexer return HtmIndexer(depth=config.depth) File "/home/jbosch/LSST/sw/build/meas_algorithms/python/lsst/meas/algorithms/htmIndexer.py", line 34, in __init__ self.htm = esutil.htm.HTM(depth) File "/home/jbosch/LSST/sw/stack/Linux64/esutil/0.6.0+1/lib/python/esutil/htm/htmc.py", line 152, in __init__ this = _htmc.new_HTMC(depth) SystemError: <built-in function new_HTMC> returned a result with an error set {code} This is Python 3.6 and Numpy 1.12, so it's likely I'm seeing the problem because I'm pushing to newer versions than anyone else. | 2 |
1,938 | DM-9462 | 02/15/2017 16:46:29 | Allow disabling adding underscores to pybind11 library names | Our final pybind11 coding conventions dictate that the wrapper library name should match the source code name. However, all of our existing pybind11 wrappers assume that the library name will always start with an underscore (whether the source file name does or not). I will add a new flag {{addUnderscore=True}} to the {{pybind11}} function. New code should specify this flag as False. Old code will continue to work unchanged. | 0.5 |
1,939 | DM-9464 | 02/15/2017 17:06:47 | Link SublimeText clang-format setup instructions in docs | The current [clang-format docs|https://developer.lsst.io/tools/clang_format.html] link to setup instructions for vim and emacs, but not SublimeText. We should have a similar link on that page to how to integrate clang-format (including the best choice of Sublime package for it) with SublimeText. | 1 |
1,940 | DM-9465 | 02/15/2017 17:13:37 | bug in utility class method objetArrayToJsonSring for Projection unit test | The bug is in FitsHeaderToJson, which is a utility class for Projection's unit test. It is under java/test. It has nothing to do with java/src. Bug description: objetArrayToJsonSring converts a 2-dimensional or 1-dimensional array to a JSON string. When it does the conversion, it shares a listObject, and this object is cleared after each use. However, since this list is added to another 2-dimensional list, the values added to the 2-dimensional list were accidentally reset to 0.0. This bug was introduced when creating the unit test for Projection, but it does not affect Projection's unit test since the array values are not relevant to the Projection and were not used. Analysis: FitsHeaderToJson was created to produce the testing files for Projection. Since there is now a util/ package under test, this file should move to the util/ directory. | 1 |
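The bug DM-9465 describes is classic container aliasing: appending a shared mutable list stores a reference, not a copy, so clearing the list for reuse also wipes every row already "added". The ticket's code is Java, but the same behavior is easy to show in Python:

```python
rows_buggy, rows_fixed = [], []
scratch = []  # the shared "listObject" that gets reused

for value in (1.0, 2.0):
    scratch.append(value)
    rows_buggy.append(scratch)        # stores a reference to the shared list
    rows_fixed.append(list(scratch))  # stores an independent copy
    scratch.clear()                   # wipes every aliased entry in rows_buggy

# rows_buggy == [[], []]        -- the values were "accidentally reset"
# rows_fixed == [[1.0], [2.0]]  -- copies survive the clear()
```

The fix is exactly the `rows_fixed` line: copy the inner list (or allocate a fresh one each iteration) before appending it to the outer list.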
1,941 | DM-9472 | 02/16/2017 09:25:45 | Update pipelines.lsst.io installation docs for 12.1 release | bq. [somebody] installing the stack following the Conda instructions at pipelines.lsst.io is actually getting v12.1, rather than 12.0? bq. If so, we have a docs problem since https://pipelines.lsst.io/install/demo.html is pointing them at the lsst_dm_stack_demo for 12.0. And so it fails. | 0.5 |
1,942 | DM-9474 | 02/16/2017 10:02:17 | validate_drp example/runDecamTest.sh broken on decam dataset | The {{validate_drp decam}} dataset was working in a test env several days ago. This dataset was added to the production {{validate_drp}} yesterday and is failing. It is also failing in the {{validate_drp hsc}} test env. {code:java} [py2] $ /bin/bash -e /tmp/hudson5838991375736374023.sh notice: lsstsw tools have been set up. Ingesting Raw data root INFO: Loading config overrride file '/home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/obs_decam/12.1-18-g15f9154+5/config/ingest.py' CameraMapper INFO: Unable to locate registry registry in root: /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/registry.sqlite3 CameraMapper INFO: Unable to locate registry registry in current dir: ./registry.sqlite3 CameraMapper INFO: Loading Posix registry from /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input CameraMapper INFO: Unable to locate calibRegistry registry in root: /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/calibRegistry.sqlite3 CameraMapper INFO: Unable to locate calibRegistry registry in current dir: ./calibRegistry.sqlite3 CameraMapper INFO: Loading Posix registry from /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input ingest.parse WARN: Unable to find value for ccdnum (derived from CCDNUM) ingest.parse WARN: Unable to find value for ccd (derived from CCDNUM) ingest.parse WARN: Unable to find value for ccdnum (derived from CCDNUM) ingest.parse WARN: Unable to find value for ccd (derived from CCDNUM) /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/instcal/instcal0176837.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176837/instcal0176837.fits.fz /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/dqmask/dqmask0176837.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176837/dqmask0176837.fits.fz /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/wtmap/wtmap0176837.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176837/wtmap0176837.fits.fz /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/instcal/instcal0176846.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176846/instcal0176846.fits.fz /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/dqmask/dqmask0176846.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176846/dqmask0176846.fits.fz /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/lsstsw/stack/Linux64/validation_data_decam/master-g52ac2b0d78/wtmap/wtmap0176846.fits.fz --<link>--> /home/jenkins-slave/workspace/validate_drp/dataset/decam/label/centos-7/python/py2/validate_drp/Decam/input/0176846/wtmap0176846.fits.fz running processCcd Build step 'Execute shell' marked build as failure {code} | 0.5 |
1,943 | DM-9476 | 02/16/2017 10:43:24 | ISR fails in overscan for HSC visit=90738 ccd=33 | {code} pprice@perseus:/tigress/pprice/greco $ processCcd.py /tigress/HSC/HSC --rerun price/test --id visit=90738 ccd=33 --clobber-config --no-versions root INFO: Loading config overrride file '/tigress/pprice/greco/obs_subaru/config/processCcd.py' /tigress/HSC/LSST/stack_perseus_20170207/Linux64/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment. warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.') Cannot import lsst.meas.extensions.convolved (No module named convolved): disabling convolved flux measurements root INFO: Loading config overrride file '/tigress/pprice/greco/obs_subaru/config/hsc/processCcd.py' root INFO: Running: /tigress/HSC/LSST/stack_perseus_20170207/Linux64/pipe_tasks/12.1-22-g7df19e7+1/bin/processCcd.py /tigress/HSC/HSC --rerun price/test --id visit=90738 ccd=33 --clobber-config --no-versions processCcd INFO: Processing {'taiObs': '2016-11-25', 'pointing': 1790, 'visit': 90738, 'dateObs': '2016-11-25', 'filter': 'HSC-G', 'field': 'SSP_WIDE', 'ccd': 33, 'expTime': 150.0} processCcd.isr INFO: Performing ISR on sensor {'taiObs': '2016-11-25', 'pointing': 1790, 'visit': 90738, 'dateObs': '2016-11-25', 'filter': 'HSC-G', 'field': 'SSP_WIDE', 'ccd': 33, 'expTime': 150.0} WARNING: Couldn't write lextab module u'angle_lextab'. [Errno 13] Permission denied: u'/tigress/HSC/LSST/stack_perseus_20170207/Linux64/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/coordinates/angle_lextab.py' WARNING: Couldn't create u'angle_parsetab'. [Errno 13] Permission denied: u'/tigress/HSC/LSST/stack_perseus_20170207/Linux64/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/coordinates/angle_parsetab.py' /tigress/HSC/LSST/stack_perseus_20170207/Linux64/ip_isr/12.1-6-gba867dc/python/lsst/ip/isr/isr.py:389: RuntimeWarning: invalid value encountered in divide weights=collapsed.data*~collapsedMask)[0]/numPerBin /tigress/HSC/LSST/stack_perseus_20170207/Linux64/ip_isr/12.1-6-gba867dc/python/lsst/ip/isr/isr.py:391: RuntimeWarning: invalid value encountered in divide weights=indices*~collapsedMask)[0]/numPerBin processCcd FATAL: Failed on dataId={'taiObs': '2016-11-25', 'pointing': 1790, 'visit': 90738, 'dateObs': '2016-11-25', 'filter': 'HSC-G', 'field': 'SSP_WIDE', 'ccd': 33, 'expTime': 150.0}: File "src/math/Interpolate.cc", line 204, in lsst::afw::math::InterpolateGsl::InterpolateGsl(const std::vector<double>&, const std::vector<double>&, lsst::afw::math::Interpolate::Style) Failed to initialise spline for type akima, length 0 {0} lsst::pex::exceptions::OutOfRangeError: 'Failed to initialise spline for type akima, length 0' Traceback (most recent call last): File "/tigress/HSC/LSST/stack_perseus_20170207/Linux64/pipe_base/12.1-5-g06c326c+10/python/lsst/pipe/base/cmdLineTask.py", line 347, in __call__ result = task.run(dataRef, **kwargs) File "/tigress/HSC/LSST/stack_perseus_20170207/Linux64/pipe_base/12.1-5-g06c326c+10/python/lsst/pipe/base/timer.py", line 121, in wrapper res = func(self, *args, **keyArgs) File "/tigress/HSC/LSST/stack_perseus_20170207/Linux64/pipe_tasks/12.1-22-g7df19e7+1/python/lsst/pipe/tasks/processCcd.py", line 181, in run exposure = self.isr.runDataRef(sensorRef).exposure File "/tigress/pprice/greco/obs_subaru/python/lsst/obs/subaru/isr.py", line 266, in runDataRef statControl=statControl, File "/tigress/HSC/LSST/stack_perseus_20170207/Linux64/ip_isr/12.1-6-gba867dc/python/lsst/ip/isr/isr.py", line 394, in overscanCorrection afwMath.stringToInterpStyle(fitType)) File "/tigress/HSC/LSST/stack_perseus_20170207/Linux64/afw/12.1-32-gb99f2ce+2/python/lsst/afw/math/mathLib.py", line 6325, in makeInterpolate return _mathLib.makeInterpolate(*args) OutOfRangeError: File "src/math/Interpolate.cc", line 204, in lsst::afw::math::InterpolateGsl::InterpolateGsl(const std::vector<double>&, const std::vector<double>&, lsst::afw::math::Interpolate::Style) Failed to initialise spline for type akima, length 0 {0} lsst::pex::exceptions::OutOfRangeError: 'Failed to initialise spline for type akima, length 0' {code} | 1 |
1,944 |
DM-9490
|
02/16/2017 15:27:57
|
Remote api issues
|
Do the following: * make point selection work if extension is added before image * point selection is not always showing point * mask is not working when zoom is quickly changed before new mask is added * make table active row table extension * remotely turning on target match
| 8 |
1,945 |
DM-9495
|
02/16/2017 17:12:03
|
Fix all jointcal header multiple-inclusion #defines
|
As pointed out in another review by [~krzys], jointcal isn't following the {{#define LSST_BLAH}} standard for multiple inclusion prevention in its header files. Should fix this with a quick pass over the current headers. Might also be a good time to look over the existing headers and see if any can disappear or be merged elsewhere.
| 1 |
1,946 |
DM-9500
|
02/17/2017 12:42:00
|
FitsDownloadDialog.js has bugs
|
The FitsDownloadDialog only works for NO_BAND. When a 3-color image is created, it has the following issues: # it only shows a "red" color band # saving the file does not work because a null FITS file name is in the URL # it throws an exception; for example, saving a 2MASS 3-color image gives: "GET http://localhost:8080/firefly//servlet/Download?file=null&return=twomass-j.fits&log=true 404 (Not Found) download @ WebUtil.js:274 resultsSuccess @ FitsDownloadDialog.jsx:356 onSuccess @ FitsDownloadDialog.jsx:282 validUpdate @ CompleteButton.jsx:25 (anonymous) @ CompleteButton.jsx:34" Expected behavior: 1. when it is a color image, all 3 bands should be available for FITS download 2. for PNG and region files, there is no need for a band choice 3. the title of the dialog should be "File Download", not "Fits download"
| 3 |
1,947 |
DM-9502
|
02/17/2017 16:45:17
|
SpherePoint throws wrong exception for invalid arguments
|
Several methods in {{SpherePoint}} throw {{pex::exceptions::OutOfRangeError}} when fed invalid arguments. Assuming the pex exceptions are intended to mimic the standard C++ exceptions of the same name (see DM-9435), this is inappropriate -- {{OutOfRangeError}} should refer to invalid indices, not other cases where an argument falls outside some interval. These methods should be changed to throw either {{pex::exceptions::DomainError}} or {{pex::exceptions::InvalidParameterError}}, which apply to generic numerical arguments.
| 1 |
1,948 |
DM-9503
|
02/17/2017 16:54:16
|
afw catalog asAstropy fails due to multiple columns of same name
|
When using {{catalog.asAstropy()}}, we experience an error when converting certain tables: {code} >>> catalog.asAstropy() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/slac/lsst/stack/DarwinX86/afw/12.1+1/python/lsst/afw/table/_syntax.py", line 272, in BaseCatalog_asAstropy return cls(columns, meta=meta, copy=False) File "/slac/lsst/stack/DarwinX86/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/table/table.py", line 360, in __init__ init_func(data, names, dtype, n_cols, copy) File "/slac/lsst/stack/DarwinX86/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/table/table.py", line 624, in _init_from_list self._init_from_cols(cols) File "/slac/lsst/stack/DarwinX86/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/table/table.py", line 697, in _init_from_cols self._make_table_from_cols(self, newcols) File "/slac/lsst/stack/DarwinX86/miniconda2/3.19.0.lsst4/lib/python2.7/site-packages/astropy/table/table.py", line 729, in _make_table_from_cols raise ValueError('Duplicate column names') ValueError: Duplicate column names {code}
| 1 |
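One illustrative workaround for the DM-9503 failure above (a sketch only; the helper name and renaming scheme are my own, not the actual afw fix) is to de-duplicate the column names before handing them to astropy's Table constructor:

```python
def dedupe_names(names):
    """Append _2, _3, ... to repeated column names so astropy's
    Table constructor no longer sees duplicates."""
    seen = {}
    out = []
    for name in names:
        if name in seen:
            seen[name] += 1
            out.append('%s_%d' % (name, seen[name]))
        else:
            seen[name] = 1
            out.append(name)
    return out

print(dedupe_names(['id', 'flux', 'flux', 'ra']))
# -> ['id', 'flux', 'flux_2', 'ra']
```

A real fix would instead avoid emitting duplicate names (e.g. from schema aliases) in the first place, but renaming makes the conversion at least succeed.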
1,949 |
DM-9504
|
02/17/2017 17:33:27
|
lsst_py3 CI failure due to meas_extensions_ngmix
|
[~npease] points out that the lsst_py3 Jenkins build is currently failing and has been since [build #454|https://ci.lsst.codes/job/stack-os-matrix/label=centos-7,python=py3/21317/console] (regardless of the misleading red circles on Jenkins). Reported error is: {code} ====================================================================== FAIL: testLeaks (__main__.TestMemory) !Check for memory leaks in the preceding tests ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/jenkins-slave/workspace/stack-os-matrix/label/centos-7/python/py3/lsstsw/stack/Linux64/utils/12.1-5-g648ee80+2/python/lsst/utils/tests.py", line 161, in testLeaks self.fail("Leaked %d block%s" % (nleak, plural)) AssertionError: Leaked 13 blocks ---------------------------------------------------------------------- Ran 10 tests in 0.688s FAILED (failures=1, skipped=2) Number of calls to function has reached maxfev = 10. Number of calls to function has reached maxfev = 10. Number of calls to function has reached maxfev = 10. Number of calls to function has reached maxfev = 10. {code}
| 2 |
1,950 |
DM-9505
|
02/17/2017 18:38:04
|
Please serve the ndarray tutorial somewhere
|
At one time the ndarray tutorial was part of the Doxygen documentation, but that is no longer the case. It really should be served somewhere -- preferably linked from the docs that are already on github.
| 1 |
1,951 |
DM-9517
|
02/20/2017 08:01:50
|
Migrate meas_algorithms to modern afwDisplay
|
meas_algorithms contains several (e.g. [#1|https://github.com/lsst/meas_algorithms/blob/0bf8251ccb5c79be6e181817359729fdb2425311/python/lsst/meas/algorithms/objectSizeStarSelector.py#L43], [#2|https://github.com/lsst/meas_algorithms/blob/d421edbfcf2fc993cbad9211c1498767479d069a/python/lsst/meas/algorithms/pcaPsfDeterminer.py#L35], there are more) explicit uses of {{lsst.afw.display.ds9}}. These should be replaced by calls to the generic (backend-independent) {{lsst.afw.display}} system.
| 1 |
1,952 |
DM-9518
|
02/20/2017 08:22:53
|
Move activemq packages to legacy status
|
With the move of ctrl_events to legacy status, the activemqcpp and ctrl_activemq packages are no longer in use. Move lsst/activemqcpp to lsst-dm/legacy-activemqcpp (a move plus a change in etc/repos.yaml) and move lsst/ctrl_activemq to lsst-dm/legacy-ctrl_activemq (just a move).
| 0.5 |
1,953 |
DM-9524
|
02/20/2017 11:54:19
|
mangle OSX EUPS tarball shebang paths
|
A method of mangling the shebangs on OSX to a valid path is needed.
| 2 |
1,954 |
DM-9528
|
02/20/2017 14:10:03
|
Cleanup pybind11 code in meas_deblender
|
Use the checklist from DM-9182 to clean up code in meas_deblender.
| 1 |
1,955 |
DM-9531
|
02/20/2017 16:53:15
|
Fix override warnings in afw
|
Compiling afw on modern clang results in this warning: {code} include/lsst/afw/geom/SpanSet.h:662:10: warning: 'isPersistable' overrides a member function but is not marked 'override' [-Winconsistent-missing-override] bool isPersistable() const { return true; } ^ {code} Please fix by marking the method {{override}}
| 0.5 |
1,956 |
DM-9534
|
02/21/2017 11:17:23
|
Output jointcal metrics via a metrics logger
|
Until we have a butler metrics persistence system, we can use a dedicated logger to output each product's metrics. Since jointcal is the testbed for the new metrics system, we'll use it as an example of how to produce those logs.
| 5 |
1,957 |
DM-9535
|
02/21/2017 11:25:47
|
Assess whether differences in Brighter-Fatter implementations are contributing to the trace radii differences: LSST vs. HSC
|
As noted in DM-6817 and highlighted/further explored in DM-9411, there is a trend of increasing difference in trace radii with increasing magnitude when directly comparing outputs from the HSC vs. LSST stacks. A likely culprit is slight differences in the implementations of the Brighter-Fatter corrections between the stacks. This ticket is to assess whether this is contributing to the trace radii differences.
| 5 |
1,958 |
DM-9539
|
02/21/2017 12:03:36
|
jointcal validation framework integration (continued from S17)
|
Jointcal will be the testbed for the new metrics validation system. This epic captures the work on the jointcal side, including integrating jointcal's output into the system, and writing user documentation to allow others to plug their products into validation.
| 20 |
1,959 |
DM-9543
|
02/21/2017 17:10:56
|
calling showXYPlot API without xcol/ycol name fails
|
The API call 'showXYPlot' fails when xCol/yCol are not specified. Before the migration, the function could be used without specifying column names: the XY plot plotted the first two numerical columns if none were passed in. That is no longer the case; the functionality seems to be missing. If it was removed intentionally, the documentation should be updated to explain how to get the old behavior. Please check and fix.
| 1 |
1,960 |
DM-9549
|
02/22/2017 12:13:22
|
ctrl_stats fails on mac os
|
The Linux build for ctrl_stats succeeds, but the Mac OS X build for ctrl_stats fails in tests/testYearWrap.py. Additionally, a change in Python 3.6: {code} Unknown escapes consisting of '\' and an ASCII letter in regular expressions will now cause an error. {code} now makes some code in terminated.py fail (and possibly in other places).
| 3 |
1,961 |
DM-9552
|
02/22/2017 14:45:53
|
Reported mouse position on mouse click is a couple of pixels off
|
1. The symbol is drawn a few pixels off the center of the mouse click when the "lock by click" checkbox is selected. 2. Other drawing functions, such as the distance tool and the area-selection tool, are also affected.
| 2 |
1,962 |
DM-9553
|
02/22/2017 14:53:14
|
Investigate the best algorithm to compute derivatives for the Brighter-Fatter correction
|
As noted in DM-9535, the method used to compute second derivatives in the Brighter-Fatter correction implementation can lead to differences of up to 1% in the trace radii of sources (particularly significant for weak-lensing considerations), with the differences growing toward brighter magnitudes (see figures in DM-9535). This ticket is to investigate the optimal algorithm to use for the Brighter-Fatter correction.
| 1 |
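For reference, the standard central second difference (one common discretization; the tickets do not say which scheme either stack uses) can be sketched as:

```python
def second_derivative(values, spacing=1.0):
    """Central second difference d2f/dx2 at interior points:
    (f[i+1] - 2*f[i] + f[i-1]) / h**2."""
    h2 = spacing * spacing
    return [(values[i + 1] - 2.0 * values[i] + values[i - 1]) / h2
            for i in range(1, len(values) - 1)]

# For a quadratic f(x) = x**2 the scheme is exact: d2f/dx2 == 2.
xs = [i * 0.1 for i in range(11)]
d2 = second_derivative([x * x for x in xs], spacing=0.1)
print(d2)   # ~2.0 at every interior point
```

Different stencils (and different handling of the image edges) agree only to the order of the truncation error, which is one way percent-level differences between two implementations can arise.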
1,963 |
DM-9556
|
02/22/2017 15:53:20
|
All NaNs in coord_ra and coord_dec columns in deepCoadd forced src tables
|
In recent runs of the stack through *multiBandDriver.py*, the persisted forced src tables for the coadds are not getting ra and dec set properly (all entries for the {{coord_ra}} and {{coord_dec}} columns are NaN). Looking back at a run in mid-Nov, 2016, these numbers were indeed set properly in the forced tables. Assuming this was not intentional, track down the cause and fix it such that these values get set properly for the persisted forced src tables.
| 1 |
1,964 |
DM-9561
|
02/23/2017 09:23:53
|
Improve Monotonicity Operator
|
Preliminary testing of the NMF deblender shows that the radial monotonicity operator as designed does not work as expected. [~pmelchior] and I have verified that the monotonicity operator itself is built properly based on our earlier design, which uses a single reference pixel (the one that lies closest to a radial line from the peak to the current pixel) for each pixel. Based on a pixel's position relative to the peak, it lies in one of 8 octants that determines which pixel it will use as a reference, but it appears that the transition between reference-pixel positions is causing the monotonicity operator to generate weird streaks in the deblended objects that are unphysical. We are redesigning the monotonicity operator to use a weighted combination of all three neighboring pixels that are closer to the peak as a reference, which we hope will fix this issue.
| 8 |
1,965 |
DM-9564
|
02/23/2017 10:33:16
|
Set assembled Coadd Psf to modelPsf with auto-computed dimensions
|
DM-8088 changed makeCoaddTempExp so that the user no longer has to specify the pixel dimensions of the model PSF to match to. The model PSF dimensions are updated at runtime to match those of the warped calexp PSFs (which have dimensions impossible for a user to know ahead of time). AssembleCoadd currently attaches the PSF corresponding to the *user-specified model PSF* dimensions rather than the updated PSF dimensions. This is bad because the user-specified dimensions could be way off, and it's incongruous to tell users that they don't have to pay attention to this in makeCoaddTempExp, but they do in assembleCoadd. This ticket will ensure that the modelPsf dimension information flows down from the coaddTempExps to the assembled deepCoadd in a sensible way, most likely using the maximum dimensions of the input coaddTempExps.
| 2 |
1,966 |
DM-9578
|
02/24/2017 14:35:20
|
warning message does not disappear even with valid data if the mouse is covering the warning icon
|
If the user enters invalid data and keeps the mouse over the warning icon while entering valid data into the field, the warning message does not disappear even once the data is valid. It stays up for the lifetime of the Firefly web app.
| 1 |
1,967 |
DM-9579
|
02/24/2017 15:17:03
|
nondeterministic random number seeds in MeasurePsf candidate reservation
|
{{MeasurePsfTask}} randomly reserves a fraction of its candidates for validation, in a way that is supposed to be deterministic. This seems to be broken; I've identified at least two problems: - {{CharacterizeImageTask}} does not pass the {{expId}} argument to {{MeasurePsfTask.run}}, letting it default to zero. - {{MeasurePsfTask}} uses Python's built-in {{random}} module instead of {{afw.math.Random}}. Contrary to its own documentation, calling {{random.seed(0)}} does not always produce deterministic results (though I've only been able to trigger this the first time I tried it): {code} In [3]: random.seed(0) In [4]: random.random() Out[4]: 0.7579544029403025 In [5]: random.seed(0) In [6]: random.random() Out[6]: 0.8444218515250481 In [7]: random.seed(0) In [8]: random.random() Out[8]: 0.8444218515250481 In [9]: random.seed(0) In [10]: random.random() Out[10]: 0.8444218515250481 {code} I have no idea what could be going on in the built-in {{random}} module (and it's hard to report upstream since I was only able to trigger it once), but we should switch to our own random number generator regardless.
| 1 |
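The fix suggested in DM-9579 above, a dedicated and explicitly seeded generator instead of the module-level random functions, can be sketched as follows (an illustration with Python's random.Random standing in for afw.math.Random; the function name and signature are invented for the example):

```python
import random

def reserve_candidates(n_candidates, fraction, exp_id):
    # A private, explicitly seeded generator: never touch the shared
    # module-level state, so the result depends only on exp_id.
    rng = random.Random(exp_id)
    n_reserve = int(round(n_candidates * fraction))
    indices = list(range(n_candidates))
    rng.shuffle(indices)
    return sorted(indices[:n_reserve])

a = reserve_candidates(100, 0.2, exp_id=42)
b = reserve_candidates(100, 0.2, exp_id=42)
assert a == b            # same expId -> identical reservation
assert len(a) == 20
```

Passing the exposure ID all the way down (the missing expId plumbing in CharacterizeImageTask) is what makes the reservation reproducible per exposure rather than per process.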
1,968 |
DM-9581
|
02/24/2017 18:24:26
|
MSX image rotation not computed correctly.
|
Update From Trey - 2/28: This problem has nothing to do with WCS match. It is a problem with rotation of MSX images. They are galactic based which confuses our computation. To reproduce: Read in MSX image with target m16 and the rotate 180. The problem is obvious. In this case the relative rotation is computed wrong. The problem begins at FitsRead.createFitsReadRotated. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ORIGINAL TITLE: WCS match behavior is odd ORIGINAL DECRIPTION: Luisa reported 10 days ago on TT (#9353) on the OPS version of Firefly/IRSAViewer: WCS align works in some odd ways in IRSA Viewer when MSX images are included; it doesn't actually align the MSX image to other images properly. If you align to MSX images, though, it works fine. I wonder if whatever is going on is somehow tied to the galactic coordinates that the MSX tile should have, though of course the tool should only be paying attention to RA/Dec. Step to reproduce: OS 10.11.6 Firefox 51.0.1 search on M16, WISE ch1 (default), default size. again, new image, MSX A band (default), all other defaults again, new image, DSS poss2uk red (default), all other defaults. It returns three images of the sky. physically what is going on here is that the wise image and the DSS image that are returned should already have north up. the MSX image, however, is in galactic coordinates by default but carries the RA/Dec with it, of course. I just mean the tile is aligned with galactic coordinates by default. At this point, the DSS image should be “selected” (e.g., outlined in orange). click WCS match. WISE doesn’t change, but MSX rotates about 45 degrees. (see first attachment.) here the features in the WISE and POSS images are aligned, so they physically make sense. The MSX one isn’t right. Click on the MSX image so that it is outlined in orange. click on WCS match to turn it off. 
All three images move slightly for me but don’t change orientation. click on WCS match to turn it on. Now all three images are rotated. (see second attachment.) the MSX image didn’t change, but the nebulosity is now aligned properly, so this is really the correct alignment (though note that North isn't up anymore). The tool knows North isn't up; coordinates overlaid and a compass rose overlaid are aligned properly. (From TT9353) [March-1-2017 LZ] I reproduced the bug according to Trey's instruction. I checked the codes where the rotation is calculated and found that the rotation coordinate somehow is predefined as CoordinateSys.EQ_J2000. Thus, if the image is not in CoordinateSys.EQ_J2000, the calculation is wrong. My suggestion is to use the incoming coordinate instead. {code} WorldPt worldPt1 = projection.getWorldCoords(centerX, centerY - 1); WorldPt worldPt2 = projection.getWorldCoords(centerX, centerY); double positionAngle = VisUtil.getPositionAngle(worldPt1.getX(), worldPt1.getY(), worldPt2.getX(), worldPt2.getY()); if (fromNorth) { long angleToRotate= Math.round((180+ rotationAngle) % 360); if (angleToRotate==Math.round(positionAngle)) { return fitsReader; } else { return createFitsReadPositionAngle(fitsReader, -angleToRotate, inCoordinateSys);//CoordinateSys.EQ_J2000); } } else { return createFitsReadPositionAngle(fitsReader, -positionAngle+ rotationAngle, inCoordinateSys);//CoordinateSys.EQ_J2000); {code}
| 2 |
1,969 |
DM-9588
|
02/27/2017 13:41:36
|
validate_drp broken on cfht/hsc datasets
|
{{validate_drp}} has been failing since the 22nd. This is suspiciously coincidental with the merger of https://github.com/lsst-sqre/jenkins-dm-jobs/pull/58 . The first build failure appears to be shell script related but the most recent failures for both the {{cfht}} and {{hsc}} data set look like they may be the result of a change in the stack. *HSC* https://ci.lsst.codes/job/validate_drp/835/dataset=hsc,label=centos-7,python=py2/console {code:java} Traceback (most recent call last): File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/bin/validateDrp.py", line 97, in <module> validate.run(args.repo, **kwargs) File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/validate.py", line 104, in run **kwargs) File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/validate.py", line 217, in runOneFilter job=job, linkedBlobs=linkedBlobs, verbose=verbose) File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/calcsrd/amx.py", line 159, in __init__ verbose=verbose) File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/calcsrd/amx.py", line 242, in calcRmsDistances visit[obj2], ra[obj2], dec[obj2]) File "/home/jenkins-slave/workspace/validate_drp/dataset/hsc/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/calcsrd/amx.py", line 326, in matchVisitComputeDistance j = visit_obj2_idx[j_raw] IndexError: index 3 is out of bounds for axis 0 with size 3 Build step 'Execute shell' marked build as 
failure [PostBuildScript] - Execution post build scripts. {code} *CFHT* https://ci.lsst.codes/job/validate_drp/835/dataset=cfht,label=centos-7,python=py2/console {code:java} Traceback (most recent call last): File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/bin/validateDrp.py", line 97, in <module> validate.run(args.repo, **kwargs) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/validate.py", line 104, in run **kwargs) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/validate.py", line 204, in runOneFilter verbose=verbose) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/matchreduce.py", line 147, in __init__ repo, dataIds, matchRadius) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/validate_drp/master-g3511a1277e+1/python/lsst/validate/drp/matchreduce.py", line 229, in _loadAndMatchCatalogs oldSrc = butler.get('src', vId, immediate=True, flags=SOURCE_IO_NO_FOOTPRINTS) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/daf_persistence/12.1-19-gd507bfc/python/lsst/daf/persistence/butler.py", line 845, in get location = self._locate(datasetType, dataId, write=False) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/daf_persistence/12.1-19-gd507bfc/python/lsst/daf/persistence/butler.py", line 795, in _locate location = repoData.repo.map(datasetType, dataId, write=write) File 
"/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/daf_persistence/12.1-19-gd507bfc/python/lsst/daf/persistence/repository.py", line 198, in map loc = self._mapper.map(*args, **kwargs) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/daf_persistence/12.1-19-gd507bfc/python/lsst/daf/persistence/mapper.py", line 144, in map return func(self.validate(dataId), write) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/obs_base/12.1-21-gbdb6c2a+2/python/lsst/obs/base/cameraMapper.py", line 379, in mapClosure return mapping.map(mapper, dataId, write) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/obs_base/12.1-21-gbdb6c2a+2/python/lsst/obs/base/mapping.py", line 124, in map actualId = self.need(iter(self.keyDict.keys()), dataId) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/obs_base/12.1-21-gbdb6c2a+2/python/lsst/obs/base/mapping.py", line 257, in need lookups = self.lookup(newProps, newId) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/obs_base/12.1-21-gbdb6c2a+2/python/lsst/obs/base/mapping.py", line 221, in lookup result = self.registry.lookup(properties, self.tables, lookupDataId, template=self.template) File "/home/jenkins-slave/workspace/validate_drp/dataset/cfht/label/centos-7/python/py2/lsstsw/stack/Linux64/daf_persistence/12.1-19-gd507bfc/python/lsst/daf/persistence/registries.py", line 330, in lookup c = self.conn.execute(cmd, valueList) sqlite3.OperationalError: no such column: flags {code}
| 1 |
1,970 |
DM-9590
|
02/27/2017 14:52:39
|
XY plot is unrecoverable after it fails because of column name doesn't exist
|
Sometimes Gator or the LC viewer shows an XY plot by setting xCol/yCol to a column that doesn't exist in the table. That produces an exception and the plot is not displayed; an error message with the Java exception is shown (*at the least, the message should be user-friendly and not code-oriented*). From there, there is no way to recover the plot: even if the user changes the column name to an existing one, it won't respond or display any data.
| 8 |
1,971 |
DM-9594
|
02/27/2017 17:30:56
|
The default radio button (point) in scatter plot is not selected after line style changes.
|
Steps to reproduce: - switch between different styles - reset to default - the selected radio button is not shown. Emmanuel reported that the default radio button is no longer selected after the line style changes. I tested in DM-9343 and saw it. Then I tested it in the dev and saw it there as well. It seems to be an existing problem. [March-8-2017] When setOption is called by clicking the reset button, the defaultParams, which contains only x and y, is passed to setOption. Since plotStyle is not in defaultParams, its default value is used. However, the default value never triggers the listener. To fix it, plotStyle was added to the default parameters. Thus, plotStyle stays after resetting the other fields. [March-10-2017] According to Tatiana, each plot has a set of default values. When the reset button is pressed, the option parameters should go back to the defaults. Therefore, the above implementation is against the design. Tatiana did some research and found that there was an implementation issue in RadioGroupInputView, see http://react.tips/radio-buttons-in-reactjs/. RadioGroupInputView became an uncontrolled component when "name" was used along with the value and checked fields. An uncontrolled component cannot be handled by the React component, so checked=true is not rendered. To fix this issue: # Remove the name attribute from the "input" tag # Since the fieldKey is not needed, fieldKey is removed as well. {code} function makeOptions(options,alignment ,value,onChange,tooltip) { const labelStyle= alignment==='vertical' ? vStyle : hStyle; return options.map((option) => ( <span key={option.value}> <div style={{display:'inline-block'}} title={tooltip}> <input type='radio' title={tooltip} value={option.value} checked={value===option.value} onChange={onChange} /> <span style={labelStyle}>{option.label}</span> </div> {alignment==='vertical' ? <br/> : ''} </span> )); } {code}
| 1 |
1,972 |
DM-9604
|
02/28/2017 09:16:33
|
Write up data/parallelization axis transformation examples for SuperTask WG
|
Produce slides or a short, informal document showing examples from DRP where we need to change data units and/or parallelization axis at a scale we expect to be handled by SuperTask.
| 2 |
1,973 |
DM-9608
|
02/28/2017 11:52:08
|
Fix LTD Dasher title processing (handle removal)
|
The Dasher rendering code that cleans up titles tries to remove handle prefixes, so that {code} SQR-016: Stack release playbook {code} becomes {code} Stack release playbook {code} There's a bug in the filter that turns the input {code} Stack release playbook {code} into {code} tack release playbook {code} This ticket fixes that.
| 0.5 |
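A plausible cause of the DM-9608 symptom above (an assumption for illustration; the actual Dasher filter code is not shown in the ticket) is stripping the handle with str.lstrip, which removes a *set* of characters rather than a prefix:

```python
import re

title = 'Stack release playbook'

# Buggy: lstrip treats 'SQR-016: ' as a character set, so the leading
# 'S' of 'Stack' is stripped too.
print(title.lstrip('SQR-016: '))        # -> 'tack release playbook'

# Safer: remove only an explicit handle prefix.
def strip_handle(text):
    return re.sub(r'^[A-Z]+-\d+:\s*', '', text)

print(strip_handle('SQR-016: Stack release playbook'))  # -> 'Stack release playbook'
print(strip_handle('Stack release playbook'))           # -> 'Stack release playbook'
```

The anchored regex only touches a leading handle of the form LETTERS-digits, so titles without a handle pass through unchanged.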
1,974 |
DM-9615
|
02/28/2017 13:35:29
|
Convert DCR code to use Tasks
|
The DCR code should be written so that the classes inherit from Tasks, and make use of those features.
| 8 |
1,975 |
DM-9616
|
02/28/2017 13:36:56
|
Make DCR command line task
|
It should be possible to run the new DCR template generation code from the command line.
| 3 |
1,976 |
DM-9617
|
02/28/2017 13:40:16
|
Use logging in DCR code
|
The prototype DCR code currently uses print statements to issue warnings and informative messages. These should be converted to use the standard logging system.
| 3 |
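The print-to-logging conversion described in DM-9617 above follows a common pattern, sketched here with Python's standard logging module (the DM stack's own logger, log names, and messages will differ):

```python
import logging

log = logging.getLogger('dcr.templateGeneration')

def build_template(n_exposures):
    # Before: print('Warning: only %d exposures' % n_exposures)
    if n_exposures < 3:
        log.warning('Only %d exposures available; template may be noisy',
                    n_exposures)
    # Before: print('Building DCR template...')
    log.info('Building DCR template from %d exposures', n_exposures)

logging.basicConfig(level=logging.INFO)
build_template(2)
```

Besides letting the caller choose verbosity, deferred %-formatting means the message is only rendered when the level is actually enabled.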
1,977 |
DM-9623
|
02/28/2017 14:58:37
|
Investigate posible matcher improvements
|
This issue is a catch-all for investigative work relating to DM-7366 that has blocked DM-8113. Optimistic Pattern Matcher B appears to perform poorly for dense stellar fields (6-10k reference stars per visit). This issue involves finding solutions, workarounds, and possible matcher improvements for both the current stack matcher and the pure Python implementation from a previous issue in this Epic. This involves finding ways to mitigate false-positive matches reliably and discovering why the stack matcher fails immediately when running on dense stellar fields.
| 20 |
1,978 |
DM-9638
|
03/01/2017 01:10:25
|
empty image tab appears unexpectedly after selecting an histogram column and cancel
|
This is a very weird bug, but I ran into it when I hit the 'cancel' button of the 'chart' drop-down dialog. An empty white image tab titled {{Image Meta Data}} suddenly appears next to the {{coverage}} tab. To test: open tri-view, do a catalog search, then click 'charts', then the 'histogram' radio button. Then select a column (this is the part that makes the bug apparent), then 'ok', then 'cancel'. The empty tab will appear next to the 'coverage' tab. From Trey 3/1: This bug is probably in FireflyViewerManager.js From Tatiana 3/2 Another test case: 1. Load Firefly. 2. Do catalog search (m31, default settings) - tri-view table, xyplot, coverage shows up 3. Click on FITS Header icon - empty "Image Meta Data" tab comes up Looks like the empty white image tab titled {{Image Meta Data}} appears whenever we add a new table which is not coverage or catalog. In the scenarios above, the tables are a column table or a FITS header table - both of them appear to FireflyViewerManager as "image metadata" tables. The bug is in the last line of the converterUtils.js:isMetaDataTable function - Boolean(converter) is always true, because we have an "UNKNOWN" converter. So the question is when this UNKNOWN converter is relevant and how we can separate image metadata tables with an UNKNOWN converter from other tables.
| 2 |
1,979 |
DM-9639
|
03/01/2017 06:15:38
|
Install v13 release into shared stack
|
v13 has now been tagged. Update the shared-stack build script on lsst-dev to install it, and make sure that happens successfully.
| 0.5 |
1,980 |
DM-9646
|
03/01/2017 08:03:26
|
Update DRP release notes for v13
|
The v13 release [has been tagged|https://sw.lsstcorp.org/eupspkg/tags/v13_0.list], but the [DRP release notes|https://confluence.lsstcorp.org/display/DM/Data+Release+Production+WIP+F16+Release+Notes] are a couple of months out of date. Update them and provide them to SQuaRE to accompany the release.
| 2 |
1,981 |
DM-9669
|
03/01/2017 13:19:11
|
Butler(root="foo") should not warn about mapper class instance
|
When initializing a butler as Butler("foo"), the butler warns: "daf.persistence.butler WARN: mapper ought to be an importable string or a class object (not a mapper class instance)". It should not do this - a mapper instance was not passed in.
| 1 |
1,982 |
DM-9675
|
03/01/2017 17:05:02
|
Soften symmetry operator requirements
|
The current symmetry operator enforces strict symmetry, meaning that all pixels that do not have a symmetric partner in the footprint are penalized. To ease this requirement, which may be necessary if some of the flux for one (or more) of the objects lies outside of the footprint, we allow the user to use a less stringent penalty value.
| 1 |
1,983 |
DM-9681
|
03/01/2017 22:08:46
|
Write function to apportion flux based on NMF template weights
|
The main assumption of the current deblender is that galaxies have an identical profile in each band. This is not strictly true, as the color near the center of galaxies is more white. It is worth attempting to use the NMF deblender output as a template, similar to the symmetric templates generated by the SDSS deblender, and re-apportion flux based on the ratio of the templates in each pixel. This may help us recover more accurate colors while still performing better in the wings of blended galaxies than the current deblender. This ticket will refactor a version of {{lsst.meas.deblender.apportionFlux}} to re-apportion flux for NMF templates.
| 1 |
1,984 |
DM-9682
|
03/02/2017 06:14:17
|
Update git-lfs in lsst-dev shared stack
|
The {{git-lfs}} installed in the shared stack on {{lsst-dev}} is pretty elderly. Please provide a newer version.
| 0.5 |
1,985 |
DM-9686
|
03/02/2017 10:21:22
|
Add LSST branding to pipelines.lsst.io
|
Similar to DM-6120, add LSST branding to the pipelines.lsst.io theme. Add: - Edit on GitHub button - LSST branding - LTD edition dashboard links
| 1 |
1,986 |
DM-9710
|
03/03/2017 14:37:13
|
Update the FitsReadTest.java due to the changes in rotation's calculation in FitsRead
|
After DM-9581 was implemented, the FitsRead unit test fails because the prepared test data is no longer correct. The FITS file used in the unit test for rotation was created by running FitsRead.createRotationAngles. Since this method has been modified, the old FITS file is no longer correct. The following needs to be done: * Generate the new test data and store it in firefly_test_data * Re-run the unit test to make sure everything works
| 1 |
1,987 |
DM-9711
|
03/03/2017 14:41:50
|
Clean up meas_base pybind11 wrappers
|
Rebasing meas_base and other packages with `meas.base.Algorithm` subclasses for the DM-9249 changes. This will require new/different pybind11 wrappers for changed C++ interfaces.
| 2 |
1,988 |
DM-9715
|
03/03/2017 16:02:05
|
cppIndex should raise Python's built-in IndexError
|
In order to synthesize {{\_\_iter\_\_}} from {{\_\_getitem\_\_}}, Python apparently requires *exactly* {{IndexError}} to be thrown, so that's what we should throw in these functions. That will require rewriting the testPybind11.cc unit test in a way that has access to Python symbols (which should be done anyway). There will likely be workaround code in afw/table/python/catalog.h that can be cleaned up after this change (I'll just catch OutOfRangeError there and re-throw as IndexError).
| 0.5 |
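The requirement above is Python's legacy iteration protocol: {{\_\_iter\_\_}} is synthesized from {{\_\_getitem\_\_}} only if out-of-range access raises exactly {{IndexError}} (an exception mapped to any other Python type will not terminate the loop). A minimal pure-Python illustration, not the pybind11 wrapper itself:

```python
class Catalog:
    """Sequence-like class relying on the legacy iteration protocol:
    Python calls __getitem__ with 0, 1, 2, ... and stops iterating
    only when exactly IndexError is raised."""

    def __init__(self, records):
        self._records = list(records)

    def __getitem__(self, i):
        if not 0 <= i < len(self._records):
            raise IndexError(i)   # must be IndexError, not a custom type
        return self._records[i]
```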
1,989 |
DM-9740
|
03/08/2017 10:27:38
|
Make lsst_ci run obs_cfht, obs_decam examples in its scons test.
|
Make {{lsst_ci}} run {{obs_cfht}}, {{obs_decam}} examples in its {{scons}} test. The {{test}} step of {{lsst_ci}} should verify that the simple example runs of the {{obs_}} packages succeed. Monitoring the results using {{validate_drp}} is deferred to a later ticket. Just making sure things run successfully through {{processCcd.py}} or the equivalent is the goal here. 1. [x] Add {{validation_data_cfht}}, {{validation_data_decam}} to the required dependencies. 2. [x] Run the command-line based test scripts. 3. [x] Wrap these scripts in the scons test to report pass/fail based on successful completion.
| 2 |
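The pass/fail wrapping described above amounts to running each example command and checking only its exit status, with no metric checks. A hedged sketch (the command names are placeholders, not the actual lsst_ci scripts):

```python
import subprocess
import sys

def run_example(cmd):
    """Run one obs_* example command and report pass/fail.

    Mirrors the scons `test` step described above: success is simply a
    zero exit status; no validate_drp metrics are inspected.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0
```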
1,990 |
DM-9743
|
03/08/2017 16:03:53
|
Field of time column name handling for Time Series Viewer
|
- The raw data table should be updated and the light-curve calculation restarted (removing the existing phase-folded table) if the time column name in the TS viewer changes, because the table is sorted on the new column -- currently nothing happens when the user changes the time column name field to another valid value. - Add a time column name field for LSST
| 2 |
1,991 |
DM-9745
|
03/09/2017 11:36:21
|
Update text relating to baseline CentOS versions for building stack
|
Following RFC-293 we have agreed that we can specify a minimum CentOS 7 version for building the stack. Locate the relevant documentation and update it with this information.
| 0.5 |
1,992 |
DM-9752
|
03/09/2017 15:19:13
|
Add jointcal to lsst_distrib
|
This ticket implements RFC-300. Once jointcal has been ported to pybind11 (DM-9187), it will be ready to join the stack in lsst_distrib. This includes the {{jointcal_cholmod}} dependency, and the optional {{testdata_jointcal}} package (which will be excluded from most installs due to size). Note to [~jhoblitt]: We'll need to add {{testdata_jointcal}} to the various exclude files once this is ready to go. I'll ping you at that time.
| 1 |
1,993 |
DM-9754
|
03/10/2017 12:40:36
|
re-separate python Storage & cpp Storage
|
Due to a naming collision, there are separate classes both called Storage in C++ and Python. They were combined during the pybind11 conversion and are re-separated here.
| 1 |
1,994 |
DM-9757
|
03/10/2017 13:02:44
|
Add stat table usage options to mysql config file
|
For mariadb query optimization (DM-9175) in qserv, the relevant parameters must be set persistently in the mysql configuration file so that the generated stats tables are used to run queries on table chunks. The options to be added are: {code} set use_stat_tables='PREFERABLY'; set optimizer_use_condition_selectivity=3; {code} Their default values are {{'NEVER'}} and {{1}}.
| 1 |
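The {{SET}} statements quoted in the ticket are session-scoped; the persistent equivalent in the server configuration file (assuming a standard MariaDB {{[mysqld]}} section — exact file location varies by deployment) would look like:

```ini
# my.cnf fragment; option values taken from the ticket above
[mysqld]
use_stat_tables = PREFERABLY
optimizer_use_condition_selectivity = 3
```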
1,995 |
DM-9758
|
03/10/2017 13:17:17
|
astshim fails to build on linux
|
astshim fails to build on Linux using Jenkins. Here is a build log: https://ci.lsst.codes/job/stack-os-matrix/22127/label=centos-7,python=py2/console Also fix the build warnings.
| 2 |
1,996 |
DM-9761
|
03/10/2017 14:15:16
|
butlerProxy test does not clean up after itself properly.
|
The tearDown function needs to be updated to reflect recent changes to the path in the setUp function.
| 1 |
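The cleanup pattern the ticket asks for — tearDown removing exactly what setUp created — can be sketched as follows. Names are hypothetical, not the actual daf_persistence test:

```python
import os
import shutil
import tempfile
import unittest

class ButlerProxyTestCase(unittest.TestCase):
    """tearDown removes the same path setUp created, so a later change
    to the path in setUp cannot leave stale directories behind."""

    def setUp(self):
        self.testDir = tempfile.mkdtemp(prefix="butlerProxy-")

    def tearDown(self):
        # Clean up exactly what setUp made.
        if os.path.exists(self.testDir):
            shutil.rmtree(self.testDir)

    def test_dir_exists(self):
        self.assertTrue(os.path.isdir(self.testDir))
```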
1,997 |
DM-9764
|
03/10/2017 15:33:38
|
SOURCE_IO_NO_FOOTPRINTS and related enums should be properly wrapped in pybind11
|
In order to use {{lsst.afw.table.SOURCE_IO_NO_FOOTPRINTS}} in python, it has to be explicitly cast to {{int()}}. This is a bug in the pybind11 wrapper.
| 0.5 |
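The desired Python-side behavior is that the wrapped flag is usable directly where an int is expected, with no explicit {{int()}} cast. A pure-Python illustration of that target behavior using {{enum.IntFlag}} (the flag values here are assumptions for illustration, not taken from the afw headers):

```python
import enum

class SourceFitsFlags(enum.IntFlag):
    """Sketch of properly wrapped table I/O flags: IntFlag subclasses
    int, so values combine with | and pass directly as ints."""
    SOURCE_IO_NO_FOOTPRINTS = 0x1
    SOURCE_IO_NO_HEAVY_FOOTPRINTS = 0x2

def read_flags(flags: int) -> bool:
    # Accepts the enum value directly -- no int(...) cast required.
    return bool(flags & SourceFitsFlags.SOURCE_IO_NO_FOOTPRINTS)
```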
1,998 |
DM-9765
|
03/10/2017 16:02:45
|
Suspicious numerical precision code in Angle
|
The implementation of {{Angle::wrapNear}} includes several corrections for possible truncation error, the last of which is to rescale the wrapped angle by (1 - 2ε) if it is too far below {{refAng}}. This requirement is odd, for example because it is sensitive to the absolute values of {{refAng}} and the angle to be wrapped. However, I have not been able to create a test case that exposes a problem in the wrapping, either by trying to carefully tune the value to give zero wrapped angle or by trying to work with very large angles like 10000π. Attempts to reproduce the suspected bug should continue, and if successful the bug should be fixed.
| 1 |
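For reference while hunting the suspected bug, the basic operation can be stated without any truncation-error corrections: wrap the angle into a half-open interval of width 2π centered on the reference. A straightforward sketch (it deliberately omits the (1 - 2ε) rescaling questioned above, and is not the C++ {{Angle::wrapNear}} implementation):

```python
import math

def wrap_near(angle, ref):
    """Wrap `angle` (radians) into [ref - pi, ref + pi).

    Subtracts the whole number of turns separating `angle` from the
    interval centered on `ref`; large inputs like 10000*pi lose
    precision only through the floating-point subtraction itself.
    """
    turns = math.floor((angle - ref + math.pi) / (2.0 * math.pi))
    return angle - 2.0 * math.pi * turns
```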