| id (int64, 0-5.38k) | issuekey (string, 4-16 chars) | created (string, 19 chars) | title (string, 5-252 chars) | description (string, 1-1.39M chars) | storypoint (float64, 0-100) |
|---|---|---|---|---|---|
| 2,899 | DM-14630 | 05/31/2018 23:04:11 | Use overlay2 on openstack | Some tuning is required to use overlay2 on Centos7. It will be done on the openstack testbench. | 2 |
| 2,900 | DM-14635 | 06/01/2018 12:21:59 | Add support for pagmo2 optimizers in pyprofit | The pyprofit ([https://github.com/lsst-dm/pyprofit]) code originally supported only the optimizers from scipy.optimize, using L-BFGS-B by default. I added support for using pagmo2 ([https://github.com/esa/pagmo2/]), an optimization library with a wider array of optimizers and interfaces to other libraries, as well as both python and C++ interfaces. I tested a few examples and L-BFGS-B with numerical gradients seems to work better than derivative-free optimizers for N<20 parameters, but it remains to be seen how general that result is. Pycharm run configurations for HSC tests are available here; this will be kept up to date (see also DM-14647): [https://github.com/lsst-dm/pyprofit/blob/master/.idea/runConfigurations/pyprofit_fit_hsc.xml] At the time of writing, the command to run the HSC example is python3 $PROJECT_DIR$/examples/hsc.py Galaxy cutout arguments: -radec 134.67675665 0.19143266 -size 19.9asec Arguments to test pagmo vs scipy optimize (enabling automatic gradients for pagmo): -optlib pygmo -grad 1 -optlib scipy The -algo <algorithm> flag can be set, but valid arguments depend on the optlib: [https://esa.github.io/pagmo2/docs/python/algorithms/py_algorithms.html#pygmo.cmaes] [https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html] | 2 |
| 2,901 | DM-14636 | 06/01/2018 12:42:48 | Test ProFit and GalSim galaxy modelling speed and accuracy | I wrote and ran some systematic tests in pyprofit to compare the accuracy and speed of galaxy profile integration and convolution using pyprofit/libprofit and GalSim. When evaluating the profile and convolving with an analytic PSF, GalSim is considerably faster for n<4 and more accurate in most cases, especially for small axis ratios. However, ProFit can be faster for large Sersic indices (n>4) and supports an essentially infinite range, whereas GalSim is limited to 0.3<n<6.2. Furthermore, the tests I ran used 3x oversampling in libprofit, which has further room for optimization - as does libprofit's profile integration scheme. Thus, it's worth keeping support for libprofit in the future. pyprofit code: [https://github.com/lsst-dm/pyprofit] pyprofit arguments to generate a table of benchmarks (in pycharm run configuration format, which is human-readable xml): https://github.com/lsst-dm/pyprofit/blob/master/.idea/runConfigurations/pyprofit_bench_integ_long.xml benchmarking notebook: [https://github.com/lsst-dm/modelling_research/blob/master/jupyternotebooks/pyprofit_benchmarks_plot.ipynb] The plots should render better in a browser now, but I attached some copies anyway. | 3 |
| 2,902 | DM-14637 | 06/01/2018 12:59:18 | Test using Tractor for source modelling | Tractor ([http://thetractor.org], [https://github.com/dstndstn/tractor/]) is a python/C++ package for Bayesian source modelling, much like ProFit ([https://github.com/icrar/ProFit]). However, Tractor has a number of extra useful features, including support for multi-band fitting, integration with astrometry.net and built-in descriptions of multi-component models. I installed Tractor and adapted the existing SDSS test to fit HSC images via the quarry tool ([https://hsc-release.mtk.nao.ac.jp/das_quarry/]); the script is attached. Unfortunately, I found Tractor to be rather sparsely documented and not entirely intuitive to use, with custom optimizers that would be difficult to maintain. Furthermore, it has a lot of python2-isms and some C++ code wrapped with Swig which would require significant effort to make compliant with LSST standards, and so I don't think it's worth pursuing further. It may be worth contacting the Tractor authors (mainly Dustin Lang), though. | 2 |
| 2,903 | DM-14648 | 06/01/2018 13:28:02 | Add GalSim support to pyprofit | pyprofit previously exclusively used libprofit to generate models; I added support for integrating and convolving with empirical PSFs using galsim. Analytic PSFs are a WIP. Changes are mainly in make_model_galsim: [https://github.com/lsst-dm/pyprofit/blob/master/python/profit.py] Pycharm run configurations for HSC tests are available here; this will be kept up to date (see DM-14647): [https://github.com/lsst-dm/pyprofit/blob/master/.idea/runConfigurations/pyprofit_fit_hsc.xml] At the time of writing, the command to run the HSC example is python3 $PROJECT_DIR$/examples/hsc.py Galaxy cutout arguments: -radec 134.67675665 0.19143266 -size 19.9asec Enable galsim: -galsim 1 | 2 |
| 2,904 | DM-14668 | 06/04/2018 08:33:48 | Templates for EUPS table files should not include versions | We no longer recommend the use of versions in the {{setupRequired}} or {{setupOptional}} statements in hand-written table files. Those table files are automatically expanded to include exact versions when products are installed, which provides much more rigorous dependency version handling. The version inequalities we used to include manually never provided much rigor, and provide none now that stack packages are versioned together. | 1 |
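As a hedged illustration of the recommendation in DM-14668 (the package names here are made up for the example), a hand-written table file would now omit version inequalities entirely:

```text
# ups/mypackage.table -- hypothetical example
# Old style (no longer recommended): hand-written version inequalities
#   setupRequired(afw >= 12.0)
# New style: no versions; exact versions are expanded in at install time
setupRequired(afw)
setupOptional(display_firefly)
```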
| 2,905 | DM-14673 | 06/04/2018 14:13:53 | Label for marker disappears when it should not | # Display an image # add a marker # make the marker bigger # move it around # Please notice when any edge of the indication box is out of the image, the label disappears. Fix suggestion: Check the preference of where the indication box should be; when that corner (plus some margin) is in the image, the label should be displayed. Copied from GitHub (6/13/2018): This development fixes the following issues, * when the marker/footprint is moved towards the edge of the display area, the label for the marker/footprint disappears. fixing: showing the label when the marker/footprint is moved out of the plot area and showing the label in an alternate corner location in case the label is being moved out of the plot area. * when the marker/footprint is relocated farther from the center or the target of the HiPS, the footprint is shown out of shape as the image is zoomed. fixing: instead of translating each region component on the image domain as a regular image, each component is re-rendered in world coordinates per the original region description. test: start image search add a marker or footprint overlay move the marker/footprint to anywhere close to the plot border and view the text display start HiPS search (no FOV entered) add a footprint overlay click (relocate) at a point close to the curved border of the HiPS zoom the image to see more detail of the footprint by zooming and rotating the HiPS | 3 |
| 2,906 | DM-14674 | 06/04/2018 15:26:24 | Extend class Chunker in package sphgeom to validate chunk numbers | Extend class Chunker of package sphgeom with a method testing chunk validity for a given partitioning configuration. This method is needed by the Qserv replication system. | 1 |
| 2,907 | DM-14685 | 06/05/2018 12:11:03 | Replace afw.geom with lsst.geom in ap_association | DM-14429 moved afw.geom into lsst.geom. Since ap_association is not yet part of lsst_distrib this change did not propagate through the package. This ticket will change all instances of afwGeom to geom. | 2 |
| 2,908 | DM-14690 | 06/05/2018 13:01:59 | Add ability to construct centered boxes | As discussed on [GitHub\|https://github.com/lsst/afw/pull/357#discussion_r192899255], it would be useful if {{Box2I}} and {{Box2D}} had methods for creating boxes centered on a particular (fractional) point. Given that the {{Box*}} classes cannot afford more constructors, and we still don't have a good way to discover functions, this capability is best implemented as static factory methods: {noformat} Box2I Box2I::makeCenteredBox(Point2D const& center, Box2I::Extent const& size); Box2D Box2D::makeCenteredBox(Point2D const& center, Box2D::Extent const& size); {noformat} I do not plan to add a {{Box2I::makeCenteredBox(Point2I, Extent)}} method: I'm worried that it might lead to confusion with {{(Point2D, Extent)}}, and as [~jbosch] pointed out it's not clear what the best behavior is for even-sized boxes. The desired functionality is already present in the implementation for {{Exposure::getCutout}}, so this work is just a matter of factoring the code, writing new unit tests, and being very careful about the distinctions between {{Box2I}} and {{Box2D}}. | 2 |
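The factory described in DM-14690 amounts to rounding a fractional center to an integer box origin; a minimal arithmetic sketch (not the actual afw implementation, which must also handle {{Box2D}} and empty boxes):

```python
def make_centered_box(cx, cy, width, height):
    """Return (x0, y0, x1, y1), an integer box of the given size whose
    center is as close as possible to the fractional point (cx, cy)."""
    # The center of an integer box [x0, x1] is (x0 + x1) / 2, so the
    # origin is the rounded value of center - (size - 1) / 2.  Note the
    # even-sized ambiguity mentioned in the ticket: a 4-pixel box cannot
    # be centered exactly on an integer coordinate.
    x0 = int(round(cx - (width - 1) / 2.0))
    y0 = int(round(cy - (height - 1) / 2.0))
    return (x0, y0, x0 + width - 1, y0 + height - 1)
```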
| 2,909 | DM-14694 | 06/05/2018 16:26:42 | Pixel coordinates off by half in Firefly relative to LSST conventions | When using {{lsst.afw.display.dot}} with the Firefly backend, positions appear at a half-pixel offset from where they should; the LSST convention is to label the *center* of the lower-left pixel (0, 0). I also see the same half-pixel offset in the mouse pixel coordinates displayed in the upper right corner of the Firefly GUI, so I'm guessing this is just because Firefly is (presumably) using a coordinate convention in which integers are pixel boundaries rather than pixel centers. If it's possible to address that, it'd be very nice to also include the "xy0" offset our image objects carry as well (note that {{dot}} already takes care of this offset correctly, presumably by removing it in Python before passing the position to Firefly). I'd really appreciate a quick fix so I don't have to work around this in the tutorial notebooks for LSST@Europe3 next week. If the only short-term option is a workaround for {{dot}} in the display_firefly Python code, I can do that much myself; please let me know if I should. | 2 |
| 2,910 | DM-14695 | 06/05/2018 16:32:31 | Mask overlays missing with more than one call to mtv on the same display object | When using the Firefly backend for {{lsst.afw.display}}, the first {{MaskedImage}} or {{Exposure}} displayed via {{Display.mtv}} has a wonderful mask overlay (much better than DS9's!). Unfortunately, if I call {{mtv}} a second time on the same {{Display}} instance, the mask overlays do not appear, even though the image itself is updated. If I close the image window in Firefly itself before calling {{Display.mtv}}, things work as expected. If a quick fix is possible, it'd be great to get it in this week so I can avoid demoing the workaround at LSST@Europe3 next week (but it's not a terrible workaround if a quick fix is not possible). | 2 |
| 2,911 | DM-14699 | 06/06/2018 09:16:51 | Silence NumPy FutureWarnings in meas_deblender | NumPy is changing the default behavior of {{numpy.linalg.lstsq}}'s {{rcond}} parameter, and now warns on any call that uses the default. meas_deblender does that extensively, resulting in lots of warnings. | 0.5 |
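The FutureWarning in DM-14699 comes from relying on {{rcond}}'s implicit default; passing it explicitly silences the warning. A generic sketch (not the meas_deblender code itself), using {{rcond=None}} to request the new machine-precision cutoff:

```python
import numpy as np

# Overdetermined system: fit y = m * x through the points (1, 2) and (2, 4).
a = np.array([[1.0], [2.0]])
b = np.array([2.0, 4.0])

# Passing rcond explicitly avoids NumPy's FutureWarning about the changing
# default; rcond=None selects the new machine-precision-based cutoff.
x, residuals, rank, singular_values = np.linalg.lstsq(a, b, rcond=None)
```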
| 2,912 | DM-14712 | 06/06/2018 19:43:08 | PPDB Performance test on NCSA Oracle instances | Run ap_proto against an Oracle instance at NCSA and extract some performance numbers from these tests. | 20 |
| 2,913 | DM-14715 | 06/07/2018 10:46:41 | Fix deployment issues for recent tech notes | Please sort out travis and deployment for recent tech notes: DMTN-080 (which was created manually), 81, 82, 83, and 84. The latter 4 were created by sqrbot but deployment credentials were not created (I did turn on travis manually). | 0.5 |
| 2,914 | DM-14725 | 06/07/2018 13:45:12 | Eliminate explicit use of ndarray::EigenView in C++ code | ndarray::EigenView relies on undocumented internals of Eigen. Stop using it, in order to allow upgrading Eigen. The full conversion consists of two parts: 1) Stop using {{ndarray::EigenView}} explicitly in C++ code. That is what this ticket is about. 2) Stop using {{ndarray::EigenView}} indirectly via {{ndarray::Array::asEigen}} by having that function return an {{Eigen::Map}}. That is DM-14728. | 1 |
| 2,915 | DM-14727 | 06/07/2018 14:20:42 | Add header service requirements into model and make RFC | Take the requirements spreadsheet from [~felipe], add it into the MagicDraw model in LDM-638, make the document, and submit to CCB for review. | 2 |
| 2,916 | DM-14732 | 06/08/2018 06:21:01 | Regions appear on subsequent afw Displays with Firefly backend | When symbols are overlaid on an {{afw.display.Display}} with the Firefly backend, subsequent {{mtv}} commands will have the regions layer from the first display also overlaid. This regions layer is not removed by using {{erase}}. | 2 |
| 2,917 | DM-14734 | 06/08/2018 10:23:22 | Allow zoom to be set before mtv in afw Displays for Firefly backend | A common pattern for using {{afw.display}} is to display an image with a zoom level. With the Firefly backend, currently it is necessary to {{mtv}} (display) the image and then issue a zoom command. This displays the entire image, which can take some time for large images, and then applies the zoom. This improvement is for the backend to keep track of the last commanded zoom level, and to include it in the call to {{FireflyClient.show_fits}}. | 2 |
| 2,918 | DM-14741 | 06/08/2018 14:50:19 | Set Cores for Firefly docker container | Until we go to Java 10 we need a way to control the number of cores the firefly docker container uses. The docker container will now allow for an environment variable JVM_CORES. | 1 |
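A sketch of how an entrypoint might consume such a variable (the variable name {{JVM_CORES}} comes from the ticket; the fallback behavior is my assumption, and the real container entrypoint is presumably a shell script):

```python
import os

def jvm_cores(environ=os.environ, default=0):
    """Number of cores to pass to the JVM; the default of 0 stands in
    for 'let Java decide' (an assumption for this sketch)."""
    try:
        return int(environ.get("JVM_CORES", default))
    except ValueError:
        # Ignore a malformed value rather than failing container startup.
        return default
```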
| 2,919 | DM-14742 | 06/08/2018 15:38:47 | Let ap_verify run ap_pipe with dataset-specific configs | {{lsst.ap.verify.DatasetIngestTask}} currently accepts configs from a dataset, allowing for dataset-specific file masks and similar features. It would be useful for {{ap_verify}}'s calls to {{ApPipeTask}} to have the same capability, so that dataset-specific options (for example, the distinction between "deep" and "goodSeeing" templates) can be set for the instance(s) of {{ApPipeTask}} created by {{ap_verify}}. Unlike {{DatasetIngestTask}}, {{ApPipeTask}} does not need special support for obs-specific configs as it already provides them automatically as a {{CmdLineTask}}. | 2 |
| 2,920 | DM-14747 | 06/11/2018 02:04:33 | Change Timeouts for webserv to 2 hours | Make sure the blocking timeout for webserv is close to 2 hours. | 2 |
| 2,921 | DM-14749 | 06/11/2018 06:57:21 | Update Qserv deploy documentation | Update READMEs and clean up examples. | 1 |
| 2,922 | DM-14758 | 06/11/2018 13:00:33 | Text defined in region 'text' is not displayed | The following region text sample has no text display: text 2101 2131 "my label" | 1 |
| 2,923 | DM-14759 | 06/11/2018 13:08:31 | Support XY0 for image readout and make it the default in slate.html | Add support for XY0 using LTV1/2 or CRVAL\{1/2}A. XY0 will be the default for the slate.html entry point. Therefore it will be the default for the API. | 2 |
| 2,924 | DM-14763 | 06/11/2018 22:38:37 | Improve region ID handling in display_firefly | When a display is redefined, the region layer Id is set to None. When a display is erased, the region layer Id is also set to None. When a display is redefined and then a region is added, the previous regions show up as well. Since for {{afw.display}} we always use "lsstRegions" plus the frame number, there is no need to set the regionLayerId to None. Alternatively, if a display is not re-defined in a Python session then the problem is avoided. | 2 |
| 2,925 | DM-14772 | 06/12/2018 13:50:31 | Allow "." and "_" in LTD edition names for LSST Science Pipelines EUPS tags | Allow "." and "_" in LTD edition names since those are used for Git and EUPS tags by the LSST Stack (LSST Science Pipelines). This is needed to properly version and deploy pipelines.lsst.io from the LSST Science Pipelines' code base. | 0.5 |
| 2,926 | DM-14774 | 06/12/2018 17:18:50 | Upgrade Plotly library | We are using plotly-1.28.2.min.js. As of today, the latest release is plotly-1.38.3.min.js. A lot of bugs in scattergl have been fixed; we might want to use scattergl for larger scatter plots. One of the bugs still present in scattergl is disappearing error bars on relayout. For this reason, we might not want to use scattergl as the default plot when the number of points is reasonably low. | 8 |
| 2,927 | DM-14778 | 06/13/2018 11:56:51 | Fix asinh | We still do not have asinh quite right; the following should be done: * Determine what is wrong, maybe why it is wrapping? * Understand why we have a beta parameter exposed to the user and not a Q parameter. You might want to talk to Lijun about this since she did the original work but no longer works for LSST. * From the UI, determine if we need a slider entry instead of a number text box entry. Note: when asinh was originally done we had not yet brought in a slider component. It makes more sense now. * Work with [~shupe] to validate the implementation plan * FitsRead.java has most of the stretch code as static methods. Move these out of FitsRead.java into a Stretch.java * [~shupe] thinks we have the same problem with Power law Gamma. If that is a simple fix after the above work then fix it, otherwise we will make a second ticket. 6/25/2018 This ticket will deal with changing the asinh algorithm to be consistent with the asinh stretch implemented in [https://github.com/lsst/afw/blob/master/python/lsst/afw/display/rgb.py] and [https://github.com/astropy/astropy/blob/master/astropy/visualization/lupton_rgb.py] The parametrization using Q is explained in the footnote on page 3 of [https://arxiv.org/pdf/astro-ph/0312483.pdf] The algorithm will accept a Q parameter from 0.1 to 10, which should be controlled by a slider. The mapping from flux to color value is {{255 * 0.1 * asinh(Q*(x-xMin)/(xMax-xMin)) / asinh(0.1*Q)}} Below xMin, the color will be 0; above xMax, the equation has to be applied and then clipped to 254 (because we use 255 for the blank pixel). Per [~shupe] {quote}Zscale + linear is often used with CCD images to show faint features. I am thinking of Lupton's asinh as keeping zscale + linear at low values, and bending the stretch function over at intermediate and large intensities. It uses only a fraction of the color range for the max value from zscale; but it will show brighter features above zscale. Lupton's formulation assumes that xMax is far below the bright features in the image. He wants to see the features above xMax as computed by zscale. This is different from how I have traditionally thought about these stretches. {quote} ________________ The first and last items in the original description above were fixed by DM-14780. Stretch code will stay in FitsRead.java to avoid conflicts with Lijun's IRSA-1498 ticket, which takes care of code refactoring. Per [~shupe], it would also be helpful to display boundaries calculated by the z-stretch algorithm rather than the original data boundaries. I will see if I can add the calculated values to the z-stretch dialog. | 8 |
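The flux-to-color mapping quoted in DM-14778 can be written down directly; a minimal sketch (variable names are mine, and the clip just below 255 reflects the ticket's note that 255 is reserved for blank pixels):

```python
import math

def asinh_stretch(x, x_min, x_max, q):
    """Map flux x to a color value in [0, 254] using the Lupton asinh
    stretch: 255 * 0.1 * asinh(Q * frac) / asinh(0.1 * Q)."""
    if x <= x_min:
        return 0.0  # below the lower range: black
    frac = (x - x_min) / (x_max - x_min)
    value = 255 * 0.1 * math.asinh(q * frac) / math.asinh(0.1 * q)
    # Above x_max the equation is still applied, then clipped; 255 is
    # reserved for the blank pixel, so clip to 254.
    return min(value, 254.0)
```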
| 2,928 | DM-14779 | 06/13/2018 12:27:19 | make last Git commit ID available in Firefly build | As we have more projects using Firefly at different stages of its development, it is necessary for a project to know which version of Firefly it is using. The Git commit ID uniquely identifies the point of the build. * provide an API call to get the commit ID string * display the commit ID as part of the build information ** dev currently: "v1.0.0_Development-0 Built On:Sat Jun 09 18:05:11 PDT 2018" ** IRSA ops: v3.0.0_Final-2629 Built On:Tue May 15 16:07:17 PDT 2018 We may want more discussion on how/where to display this in operations. Implemented as the following: Show a detail view of the version info when the shortened version is clicked. * click on the version info on the lower-right corner to see the details Exposes the same information via the JS API as well as from the Java server-side environment. * In JS, open a demo page, open the console. In the console, enter {{> window.firefly.util.getVersion()}} | 2 |
| 2,929 | DM-14780 | 06/13/2018 14:47:06 | Data values in lower and upper range are not mapped correctly for power law and asinh stretch | The attached screenshot shows the problem caused by the bug: data values <= Lower range should be black, data values >= Upper range should be white. | 2 |
| 2,930 | DM-14781 | 06/13/2018 17:22:51 | Upgrade Eigen to 3.2.10 | As part of DM-14305 upgrading Eigen to 3.3 we need to transition to a version new enough that pybind11 supports it. The earliest version of Eigen that our pybind11 supports is 3.2.7. This intermediate step is required in order to release a version of ndarray that no longer uses EigenView for pybind11, and thus allows us to switch to the standard pybind11 wrappers. I will go to 3.2.10, the latest 3.2 release, in order to get as many bug fixes and other improvements as we can. | 1 |
| 2,931 | DM-14785 | 06/14/2018 12:01:33 | Make sure .user_setups lands in the right place | Currently, the system looks for the {{.user_setups}} file in {{$HOME}} and adds a template file there if it doesn't exist. It should, instead, be looking in {{$HOME/notebooks}} as that is where the file is actually sourced. | 0.5 |
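A sketch of the corrected lookup described in DM-14785, paths only (the real service also writes a template file when none exists; the helper name is mine):

```python
import os

def user_setups_path(home=None):
    """Return the path the environment should check and source: the
    .user_setups file under $HOME/notebooks, not $HOME itself."""
    home = home or os.path.expanduser("~")
    return os.path.join(home, "notebooks", ".user_setups")
```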
| 2,932 | DM-14794 | 06/15/2018 10:10:21 | Notebooks and Scientific Input for SQuaRE Science Platform services | This epic covers scientific input into SQuaRE Science Platform services, including the development of example notebooks appropriate to Commissioning and other early users, and science-level design input on aspects of SQuaRE-developed services. | 40 |
| 2,933 | DM-14804 | 06/15/2018 13:40:46 | SelectionSet.load_single_package fails silently | When reading in the attached file via _lsst.verify.SelectionSet.load_single_package_, it returns an empty set. This means there is something wrong with the file, but the load fails silently. It should yell at me as to the issue with what I'm passing it. | 0.5 |
| 2,934 | DM-14806 | 06/15/2018 16:29:25 | convert stack demo to be a regular eups product | The "demo" currently exists as a special snowflake feature of jenkins jobs and is not packaged as an eups product. A script is currently invoked by the CI machinery after an {{lsstsw/lsst_build}} or {{eups distrib install}} has completed which downloads the demo repo from github as a tarball. An unfortunate consequence of this implementation is that changes on the master branch of the demo result in previous git tags no longer working with the demo when built by {{ci-scripts/lsstsw}}, and requires knowledge of the correct git ref to use after a direct {{eups distrib install}}. This also presents an irritation when tagging an official release, as there is no source of truth as to where the tag should be located, requiring human intervention. For at least the third time, the demo on master has changed during the release process and now fails with the current release candidate. A much less error-prone solution would be to convert the demo into a regular eups product, which is a dependency of either {{lsst_distrib}} and/or {{lsst_ci}}. This would result in demo metadata being incorporated into eups distrib tags, solving the science-pipeline/demo version mismatch problem both for end users with a local installation and under CI. I believe the basic tasks to accomplish this would be: - convert {{lsst/lsst_dm_stack_demo}} into an eups product – essentially add a {{ups}} dir and a table file which depends on {{lsst_apps}} - add {{lsst_dm_stack_demo}} as a dependency of {{lsst_ci}} and/or {{lsst_distrib}} - add a test script under {{lsst_ci/tests/}} to trigger a demo run - remove {{lsst-sqre/ci-scripts/runManifestDemo.sh}} and update {{lsst-sqre/ci-scripts/lsstswBuild.sh}} to not run the demo - update various jenkins jobs in {{lsst-sqre/jenkins-dm-jobs}} to {{setup lsst_dm_stack_demo}} rather than invoking {{runManifestDemo.sh}} | 5 |
| 2,935 | DM-14812 | 06/18/2018 12:20:16 | Make alert printer print every Nth alert | Current alert_stream alert deserializer/printer will print all received alerts to stdout, which will overwhelm the logs. Change this to every Nth alert as a command line argument. | 0.5 |
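The change requested in DM-14812 is a simple modulo gate on the consumer loop; a hedged sketch (the real alert_stream printer wraps a Kafka consumer, and the function name here is mine):

```python
def print_every_nth(alerts, n):
    """Print only every nth alert (1-based) instead of all of them,
    returning how many were printed."""
    printed = 0
    for i, alert in enumerate(alerts, start=1):
        if i % n == 0:  # n would come from a command-line argument
            print(alert)
            printed += 1
    return printed
```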
| 2,936 | DM-14814 | 06/18/2018 13:24:19 | Change invalid pixel handling by Exposure::getCutout | Currently, {{Exposure::getCutout}} returns "blank" pixels (in the sense of e.g. {{Exposure(dimensions)}}) for parts of the cutout that extend off the edge of the original image. The more natural behavior to veteran stack users is to fill the off-image pixels with the value of [{{afw::math::edgePixel}}\|http://doxygen.lsst.codes/stack/doxygen/x_masterDoxyDoc/namespacelsst_1_1afw_1_1math.html#a44ebf02a4fe421404fe9cbd6e9a6f699]. I propose that, if the {{NO_DATA}} flag has been deleted (e.g., with {{Mask::removeMaskPlane}}), {{getCutout}} should return edge pixels with a blank mask but with value and variance conforming to {{edgePixel}}'s behavior. However, I'm open to throwing an exception instead. This ticket shall modify {{getCutout}} as described and add unit tests for pixel values, which we neglected to do before. | 2 |
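The behavior proposed in DM-14814 (fill off-image pixels of a cutout with an edge value instead of "blank" zeros) can be sketched with plain NumPy; the real code operates on {{Exposure}}/{{MaskedImage}} objects, and NaN here stands in for {{afw::math::edgePixel}}:

```python
import numpy as np

def cutout_with_fill(image, x0, y0, width, height, fill=np.nan):
    """Extract image[y0:y0+height, x0:x0+width]; pixels that fall
    outside the image are set to `fill` rather than left at zero."""
    out = np.full((height, width), fill, dtype=float)
    # Overlap between the requested box and the image bounds.
    ys0, ys1 = max(y0, 0), min(y0 + height, image.shape[0])
    xs0, xs1 = max(x0, 0), min(x0 + width, image.shape[1])
    if ys0 < ys1 and xs0 < xs1:
        out[ys0 - y0:ys1 - y0, xs0 - x0:xs1 - x0] = image[ys0:ys1, xs0:xs1]
    return out
```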
| 2,937 | DM-14819 | 06/18/2018 15:06:21 | Refactor LoadReferenceObjectsTask for SuperTask compatibility | Some of the work currently done by LoadReferenceObjectsTask - the actual lookup of which shards overlap an area of interest - will be done by preflight (outside Task code) in the Gen3 era. That will ultimately let us greatly simplify it, but in the meantime we will need to make sure it provides APIs appropriate for both CmdLineTask usage and SuperTask usage. Reference catalogs also have a lot in common with catalogs of simulated objects we might want to insert into real images - they'll have different columns, but we'd want to shard them and look them up the same way. We should look for opportunities in this refactor to support that use case as well. | 8 |
| 2,938 | DM-14822 | 06/18/2018 15:52:08 | Gen3 get/put with DatasetRef only | We've accidentally made it a bit painful to use the DatasetRef and Butler classes together, because Butler's {{get}} and {{put}} interfaces require one to unpack the contents of a DatasetRef. Make sure {{Butler.get(datasetRef)}} and {{Butler.put(obj, datasetRef)}} work. Note that the former is not just a renaming of {{Butler.getDirect}}; it should not permit loading datasets outside the {{Butler}}'s collection, and it should not require the {{DatasetRef}} to have already been associated with a dataset_id (though it should take advantage of one, if present). | 2 |
| 2,939 | DM-14824 | 06/18/2018 16:12:05 | Add syntactic sugar for ConfigFields of *DatasetConfigs | As we discovered in the SuperTask conversion kickoff meeting, writing (e.g.) {code:java} myInput = ConfigField( dtype=InputDatasetConfig, doc="Input image dataset type definition.", default=InputDatasetConfig( name="name", units=("Camera", "Visit", "Sensor"), storageClass="Exposure" ) ){code} is much too verbose. Add custom {{Field}} objects so the above can be written as {code:java} myInput = InputDatasetField( doc="Input image dataset type definition.", name="name", units=("Camera", "Visit", "Sensor"), storageClass="Exposure"){code} | 2 |
| 2,940 | DM-14827 | 06/18/2018 17:44:47 | Update WISE table references to use logical db name from metaserv | Test WISE metadata and update the references to WISE tables to use the logical db. | 1 |
| 2,941 | DM-14834 | 06/19/2018 10:11:52 | Use pybind11's native Eigen wrapping instead of ndarray EigenView | Update our pybind11 wrappers as needed to use pybind11's native Eigen support. This is necessary in order to upgrade to Eigen 3.3. Changes include: - Build ndarray without EigenView support and with native pybind11 Eigen support. The latter is not necessary but avoids the need to change all our wrappers that wrap code that uses Eigen to explicitly import pybind11/eigen.h - Update code and tests as needed. For example {{geom}} has one failing test because pybind11's Eigen wrappers are more lenient than ndarray, so it is now possible to construct an lsst.geom.Extent2I from an lsst.geom.Extent2D. | 2 |
| 2,942 | DM-14837 | 06/19/2018 11:02:59 | Firefly improvements to intro-with-globular notebook | Some improvements are available for the Firefly parts of [~jbosch]'s intro-with-globular notebook used in the Lyon 2018 hands-on session and later demos. * Define the Display so the proper URL is available before the IFrame cell, so that `lsp-demo.lsst.codes` need not be hardcoded into the notebook * Dial down mask transparency via afw.display * Show how the {{firefly_client.plot}} convenience module can upload the catalog and overlay on the images. | 1 |
| 2,943 | DM-14840 | 06/19/2018 14:03:05 | Make mask transparency and color "sticky" in display_firefly | {{afw.display}} provides methods for setting mask plane colors and transparencies. These should be "sticky" in the display_firefly backend, meaning that once they are set for a Display object, they should be applied when an image is sent again to that display. The display_firefly backend also needs to ignore masks whose color is set to "ignore" or "IGNORE". Related to this, {{afw.display}} provides {{setDefaultMaskTransparency}} and {{setDefaultMaskPlaneColor}} which are used when Display instances are created. Fix a small bug in {{setDefaultMaskTransparency}} and verify that both of these work with the display_firefly backend. | 3 |
| 2,944 | DM-14841 | 06/19/2018 14:28:51 | NERSC password file has moved so fd leak checker fails tests | As reported on Slack, {{pipe_tasks}} is failing to build at NERSC with: {code} File open: /var/lib/sss/mcpath/passwd {code} {{utils}} has a filter for {{/var/lib/sss/mc/passwd}} (from DM-7186) so it seems that file moved in the past few weeks. Update the filter to be more general. | 0.5 |
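A more general filter for DM-14841, matching both the old and new sssd passwd paths, could be a single regular expression (a sketch; the actual shape of the utils filter list may differ):

```python
import re

# Matches /var/lib/sss/mc/passwd, /var/lib/sss/mcpath/passwd, and any
# future relocation of the sssd cache passwd file under /var/lib/sss/.
SSS_PASSWD = re.compile(r"^/var/lib/sss/.*/passwd$")

def is_sss_passwd(path):
    """True if `path` looks like an sssd passwd cache file that the
    fd leak checker should ignore."""
    return bool(SSS_PASSWD.match(path))
```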
| 2,945 | DM-14842 | 06/19/2018 15:00:14 | Fix deprecation warnings from PropertyList/Set.get | Fix deprecation warnings from PropertySet.get and PropertyList.get by switching to getScalar or getArray. | 3 |
| 2,946 | DM-14844 | 06/19/2018 16:09:19 | Two FITS tests in afw assume they run relative to AFW_DIR | Tests in test_propertyListPersistence.py and test_footprint.py assume that test FITS files are in {{tests/data}} rather than looking for them relative to the location of the test file. This breaks them if you attempt to run the afw tests from outside AFW_DIR. The fix is to use {{__file__}}. There may also be an executable C++ test that fails in the same way. | 0.5 |
| 2,947 | DM-14846 | 06/19/2018 17:21:38 | display_ginga won't build | The {{display_ginga}} package has configuration issues that prevent it being built. Specifically, there is a missing import and a missing package setup. | 1 |
| 2,948 | DM-14847 | 06/19/2018 18:09:25 | Missing keys in association DB | The changes to the DB schema in DM-14620 have invalidated -flux handling in {{ap_pipe}} and- metrics computations in {{ap_verify}}. This prevents {{ap_verify}} or its tests from running. Fix things so that all unit and integration tests pass. | 1 |
| 2,949 | DM-14848 | 06/19/2018 18:15:21 | ingest_dataset.py must be run from final working directory | {{ingest_dataset.py}} internally uses relative paths for calibration files, causing failed defect lookups if later tasks are run from a different directory than {{ingest_dataset.py}} was. Use absolute paths throughout and test that the resulting repositories can be passed as-is to {{ap_pipe}}. | 1 |
| 2,950 | DM-14849 | 06/19/2018 18:43:18 | Metadata-based metrics not measured | Integration tests of {{ap_verify}} produce all expected metrics extracted from the Butler repository or the association database, but none from {{ApPipeTask}} metadata -- not even timing measurements. Find out what is going on and add integration tests and, if possible, regression tests to {{ap_verify}}. | 2 |
| 2,951 | DM-14861 | 06/20/2018 16:16:23 | Disable CC requirement for obs_base | Remove the need for a C++ compiler, as obs_base is python-only. | 1 |
| 2,952 | DM-14863 | 06/21/2018 11:47:48 | Place initial data backbone requirements in model and generate document | [~mgower] has sent me an initial set of requirements for the Data Backbone Services. I will put them in the model and upload an initial draft to DocuShare. | 1 |
2,953 |
DM-14867
|
06/21/2018 17:57:53
|
Firefly API: user can't replace a failed image coverage
|
When using the Firefly API (i.e. the Gator case), if the image search fails, the user can't replace the image anymore because the toolbar is no longer accessible. To test the problem: Go to Gator, do a 100" search on WISE around position '298.0 29.87', change the coverage image to SDSS u band, then try to change back again. Please fix, thanks.
| 1 |
2,954 |
DM-14870
|
06/22/2018 10:50:02
|
eups.lsst.codes sync from s3 does not update objects of identical size
|
[~fritzm] reported yesterday that the {{qserv-dev}} eups distrib tag was not updating after being published. The s3 object was confirmed to be correct but was not syncing to the k8s service. It was assumed at the time that this was a random case of s3 eventual consistency taking an excessively long time and the k8s pod was always getting an old version of the object. However, > 12 hours seems excessive for this. Upon further investigation this morning, it appears that {{aws s3 sync}} from {{awscli}}, which is used to perform the sync, *does not* checksum the local file to determine if it is in-sync with s3. All it does is look at the file size by default, and can optionally compare timestamps (which isn't enabled) -- there is no option to force checksums (ie., {{rsync -c}}). This is rather unfortunate as s3 does have an {{ETag}} (md5) for all objects. Eg., {code:java} $ aws s3api head-object --bucket eups.lsst.codes --key stack/src/tags/qserv-dev.list { "AcceptRanges": "bytes", "LastModified": "Fri, 22 Jun 2018 01:14:45 GMT", "ContentLength": 2495, "ETag": "\"04d0d2da6b4b1107bb03453177813201\"", "VersionId": "null", "ContentType": "binary/octet-stream", "Metadata": {} } $ aws s3 cp s3://eups.lsst.codes/stack/src/tags/qserv-dev.list . download: s3://eups.lsst.codes/stack/src/tags/qserv-dev.list to ./qserv-dev.list $ md5sum qserv-dev.list 04d0d2da6b4b1107bb03453177813201 qserv-dev.list {code} Demonstration that the s3 object and the stale eups.lsst.codes file are the same size: {code:java} [root@pkgroot-rc-jh4lf /]# grep BUILD= /var/www/html/stack/src/tags/qserv-dev.list #BUILD=b3668 [root@pkgroot-rc-jh4lf /]# ls -la /var/www/html/stack/src/tags/qserv-dev.list -rw-r--r-- 1 root root 2495 Jun 21 12:42 /var/www/html/stack/src/tags/qserv-dev.list $ grep BUILD= qserv-dev.list #BUILD=b3670 $ ls -la qserv-dev.list -rw-rw-r--. 1 jhoblitt jhoblitt 2495 Jun 21 18:14 qserv-dev.list {code}
| 1 |
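The content-based comparison that {{aws s3 sync}} lacks could look roughly like the sketch below. Note that comparing a local md5 against an object's {{ETag}} only works for single-part uploads, and the helper names here are made up for illustration.

```python
import hashlib

def md5_hex(path):
    """Compute the md5 hex digest of a local file, reading in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_in_sync(local_path, s3_etag):
    """Compare a local file against an S3 ETag string (the ETag is the
    object's md5 for single-part uploads; quotes are stripped first)."""
    return md5_hex(local_path) == s3_etag.strip('"')
```

This is essentially what {{rsync -c}} does, and what a sync based on the {{ETag}} from {{aws s3api head-object}} could do.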
2,955 |
DM-14874
|
06/22/2018 13:11:50
|
Add options to select against duplicate coadd sources
|
We should only ever be plotting the "primary" sources for coadd QA-ing. The description for the *detect_isPrimary* flag is: {code:java} doc="true if source has no children and is in the inner region of a coadd patch and is in the inner region of a coadd tract"{code} We are currently selecting on *deblend_nChild* *= 0* (which is appropriate at the visit level).
| 5 |
2,956 |
DM-14878
|
06/24/2018 20:43:02
|
generateAcronyms.py seems to not complain on missing acronym
|
I noticed that the acronym DBA did not show up in my generated acronyms. The script produces a lot of output which is not needed, so it may have complained and I missed it. When I added DBA to myacronyms.txt it showed up fine .. now I wonder if I am missing others ... [~tjenness] I may try to look at this next week :) unless it's just too verbose output
| 1 |
2,957 |
DM-14879
|
06/25/2018 06:24:24
|
Use last Qserv version in Qserv deploy
|
Update the Qserv version in the Qserv deploy tool to point at the latest container image.
| 2 |
2,958 |
DM-14885
|
06/25/2018 17:24:26
|
Use either nanoseconds or MJD in PPDB CcdVisit table
|
The prompt product database currently has a column called expMidptMJD which is in units of seconds. These timestamps are constructed via {{exposure.getId().getVisitId().getDate().nsec() * 1.e-9}}, which means zero seconds corresponds to the daf_base DateTime nanosecond zeropoint. However, none of this is clear without some digging. Since this field has MJD in the name, it would make sense to have MJD as units here. Another option would be to return nanoseconds and change the column name accordingly, since that is the default time unit for our daf_base DateTime objects. Either way, MJD should be a float and nanoseconds should be an int in order to have consistent behavior with DateTime.
| 2 |
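For reference, a conversion along the lines proposed above might look like this. It assumes a unix-epoch (1970-01-01) nanosecond zeropoint and ignores the TAI/UTC distinction that daf_base DateTime actually tracks, so treat it as a sketch rather than the real implementation.

```python
MJD_UNIX_EPOCH = 40587.0     # MJD of 1970-01-01T00:00:00
SECONDS_PER_DAY = 86400.0

def nsec_to_mjd(nsec):
    """Convert integer nanoseconds since the unix epoch to a float MJD,
    consistent with the int-nanoseconds / float-MJD split proposed above."""
    return MJD_UNIX_EPOCH + (nsec * 1e-9) / SECONDS_PER_DAY
```

Keeping nanoseconds as an int and MJD as a float avoids precision loss in the stored timestamp while giving a conventionally-named, conventionally-united column.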
2,959 |
DM-14886
|
06/25/2018 18:49:10
|
multiple filters on a heat map failed
|
To reproduce: # do a search on the WISE Multiepoch Photometry table with the example polygon: 20.7 21.5, 20.5 20.5, 21.5 20.5, 21.5 21.5 # use the rectangle filter over the image; a tiny rectangle returned about 2000 entries # apply any further filter on the image, a table column, or the plot, and it will fail. The error message looks like this: Column not found: decimate_key(ra,dec,20.5099374,20.500176,100,100,0.00990003299934003,0.009995321952220013) Analysis guess: this only happens when the plot is a heatmap.
| 3 |
2,960 |
DM-14894
|
06/26/2018 11:56:33
|
Add pipeBase.timeMethod to score and match methods in AssociationTask
|
Timing the match scoring and matching methods would be useful for rooting out slowdowns in the association step of ap_pipe.
| 1 |
2,961 |
DM-14910
|
06/26/2018 16:13:34
|
Firefly zscale differs from other implementations
|
The upper output (z2) of Firefly's zscale implementation is often significantly greater than z2 from other implementations, including those in ds9, astropy and lsst.afw.display. Comparison on coadd file calexp-HSC-I-9813-4,4.fits: ||Implementation||Contrast (%)||Nsamples||Samples_per_line||z1||z2|| |Firefly|25|600|120|-0.0491993|0.409582| |ds9|25|600|120|-0.0491993|0.145803| |lsst.afw.display.displayLib|25|600|N/A|-0.0577342|0.155637| |astropy.visualization|25|600|N/A|-0.0550090|0.170189| |Firefly|25|1000|120|-0.0487988|0.951774| |ds9|25|1000|120|-0.0487988|0.15138| |lsst.afw.display.displayLib|25|1000|N/A|-0.0577120|0.165587| |astropy.visualization|25|1000|N/A|-0.0535599|0.171046| |Firefly|80|600|120|-0.0379127|0.132309| |ds9|80|600|120|-0.0373926|0.0524557| |lsst.afw.display.displayLib|80|600|N/A|-0.0392628|0.0535469| |astropy.visualization|80|600|N/A|-0.0437599|0.0578543|
| 2 |
2,962 |
DM-14914
|
06/27/2018 09:09:41
|
ccutils from ci-scripts appears to be unsetting error exit
|
I've seen build failures a few times over the last several months that should have died earlier than they did. This was observed again last night with a tarball build and my theory is that {{ccutils}} from {{ci-scripts}} is unsetting {{errexit}} as an accidental side effect (the tarball build would have failed either way from what appears to be a broken eups install). From https://ci.lsst.codes/blue/organizations/jenkins/release%2Ftarball/detail/tarball/2958/pipeline : {code:java} + . ./loadLSST.bash ++ export PATH=/build/python/miniconda3-4.3.21/bin:/opt/rh/devtoolset-6/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ++ PATH=/build/python/miniconda3-4.3.21/bin:/opt/rh/devtoolset-6/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ++++ dirname ./loadLSST.bash +++ cd . +++ pwd ++ LSST_HOME=/build ++ EUPS_DIR=/build/eups/2.1.4 ++ source /build/eups/2.1.4/bin/setups.sh +++ export EUPS_SHELL=sh +++ EUPS_SHELL=sh +++ export EUPS_DIR=/build/eups/2.1.4 +++ EUPS_DIR=/build/eups/2.1.4 ++++ echo /build/eups/2.1.4 ++++ sed -e 's/ /-+-/g' +++ eupslocalpath=/build/eups/2.1.4 ++++ /build/python/miniconda3-4.3.21/bin/python -E -c ' from __future__ import print_function import sys pp = [] for d in sys.argv[1].split(":"): if d and d not in pp: pp += [d] if not sys.argv[2] in pp: pp = [sys.argv[2]] + pp print(":".join(pp))' '' /build/stack/miniconda3-4.3.21-10a4fa6 +++ export EUPS_PATH=/build/stack/miniconda3-4.3.21-10a4fa6 +++ EUPS_PATH=/build/stack/miniconda3-4.3.21-10a4fa6 +++ _eups_path=/build/stack/miniconda3-4.3.21-10a4fa6 ++++ /build/eups/2.1.4/bin/eups_setup DYLD_LIBRARY_PATH= eups -r /build/eups/2.1.4 setup: Unable to take shared lock on /build/stack/miniconda3-4.3.21-10a4fa6: an exclusive lock is held by [user=jenkins-slave, pid=590] +++ eval false ++++ false +++ export 'SETUP_EUPS=eups LOCAL:/build/eups/2.1.4 -f (none) -Z (none)' +++ SETUP_EUPS='eups LOCAL:/build/eups/2.1.4 -f (none) -Z (none)' +++ export 
EUPS_PATH=/build/stack/miniconda3-4.3.21-10a4fa6 +++ EUPS_PATH=/build/stack/miniconda3-4.3.21-10a4fa6 +++ unset eupslocalpath _eups_path +++ '[' X '!=' X -a -f /build/eups/2.1.4/etc/bash_completion.d/eups ']' ++ export -f setup ./loadLSST.bash: line 11: export: setup: not a function ++ export -f unsetup ./loadLSST.bash: line 12: export: unsetup: not a function ++ export 'EUPS_PKGROOT=https://****/stack/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6|https://****/stack/src' ++ EUPS_PKGROOT='https://****/stack/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6|https://****/stack/src' + for prod in lsst_distrib + eups distrib install lsst_distrib -t d_2018_06_27 -vvv /build/scripts/run.sh: line 36: eups: command not found + export EUPS_PKGROOT=/distrib + EUPS_PKGROOT=/distrib + [[ -e /distrib ]] + rm -f '/distrib/*.list' + for prod in lsst_distrib + eups distrib create --server-dir /distrib -d tarball lsst_distrib -t d_2018_06_27 -vvv /build/scripts/run.sh: line 50: eups: command not found + eups distrib declare --server-dir /distrib -t d_2018_06_27 -vvv /build/scripts/run.sh: line 52: eups: command not found script returned exit code 127 {code}
| 1 |
2,963 |
DM-14915
|
06/27/2018 09:25:16
|
rewrite_shebang is not run in ctrl_orca
|
For some reason {{ctrl_orca}} is missing the {{bin}} folder, at least in recent releases (e.g. docker images v15, w_2018_23, ...). Usually a {{bin}} folder is created during the build process for packages with a {{bin.src}} folder, but the {{bin}} folder is not made for {{ctrl_orca}}; {{rewrite_shebang}} is not run in the build process. This is also true in {{lsstsw}} builds and the shared stack in {{/software}} on LSST machines. Manually {{scons}}-ing it works.
| 1 |
2,964 |
DM-14919
|
06/27/2018 12:36:11
|
Pointer file error in ap_verify_testdata
|
I re-downloaded {{ap_verify_testdata}} and got the following error: {noformat} Pointer file error: Unable to parse pointer at: "refcats/gaia_example.tar.gz" Checking out files: 100% (21/21), done. {noformat} -The file is treated as an invalid symlink by unix.- The file is downloaded correctly, but Git will refuse to make (or even revert) any changes to it. This appears to be a [bug in how the repository was created|https://github.com/git-lfs/git-lfs/issues/1828] in the first place. Fix {{ap_verify_testdata}} as described in the linked discussion, and re-download the other datasets to make sure they haven't been corrupted in the same way.
| 2 |
2,965 |
DM-14921
|
06/27/2018 13:26:34
|
Remove Python 2 references from pipelines.lsst.io
|
With the {{v16_0}} release, the LSST Science Pipelines don't support Python 2.7 anymore. Thus we need to remove references to Python 2.7 from the installation documentation.
| 0.5 |
2,966 |
DM-14928
|
06/28/2018 11:48:55
|
Fix error in DM-14765 implementation
|
The DM-14765 implementation had a forward reference to the code extracting the instrument name. Instrument name extraction also wasn't using getCamera(), instead incorrectly treating the object itself as its name. The instrument name should be upper-case to match the {{verify_metrics}} package definition.
| 1 |
2,967 |
DM-14932
|
06/28/2018 13:29:40
|
Add utility functions for creating SkyWcss from boresight/rotator + cameraGeom
|
Given an accurate boresight and rotator angle from a camera, we should be able to construct our best initial estimate of the WCS for a sensor by combining that information with our own camera geometry, as this will probably produce better results than relying on any WCS provided directly by the camera. In fact, for LSST, it's not clear that there even will be a sensor WCS provided directly by the camera, because LSST raw data has amps in different HDUs and no full-sensor HDU that could unproblematically hold a full-sensor WCS. To do this without using astshim code directly, we should provide utility functions in afw with something like the following signatures: {code:java} TransformPoint2ToSpherePoint makeIntermediateWorldCoordsToSky( SpherePoint const & position, // (ra, dec) that corresponds to FIELD_ANGLE origin Angle rotation // rotation about FIELD_ANGLE origin ); std::shared_ptr<SkyWcs> makeSkyWcs( TransformPoint2ToPoint2 const & pixToIwc, TransformPoint2ToSpherePoint const & iwcToSky );{code}
| 2 |
2,968 |
DM-14993
|
07/02/2018 09:29:32
|
Process DES SN field on lsst-dev in automated way; produce basic metrics
|
In the previous epic, I was able to process a DES SN field from raws/cals through CCD, coadd, and difference processing. Now we want to automate biweekly difference image processing to study changes in the pipeline. This will sometimes involve full end-to-end processing as above and sometimes just difference imaging, depending on which parts of the pipeline need to be tested. I will also add some basic metrics/evaluation of this process.
| 40 |
2,969 |
DM-14998
|
07/02/2018 12:16:05
|
Document schema naming conventions
|
Schema naming conventions changed from "."-separated with no case consistency to "_" and camelCase with the introduction of meas_base. I remember writing the new conventions down somewhere, but the first two places I looked: - afw::table::Schema class docs - afw::table overview page in Doxygen ...document the old convention. Fix those, ideally by locating the original text (maybe Confluence somewhere?) and transferring it to those locations.
| 1 |
2,970 |
DM-14999
|
07/02/2018 13:23:43
|
conda bleed package list should not be code
|
The conda "bleed" package list is inline in {{lsstsw/bin/deploy}} as "code", which it is merely "data" which should be editable without modifying "code".
| 1 |
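One way to move the list out of the deploy script is to read it from a plain-text data file; the file format and function name below are illustrative only, not what {{lsstsw}} actually uses.

```python
def load_package_list(path):
    """Read package names from a text file, one per line,
    skipping blank lines and '#' comment lines."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]
```

With this, editing the "bleed" package set becomes a data change rather than a code change.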
2,971 |
DM-15007
|
07/03/2018 10:23:43
|
Add ability to exclude packages/modules from package-toctree and module-toctree
|
Add the ability to exclude named packages and modules from the piplines.lsst.io documentation through a "skip" field in the package-toctree and module-toctree directives. The "skip" field takes a comma-delimited list of names. This field is useful for temporarily removing packages that break the documentation build.
| 1 |
2,972 |
DM-15008
|
07/03/2018 10:40:08
|
anetAstrometry.py uses self.distortionContext, which does not exist
|
James Mullaney reported the following bug on [confluence|https://community.lsst.org/t/bug-report-in-meas-extensions-astrometrynet/3013]: We're using the astrometry.net extension to perform astrometry. In v15 and v16 of the stack, I've noticed that meas_extensions_astrometryNet/python/lsst/meas/extensions/astrometryNet/anetAstrometry.py refers to self.distortionContext (see [here|https://github.com/lsst/meas_extensions_astrometryNet/blob/519ff978fa8fd669da3308833b2808ba4e3ed595/python/lsst/meas/extensions/astrometryNet/anetAstrometry.py#L227]). However, it seems that the distortionContext function was removed on the 4th of November, 2017 by [this commit|https://github.com/lsst/meas_extensions_astrometryNet/blob/519ff978fa8fd669da3308833b2808ba4e3ed595/python/lsst/meas/extensions/astrometryNet/anetAstrometry.py#L227]. I may be mistaken, but this seems to be a bug introduced by not catching all references to distortionContext when making that commit.
| 2 |
2,973 |
DM-15010
|
07/03/2018 11:03:25
|
Add a sims build triggered by the lsst_distrib weekly build
|
The sims team would like an automatically built weekly sims version that is built on the weekly build of {{lsst_distrib}}. They would like the result to be a binary distribution that would allow end users to install a full stack of sims plus DM dependencies that are all consistent with the weekly build. I.e. {code} $> eups distrib install lsst_sims -t w_2018_13 {code} This is currently possible to do by hand, but it would be useful to them to have it automatically build. They are very clear that a failing {{lsst_sims}} weekly build should not imply a failing {{lsst_distrib}} build.
| 1 |
2,974 |
DM-15011
|
07/03/2018 12:48:29
|
implement separate Visit and Chip fitting for photometry
|
While doing additional testing of DM-14510, I noticed that the astrometry model fit could be thrown off by doing a line search during the initialization {{DistortionVisit}} fit step. This reminded me that I haven't implemented the separate "Visit" and "Chip" fitting option for photometry: it may help by pushing the fit toward the global minimum on the first step (as it does in astrometry). It should be very easy to implement.
| 2 |
2,975 |
DM-15014
|
07/04/2018 03:29:59
|
Write error when creating k8s manifests
|
{{qserv_deploy}} tries to write to a root-owned folder when creating the Qserv k8s manifests for the first time.
| 1 |
2,976 |
DM-15023
|
07/05/2018 13:46:29
|
meas_modelfit is not compatible with Eigen 3.3.4
|
meas_modelfit has some issues with Eigen 3.3.4. I am making this a separate ticket from DM-14305 in hopes that these can be fixed in a way that is backwards compatible. The first issue is that this does not compile: {code} ndarray::asEigenMatrix(*ix) = component._mu + std::sqrt(_df/rng.chisq(_df)) * (component._sigmaLLT.matrixL() * _workspace); {code} but the solution to that is trivial: swap {{component._sigmaLLT.matrixL()}} and {{\_workspace}}, though [~jbosch] suggested a solution that reduces duplication with nearby code. A more serious concern is that the following code in {{TruncatedGaussian.cc}} {{TruncatedGaussian::maximize}} raises an assertion error that a matrix is empty when running {{tests/test_truncatedGaussian.py}}: {code} Eigen::FullPivLU<Matrix> solver(G.topLeftCorner(n - k, n - k)); {code} To fix that problem [~jbosch] suggests returning a vector of all zeros if n == k.
| 0.5 |
2,977 |
DM-15036
|
07/06/2018 09:45:51
|
Restore MariaDB JDBC driver to latest 2.2.5 after mysql-proxy protocol fix
|
[~salnikov] has upgraded the MySQL protocol support in mysql-proxy, so CLIENT_DEPRECATE_EOF is now being handled properly through the proxy. So no need to use the older JDBC driver anymore - it's time to 'undo' DM-14924.
| 1 |
2,978 |
DM-15043
|
07/06/2018 16:45:56
|
Broken build in meas_algorithms
|
A last-minute change for DM-9937, combined with sloppy testing, caused non-compiling C++ to get committed to {{master}}. This ticket is to fix the broken code.
| 0 |
2,979 |
DM-15044
|
07/08/2018 16:46:56
|
Seemingly Large demo change with bleeding edge pipelines build
|
In testing a bleeding edge conda install on my Mac, which includes numpy 1.14, astropy 3.0.3 and matplotlib 2.2.2, lsst_distrib builds fine but when I run lsst_dm_stack_demo I get nearly 100,000 failures, 8000 of which are failing at the 1e-06 level. I have no idea whether this is reasonable but it sounds very large. The detected-sources txt file is attached. I have not tested ci_hsc or lsst_ci. I will see if I can run lsst_ci on this system.
| 3 |
2,980 |
DM-15049
|
07/09/2018 12:47:33
|
Continue QuantumGraph implementation
|
This is a continuation of work started in DM-14334. Main goal of this ticket is to extend preflight solver with joins between skymap units and camera units so that we have more or less complete solution for building quantum graphs.
| 8 |
2,981 |
DM-15072
|
07/10/2018 09:29:36
|
Kombu error with LTD Keeper 1.11.0
|
There may be an incompatibility in an implicit dependency found through Celery that is preventing the LTD Keeper workers from booting up: {code} Traceback (most recent call last): File "/usr/local/bin/celery", line 11, in <module> sys.exit(main()) File "/usr/local/lib/python3.5/site-packages/celery/__main__.py", line 14, in main _main() File "/usr/local/lib/python3.5/site-packages/celery/bin/celery.py", line 326, in main cmd.execute_from_commandline(argv) File "/usr/local/lib/python3.5/site-packages/celery/bin/celery.py", line 488, in execute_from_commandline super(CeleryCommand, self).execute_from_commandline(argv))) File "/usr/local/lib/python3.5/site-packages/celery/bin/base.py", line 281, in execute_from_commandline return self.handle_argv(self.prog_name, argv[1:]) File "/usr/local/lib/python3.5/site-packages/celery/bin/celery.py", line 480, in handle_argv return self.execute(command, argv) File "/usr/local/lib/python3.5/site-packages/celery/bin/celery.py", line 412, in execute ).run_from_argv(self.prog_name, argv[1:], command=argv[0]) File "/usr/local/lib/python3.5/site-packages/celery/bin/worker.py", line 221, in run_from_argv return self(*args, **options) File "/usr/local/lib/python3.5/site-packages/celery/bin/base.py", line 244, in __call__ ret = self.run(*args, **kwargs) File "/usr/local/lib/python3.5/site-packages/celery/bin/worker.py", line 255, in run **kwargs) File "/usr/local/lib/python3.5/site-packages/celery/worker/worker.py", line 99, in __init__ self.setup_instance(**self.prepare_args(**kwargs)) File "/usr/local/lib/python3.5/site-packages/celery/worker/worker.py", line 122, in setup_instance self.should_use_eventloop() if use_eventloop is None File "/usr/local/lib/python3.5/site-packages/celery/worker/worker.py", line 241, in should_use_eventloop self._conninfo.transport.implements.async and File "/usr/local/lib/python3.5/site-packages/kombu/transport/base.py", line 125, in __getattr__ raise AttributeError(key) AttributeError: async {code} 
https://console.cloud.google.com/logs/viewer?project=plasma-geode-127520&authuser=1&_ga=2.223555801.-353975650.1530195679&pli=1&minLogLevel=0&expandAll=false×tamp=2018-07-09T22%3A49%3A09.738000000Z&customFacets&limitCustomFacetWidth=true&dateRangeStart=2018-07-09T21%3A49%3A09.990Z&dateRangeEnd=2018-07-09T22%3A49%3A09.990Z&interval=PT1H&resource=container%2Fcluster_name%2Flsst-docs&scrollTimestamp=2018-07-09T22%3A36%3A18.000000000Z&advancedFilter=resource.type%3D%22container%22%0Aresource.labels.pod_id%3D%22keeper-worker-deployment-5457bf656-zrpmf%22%0Aresource.labels.zone%3D%22us-central1-b%22%0Aresource.labels.project_id%3D%22plasma-geode-127520%22%0Aresource.labels.cluster_name%3D%22lsst-docs%22%0Aresource.labels.container_name%3D%22keeper-worker%22%0Aresource.labels.namespace_id%3D%22ltd-prod%22%0Aresource.labels.instance_id%3D%2244590524507142367%22%0Atimestamp%3D%222018-07-09T22%3A36%3A18.000000000Z%22%0AinsertId%3D%22towqstg1iy3niu%22 Appears with LTD Keeper 1.11.0. The tickets-DM-14122 Docker image deploys ok. But the 1.9.0 image does not. 1.9.0 was built retroactively much later than the associated ticket was merged (and tickets-DM-14122 was built). This means I think there was a change in kombu that we're picking up as a floating implicit dependency of the pinned celery dep.
| 0.5 |
2,982 |
DM-15082
|
07/10/2018 15:00:28
|
Switch to YamlStorage instead of BoostStorage in all obs packages
|
Now that YamlStorage has been implemented, we should use it.
| 2 |
2,983 |
DM-15084
|
07/10/2018 16:59:46
|
Enable extra checks/warnings in qserv compilation
|
We do not have every possible check enabled during C++ compilation, adding {{-Wextra}} to compilation flags should help us to uncover possible problematic code.
| 0.5 |
2,984 |
DM-15085
|
07/11/2018 07:26:15
|
Fix gen3-middleware ci_hsc SConscript
|
From [~salnikov] on Slack: {quote}OK, I think I know what is happening. If I do `rm DATA/gen3.sqlite3 DATA/butler.yaml; makeButlerRepo.py DATA; bin/gen3.py` then it runs successfully (or it fails with different exception). If I do `rm DATA/gen3.sqlite3 DATA/butler.yaml; scons gen3repo` then it fails with the above exception (no such table: Camera). I watched DATA directory and I see that `gen3.sqlite3` is removed between execution of `makeButlerRepo.py` and `gen3.py`. Looking at SConscript I think this happens because scons believes that `gen3.sqlite3` is produced by `gen3.py` when in reality it is produced by `makeButlerRepo.py` and updated by `gen3.py`. Scons by default removes target file before rebuilding it, so it does that before executing `gen3.py`. I'm not sure what is the best way to fix this but scons has `Precious` function to prevent removal of target before rebuilding (https://scons.org/doc/3.0.1/HTML/scons-user.html#chap-file-removal), maybe we should use that {quote}
| 1 |
2,985 |
DM-15090
|
07/11/2018 16:43:09
|
Stop using file in Python code
|
We are still using the old `file` class in some Python code, even though it is not available in Python 3. It should be replaced with `open`. Also, pep8 catches this, so I suggest making the packages pep8 compliant, enabling automatic checking, and removing Python 2 support (all of which should be trivial).
| 0.5 |
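The replacement is mechanical; for example:

```python
# Python 3 removed the builtin `file` class; use open() instead.
# (The equivalent Python 2 code would have been: f = file("demo.txt", "w").)
with open("demo.txt", "w") as f:
    f.write("hello\n")

with open("demo.txt") as f:
    contents = f.read()
```

Using `open` as a context manager also guarantees the file handle is closed, which the bare `file(...)` idiom did not.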
2,986 |
DM-15098
|
07/13/2018 09:25:27
|
Add Registry.getRegion(DataId)
|
Add {{Registry.getRegion(DataId) -> sphgeom.Region}} that returns the intersection of all regions associated with the {{DataUnit}} in the {{DataId}}.
| 8 |
2,987 |
DM-15104
|
07/13/2018 13:18:47
|
Move SourceDeblendTask out of MeasureCoaddSources
|
Currently the deblender is run as part of the {{MeasureCoaddSources}} task. In order to make it easier to swap out the current and future deblenders, {{SourceDeblendTask}} should be pulled out and placed in a new {{DeblendCoaddSourcesTask}}, along with an option to run {{MultibandDeblendTask}} instead. The current version of SCARLET also needs to be added to the stack as a 3rd party package.
| 8 |
2,988 |
DM-15105
|
07/13/2018 13:21:17
|
Fix bare except in obs_subaru and other pep8 fixes
|
obs_subaru has many bare {{except}} usages. Fix these and make the codebase pass flake8.
| 0.5 |
2,989 |
DM-15109
|
07/13/2018 14:04:28
|
Adapt pipe_analysis to RFC-498 implementation
|
Adapt the {{pipe_analysis}} scripts to be compatible with the changes coming out of RFC-498. Also make accommodations for it to be backwards compatible with catalogs run on older, pre-RFC-498, processed catalogs.
| 3 |
2,990 |
DM-15133
|
07/16/2018 16:13:15
|
Empty matches in coaddAnalysis.py at COSMOS field?
|
Using {{w_2018_28}} and {{pipe_analysis}} at commit {{ebd48cf}}, {{coaddAnalysis.py}} failed in the COSMOS tract=9813 filters HSC-G, HSC-R, HSC-Z, and HSC-Y: {code:java} Traceback (most recent call last): File "/software/lsstsw/stack3_20171023/stack/miniconda3-4.3.21-10a4fa6/Linux64/pipe_base/16.0-2-g852da13+6/python/lsst/pipe/base/cmdLineTask.py", line 392, in __call__ result = task.run(dataRef, **kwargs) File "/home/hchiang2/stack/pipe_analysis/python/lsst/pipe/analysis/coaddAnalysis.py", line 348, in run hscRun=repoInfo.hscRun, wcs=repoInfo.wcs) File "/home/hchiang2/stack/pipe_analysis/python/lsst/pipe/analysis/coaddAnalysis.py", line 450, in readSrcMatches if not hasattr(matches[0].first, "schema"): IndexError: list index out of range {code} The command I ran was {{coaddAnalysis.py /datasets/hsc/repo/ --calib /datasets/hsc/repo/CALIB --rerun RC/w_2018_28/DM-14988 --id tract=9813 filter=HSC-G -c doWriteParquetTables=True}} (and 3 other filters)
| 2 |
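A defensive check of the kind the traceback suggests might look like the sketch below; the helper name is hypothetical and the match objects are stand-ins for the real afw match records (which carry {{.first}}/{{.second}} members).

```python
def has_usable_matches(matches):
    """Return True only if the match list is non-empty and its first
    entry carries a schema, instead of indexing matches[0] blindly."""
    if not matches:  # e.g. no reference matches for this tract/filter
        return False
    return hasattr(matches[0].first, "schema")
```

Guarding on an empty list first turns the {{IndexError}} into a condition that {{readSrcMatches}} could log and skip gracefully.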
2,991 |
DM-15138
|
07/17/2018 10:43:22
|
Incorrect instructions in ap_verify readme
|
The {{ap_verify}} readme says to run {{python/lsst/ap/verify/ap_verify.py}}, even though this file has not been executable for almost a year. While the Sphinx documentation has been kept up-to-date, it appears the readme has bitrotted. Correct the example command line, and proofread the readme for any other out-of-date or missing information.
| 1 |
2,992 |
DM-15139
|
07/17/2018 11:07:31
|
Rename invert() and getInverse() to inverted()
|
Implement RFC-500 by renaming the {{invert}} method of geom {{AffineTransform}}, geom {{LinearTransform}} and afw {{GridTransform}} and the {{getInverse}} method of afw {{Transform}} and astshim {{Mapping}} to {{inverted}}. Also rename the {{simplify}} method of astshim {{Mapping}} to {{simplified}} for the same reason. Note that this is little used outside of astshim.
| 2 |
2,993 |
DM-15152
|
07/17/2018 13:39:32
|
crosstalk correction was moved above assembleCcd, which broke it
|
In cc1e91fc90, running the crosstalk task was moved above assembleCcd, which broke the crosstalk correction in ip_isr. Because HSC uses a custom ISR, this was not visible in ci_hsc, and in fact no tests were checking for this error. The tests of the crosstalk task itself were unaffected by changes in {{isrTask}}. The problem only showed up when we started processing lsstCamera data (ts8 and in particular phosim) through {{obs_lsstCam}}. I propose that we revert this change. It was made to support DECam, but I do not understand why it was necessary, so I think it's better to fix {{obs_decam}} than {{ip_isr}}; however, I would like to understand why the DECam code was written this way before taking a final decision. In discussion, Meredith pointed out that she didn't want to have to assemble CCDs to process inter-CCD crosstalk, but I don't think that this would be necessary (you'd use the {{Detector}} to iterate over the amplifier segments, applying suitable flips as described in the {{Detector}}'s amplifier objects).
| 1 |
2,994 |
DM-15162
|
07/19/2018 09:44:01
|
Improve documentation for DataIdContainer
|
Enhance the documentation for {{DataIdContainer}} to document the fields and expand the documentation for the methods. For example what does "castDataIds" actually do -- what does it mean to cast a data ID to the correct type? Also either explain why the user cannot set the dataset type in the constructor or else add that capability.
| 0.5 |
2,995 |
DM-15169
|
07/19/2018 11:26:58
|
Replace all use of "mock" with "unittest.mock"
|
We have Python code using the {{mock}} package, which is a 3rd party library. Change this to use {{unittest.mock}}, which is part of the standard library.
| 3 |
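The change is usually a one-line import swap, since {{unittest.mock}} provides the same API as the third-party package:

```python
# Before: import mock
from unittest import mock  # part of the standard library since Python 3.3

# The familiar Mock and patch interfaces carry over unchanged.
stub = mock.Mock(return_value=42)
result = stub()  # the Mock records the call and returns 42

with mock.patch("os.path.exists", return_value=True):
    import os.path
    patched = os.path.exists("/no/such/path")  # True while patched
```

Code that did `import mock` can often just switch to `from unittest import mock` with no other edits.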
2,996 |
DM-15177
|
07/20/2018 13:39:46
|
Data display bug in SUIT caused by wrong datatype in JSON
|
During the test of the newly deployed lsst-pdac/portal/suit, I stumbled on something that is obviously wrong. A position search on "NEOWISE-R Year 1 Single Exposure (L1b) Source Table" at (0.014937 -0.008658) with radius 10" returned data with w1flux and w1sigflux empty while w1mpro and w1sigmpro both have data. Upon further study, the w1flux and w1sigflux columns seem to have "char" as data type, while the metadata showed that they should be "float". We need further investigation into this. The downloaded data is attached. 7/25/2018 Further investigation revealed that those two fields had datatype "UNKNOWN" when returned from DAX API dbserv V1. Ticket DM-15206 has been created for this. 8/28/2018 An unknown datatype's value could not be passed properly and displayed. Firefly does not need to be modified.
| 1 |
2,997 |
DM-15178
|
07/20/2018 13:50:29
|
Investigate position errors in scarlet blends
|
Lorena Mezini, an undergraduate working with Erin Sheldon, noticed that when scarlet fits positions, the mean position errors are ~{{0.15}} in both x and y (see attached plot). What's stranger is that one of the objects in her simulations seems to have a mean positional error of 0 in x and y while the other shows the offset. Her datasets were generated using galsim, with very low noise and fixed bounding boxes (25 pixels) on single-band images. The sources both have very small ellipticity (~.01) with random orientations, the same shear, and identical FWHM. The plot shows an example with 100k blends, each with 2 sources separated by 11 pixels. This ticket is to investigate this behavior to understand the conditions that lead to two seemingly identical objects being deblended differently.
| 5 |
2,998 |
DM-15179
|
07/20/2018 15:03:43
|
Table: Do not send ROW_IDX and ROW_NUM columns when not requested
|
Currently, columns ROW_IDX and ROW_NUM are sent to the client for every table request. It's only used in a few cases. Change it so that these columns are only sent if requested and not by default.
| 3 |