id (int64, 0–5.38k) | issuekey (string, 4–16 chars) | created (string, 19 chars) | title (string, 5–252 chars) | description (string, 1–1.39M chars) | storypoint (float64, 0–100) |
---|---|---|---|---|---|
2,799 |
DM-14162
|
04/20/2018 09:45:47
|
Iterative inverse of radial transform not acceptably accurate
|
A radial transform made with {{afw.geom.makeRadialTransform}} has a very inaccurate iterative inverse in some cases. Here is an example: {code} import numpy as np import lsst.afw.geom as afwGeom plateScaleRad = 9.69627362219072e-05 radialCoeff = np.array([0.0, 1.0, 0.0, 0.925]) / plateScaleRad fieldAngleToFocalPlane = afwGeom.makeRadialTransform(radialCoeff) print(fieldAngleToFocalPlane.applyForward(fieldAngleToFocalPlane.applyInverse(afwGeom.Point2D(0, 850)))) # prints Point2D(0, 850.2) {code} The error is most likely in AST's {{PolyMap}}. Note: when inverse coefficients are not provided (as in this case) {{afw.geom.makeRadialTransform}} constructs a {{PolyMap}} using options {{"IterInverse=1, TolInverse=1e-8, NIterInverse=20"}}. When this is fixed, {{cbp}} test {{testSetFocalPlanePos}} in {{test_coordinateConverter.py}} can be simplified.
| 1 |
2,800 |
DM-14168
|
04/20/2018 13:02:20
|
Compare running times of cModel fits to individual objects with different parameters
|
I examined the distribution of cModel running times for HSC-R 9813 3,4. Two things are apparent - with default settings, some faint sources (mag_psf > 26) take a long time to fit (>1 min); also, changing parameters like the number of initial components or the fit algorithm can exacerbate this problem. The attached plots can be generated by the notebook created in DM-14118 (lsst-dev:/home/dtaranu/src/mine/taranu_lsst/cModelConfigs.ipynb). I have also put it up on [https://github.com/lsst-dm/modelling_research] (see test_lsst_cmodel.py and jupyternotebooks/lsst_cmodel_configs.py/.ipynb), which supersedes the local copy on lsst-dev.
| 2 |
2,801 |
DM-14170
|
04/20/2018 15:59:24
|
Add descriptions for dcr datasets
|
There are a number of dcr-related datasets in obs_base's {{policy/datasets.yaml}} and {{policy/exposures.yaml}} that need descriptions. See the descriptions that were added in DM-13756 for examples: a sentence or two describing what the dataset is and what might read/write it.
| 0.5 |
2,802 |
DM-14171
|
04/20/2018 16:00:36
|
Add descriptions for fgcm and transmission datasets
|
There are a number of fgcm and transmission curve-related datasets in obs_base's {{policy/datasets.yaml}} and {{policy/exposures.yaml}} that need descriptions. See the descriptions that were added in DM-13756 for examples: a sentence or two describing what the dataset is and what might read/write it.
| 0.5 |
2,803 |
DM-14175
|
04/22/2018 09:32:20
|
lsst_ci failing
|
The weekly, nightly, and {{lsst_distrib}} clean build are failing due to the same error message from {{lsst_ci}}. {code:java} lsst_ci: 15.0+15 .......................................................................................................................................................................::::: [2018-04-22T06:02:20.924442Z] ValueError: Instrument name and input dataset URL must be set in config file ::::: [2018-04-22T06:02:20.924450Z] ===================== 1 failed, 2 passed in 327.27 seconds ===================== ::::: [2018-04-22T06:02:20.955850Z] Global pytest run: failed ::::: [2018-04-22T06:02:20.976613Z] Failed test output: ::::: [2018-04-22T06:02:20.979598Z] Global pytest output is in /home/jenkins-slave/workspace/science-pipelines/lsst_distrib/centos-6.py3/lsstsw/build/lsst_ci/tests/.tests/pytest-lsst_ci.xml.failed ::::: [2018-04-22T06:02:20.979622Z] The following tests failed: ::::: [2018-04-22T06:02:20.982067Z] /home/jenkins-slave/workspace/science-pipelines/lsst_distrib/centos-6.py3/lsstsw/build/lsst_ci/tests/.tests/pytest-lsst_ci.xml.failed ::::: [2018-04-22T06:02:20.982180Z] 1 tests failed ::::: [2018-04-22T06:02:20.982695Z] scons: *** [checkTestStatus] Error 1 ::::: [2018-04-22T06:02:20.983341Z] scons: building terminated because of errors. ERROR (349 sec). {code}
| 2 |
2,804 |
DM-14197
|
04/24/2018 15:11:50
|
Make obs_test data ingestible
|
The data in {{obs_test/data/input}} are provided in the form of a Butler v1 repository, which is sufficient for most testing purposes. However, the files cannot be used to test ingestion code: the files have minimal (and generic) FITS headers, and cannot be ingested using default configurations for {{IngestTask}} and {{IngestCalibsTask}}. Known issues: * The flats and biases do not have an {{OBSTYPE}} header keyword, so they cannot be ingested without {{\-\-calibType}} (or with it, given DM-13975). * The default config for {{IngestCalibsTask}} does not list "defect" as one of the data types to register * Calibration files (or just defects?) cannot use the default columns in {{register.unique}}
| 2 |
2,805 |
DM-14208
|
04/25/2018 10:30:38
|
Update AuxDevice unit test
|
The addition of the Fault system added some messages that must be verified in the unit test.
| 2 |
2,806 |
DM-14210
|
04/25/2018 10:42:20
|
Fine tune AuxDev ACK consumption and make more efficient.
|
Implement a version of the progressive ACK timer that checks strictly for the one ATS forwarder response; these must be blocking ACKs, so proper ACKs should be identified quickly if received.
| 5 |
2,807 |
DM-14211
|
04/25/2018 10:45:14
|
ATS report telemetry design and implementation
|
Build a means to publish reports as telemetry. Reports may cover work done, system warnings, etc.
| 8 |
2,808 |
DM-14212
|
04/25/2018 10:48:20
|
recompile and re-link DAQ code with new library version.
|
The utility apps (such as catalog listing, image triggering, etc.) must be rebuilt as well as the DAQ Forwarder code.
| 1 |
2,809 |
DM-14213
|
04/25/2018 10:49:51
|
Discuss ATS system release plan with DM release engineer
|
SSIA.
| 1 |
2,810 |
DM-14217
|
04/25/2018 15:48:02
|
SHOW PROCESSLIST is broken in Qserv czar
|
The master branch of package qserv seems to have a bug in an implementation of the 'SHOW PROCESSLIST' operation in Qserv czar. This statement always return an empty result set, while the underlying database table has multiple entries. Also there are messages in the proxy's log file *mysql-proxy-lua.log*: {code} [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG sql.SqlConnection (core/modules/sql/SqlConnection.cc:142) - connectToDb trying to connect [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:179) - isSubmit: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:130) - isSelect: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:192) - isSelectResult: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:116) - isDropTable: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:103) - isDropDb: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:146) - isFlushChunksCache: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:159) - isShowProcessList: SHOW PROCESSLIST [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryType (core/modules/ccontrol/UserQueryType.cc:164) - isShowProcessList: full: n [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG ccontrol.UserQueryFactory (core/modules/ccontrol/UserQueryFactory.cc:237) - make UserQueryProcessList: full=n [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG czar.Czar (core/modules/czar/Czar.cc:179) - QI=?: starting finalizer thread for query [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG czar.Czar (core/modules/czar/Czar.cc:332) - QI=?: Remembering query: (10.158.37.125:24040, 85) (new map size: 2) [2018-04-25T23:19:22.312+0200] [LWP:308] DEBUG czar.Czar (core/modules/czar/Czar.cc:216) - QI=?: returning result to proxy: resultTable=qservResult.qserv_result_processlist_10898384329 messageTable=qservResult.message_10898384329 orderBy=ORDER BY Submitted [2018-04-25T23:19:22.312+0200] [LWP:1594] DEBUG czar.Czar (core/modules/czar/Czar.cc:166) - QI=?: submitting new query [2018-04-25T23:19:22.312+0200] [LWP:308] INFO mysql-proxy (qserv/mysqlProxy.lua:343) - Czar response: [result: qservResult.qserv_result_processlist_10898384329, message: qservResult.message_10898384329, order_by: "ORDER BY Submitted"] [2018-04-25T23:19:22.312+0200] [LWP:1594] DEBUG qmeta.QMetaSelect (core/modules/qmeta/QMetaSelect.cc:62) - Executing query: SELECT Id, User, Host, db, Command, Time, State, SUBSTRING(Info FROM 1 FOR 100) Info, CzarId, Submitted, Completed, ResultLocation FROM ShowProcessList WHERE CzarId = 2 AND (Completed IS NULL OR Completed > NOW() - INTERVAL 3 DAY) [2018-04-25T23:19:22.312+0200] [LWP:308] INFO mysql-proxy (qserv/mysqlProxy.lua:508) - Sendresult 0 [2018-04-25T23:19:22.312+0200] [LWP:1594] DEBUG ccontrol.UserQueryProcessList (core/modules/ccontrol/UserQueryProcessList.cc:176) - creating result table: CREATE TABLE qserv_result_processlist_10898384329(`Id` BIGINT(20),`User` CHAR(63),`Host` BINARY(0),`db` TEXT,`Command` CHAR(5),`Time` BINARY(0),`State` CHAR(9),`Info` VARCHAR(100),`CzarId` INT(11),`Submitted` TIMESTAMP NULL,`Completed` TIMESTAMP 
NULL,`ResultLocation` TEXT) [2018-04-25T23:19:22.312+0200] [LWP:1594] ERROR ccontrol.UserQueryProcessList (core/modules/ccontrol/UserQueryProcessList.cc:220) - error updating result table: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '02:59:27,2018-04-25 02:59:54,'table:result_1'),(2,'anonymous',NULL,'sdss_stripe8' at line 1 Unable to execute query: INSERT INTO qserv_result_processlist_10898384329(`Id`,`User`,`Host`,`db`,`Command`,`Time`,`State`,`Info`,`CzarId`,`Submitted`,`Completed`,`ResultLocation`) VALUES (1,'anonymous',NULL,'sdss_stripe82_01','SYNC',NULL,'FAILED','SELECT COUNT(*) FROM sdss_stripe82_01.RunDeepSource',2,2018-04-25 02:59:27,2018-04-25 02:59:54,'table:result_1'),(2,'anonymous',NULL,'sdss_stripe82_01','SYNC',NULL,'COMPLETED','SELECT COUNT(*) FROM sdss_stripe82_01.RunDeepSource',2,2018-04-25 05:26:51,2018-04-25 05:26:54,'table:result_2'),(3,'anonymous',NULL,'wise_00','SYNC',NULL,'COMPLETED','SELECT COUNT(*) FROM wise_00.allwise_p3as_psd',2,2018-04-25 05:27:40,2018-04-25 05:28:46,'table:result_3'),(4,'anonymous',NULL,'wise_00','SYNC',NULL,'COMPLETED','SELECT COUNT(*) FROM wise_00.allwise_p3as_psd',2,2018-04-25 05:28:58,2018-04-25 05:30:03,'table:result_4'),(5,'anonymous',NULL,'sdss_stripe82_01','SYNC',NULL,'COMPLETED','SELECT COUNT(*) FROM sdss_stripe82_01.RunDeepForcedSource',2,2018-04-25 05:30:20,2018-04-25 05:30:22,'table:result_5'),(6,'anonymous',NULL,'wise_00','SYNC',NULL,'ABORTED','SELECT COUNT(*) FROM wi... {code}
| 1 |
2,811 |
DM-14223
|
04/26/2018 03:36:54
|
Set terraform in travis integration tests
|
Use terraform in the CI build for creating the test cluster instead of installing it locally.
| 2 |
2,812 |
DM-14233
|
04/27/2018 11:25:41
|
Remove secondMomentStarSelector
|
Remove secondMomentStarSelector, per RFC-475, and clean up any tests, etc. that refer to it.
| 0.5 |
2,813 |
DM-14236
|
04/27/2018 14:35:22
|
Incorporate l1dbproto into AssociationTask
|
Take the existing AssociationTask functionality and get it running using l1dbproto (rather than SQLite) as a back-end. (This can run on a standalone system; no need to deploy at LDF.)
| 8 |
2,814 |
DM-14237
|
04/27/2018 18:12:11
|
Change DecamIngestTask --filetype default from instcal to raw
|
As discussed in RFC-478, {{DecamIngestTask}} requires a {{\-\-filetype}} argument to ingest raw data, something its supertask {{IngestTask}} does by default. Change the default of {{\-\-filetype}} to {{raw}} so that {{DecamIngestTask}} behaves like {{IngestTask}} when given the same arguments. This work includes updating {{obs_decam}} documentation and updating any calls to {{DecamIngestTask}} that assume the old default.
| 1 |
2,815 |
DM-14244
|
04/30/2018 09:18:36
|
eups.lsst.codes backups not pruning promptly?
|
Daily backups are supposed to expire out of the S3 bucket after 8 days. This does not appear to be happening promptly, either because S3 is not applying the rule promptly or because it is a versioned bucket. This was noticed because S3 usage has grown a bit faster than expected. {code:java} $ aws s3 ls s3://eups.lsst.codes-backups/daily/2018/04/ PRE 22/ PRE 23/ PRE 24/ PRE 25/ PRE 26/ PRE 27/ PRE 28/ PRE 29/ PRE 30/ {code} The metadata on the object itself seems to suggest that it will expire in ~9 hours: {code:java} aws s3api head-object --bucket eups.lsst.codes-backups --key daily/2018/04/22/2018-04-22T11:52:06Z/stack/src/tags/w_latest.list { "AcceptRanges": "bytes", "Expiration": "expiry-date=\"Tue, 01 May 2018 00:00:00 GMT\", rule-id=\"daily\"", "LastModified": "Sun, 22 Apr 2018 13:55:01 GMT", "ContentLength": 6857, "ETag": "\"09b6761b4c732b4396521e78bc0dad4c\"", "VersionId": "B19u2Pq35xxGQRX06FUfY1SVEFEEKRvh", "ContentType": "binary/octet-stream", "Metadata": {} } {code}
| 1 |
2,816 |
DM-14255
|
05/01/2018 10:39:10
|
Add ConvexPolygon.intersects and related methods
|
Add {{ConvexPolygon.intersects}} and the related methods {{contains}}, {{isDisjoint}} and {{isWithin}}. The region overloads can probably be implemented using {{ConvexPolygon.relates}} and the point overloads using {{ConvexPolygon.contains}}.
| 0.5 |
2,817 |
DM-14275
|
05/01/2018 17:09:02
|
The distortion in test_wcsUtils.py testDistortion is unreasonable
|
The distortion model used in test_wcsUtils.py's testDistortion method is unreasonable (the distortion is far too large to be realistic). I noticed this when that test failed while testing a fix to starlink_ast on DM-14162.
| 1 |
2,818 |
DM-14279
|
05/02/2018 09:01:29
|
upgrade blueocean to 1.5.0 (final) / broken links to triggered builds
|
Blueocean {{1.5.0-beta-2}}, which was deployed on DM-13258, does add links to triggered builds. However, these links are not properly URL-encoded and are broken when linking to the build of any job that is nested in a cloudbees-folder. This was fixed in the final release of {{1.5.0}}.
| 1 |
2,819 |
DM-14280
|
05/02/2018 09:46:13
|
nightly-release d_2018_05_02 failed
|
{{d_2018_05_02}} failed after 3 tries to build tarballs for {{centos-6.devtoolset-6.miniconda3-4.3.21-10a4fa6}}. The other two tarball configs were successful. It looks like this may this may be some sort of cached state / pip 10.x upgrade problem but it also appears that the system python is incorrectly being used in the virtualenv. {code:java} [miniconda3-4.3.21-10a4fa6] Running shell script + pip install virtualenv Requirement already satisfied (use --upgrade to upgrade): virtualenv in /usr/lib/python2.7/site-packages You are using pip version 8.1.2, however version 10.0.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. + virtualenv venv New python executable in venv/bin/python Installing Setuptools.............................................................................................done. Installing Pip................................................................................................................................................................................................................................................................................................................................done. + . venv/bin/activate ++ deactivate nondestructive ++ unset pydoc ++ '[' -n '' ']' ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin ++ PATH=/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1= ++ '[' x '!=' x ']' +++ basename /home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv ++ '[' venv = __ ']' +++ basename /home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv ++ PS1='(venv)' ++ export PS1 ++ alias 'pydoc=python -m pydoc' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + pip install --upgrade pip Traceback (most recent call last): File "/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/bin/pip", line 9, in <module> load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File "/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point return ep.load() File "/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/lib/python2.7/site-packages/pkg_resources.py", line 2265, in load raise ImportError("%r has no %r attribute" % (entry,attr)) ImportError: <module 'pip' from '/home/jenkins-slave/workspace/release/tarball/redhat/el6/devtoolset-6/miniconda3-4.3.21-10a4fa6/venv/lib/python2.7/site-packages/pip/__init__.pyc'> has no 'main' attribute script returned exit code 1 {code}
| 0.5 |
2,820 |
DM-14290
|
05/02/2018 16:36:59
|
Do not raise generic Exceptions
|
We have several bits of code that raise either {{Exception}} or {{lsst.pex.exceptions.Exception}}/{{pexExcept.Exception}}. We should replace all of those with an appropriately specific exception. The generic Exception should almost never be raised (or caught, but that's a different issue).
| 2 |
2,821 |
DM-14291
|
05/02/2018 17:21:42
|
PolyMap.polyTran does not clear IterInverse
|
{{PolyMap.polyTran}} replaces the forward or reverse coefficients with a fit. But if it is used to fit the inverse (as is typical) and the original {{PolyMap}} has an iterative inverse, the returned mapping will use that iterative inverse and ignore the fit coefficients. I have asked David Berry whether {{astPolyTran}} is behaving as intended by copying the value of {{IterInverse}} instead of clearing it. I am guessing it is intentional, in which case I can see three solutions:
- Have astshim clear {{IterInverse}} in the returned mapping when appropriate.
- Document the problem and allow {{PolyMap}} to be modified by setting {{IterInverse}} and, presumably, the two related properties, thus breaking the rule that mappings are immutable (other than Id and Ident). Allowing these properties to be set might be useful for tweaking behavior.
- Document the problem and be done with it. I feel this is too surprising to consider.
Of these I feel the first, clearing {{IterInverse}} when appropriate, is most appropriate and least surprising. However, it will require a bit of care, as the original mapping may be inverted and polyTran can fit either direction. I believe the logic is as follows: if the mapping being fit has IterInverse set, then unset IterInverse in the returned mapping when either (a) the mapping being fit is not inverted and polyTran is fitting the inverse, or (b) the mapping being fit is inverted and polyTran is fitting the forward direction. This simplifies to: if the PolyMap's IterInverse is set and polyTran's forward direction equals {{PolyMap.isInverted()}}, unset IterInverse in the returned mapping. Independently, it is worth considering allowing the iterative inverse parameters to be modified, but I'd like a clearer need before doing so.
| 0.5 |
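For the DM-14291 description above, a minimal Python sketch of the simplified rule; the helper name and attribute access pattern are assumptions for illustration, not the actual astshim API:
{code:python}
def should_clear_iter_inverse(poly_map, fit_forward):
    """Return True if the mapping returned by polyTran should have
    IterInverse cleared, per the simplified rule in the ticket.

    poly_map    : hypothetical object with boolean ``iterInverse`` and
                  ``isInverted`` attributes describing the mapping being fit
    fit_forward : True if polyTran is fitting the forward direction
    """
    # IterInverse only matters if the original mapping has one; it should be
    # dropped exactly when the fit replaces the direction it would serve.
    return poly_map.iterInverse and (fit_forward == poly_map.isInverted)
{code}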
2,822 |
DM-14292
|
05/02/2018 19:42:50
|
Deploy replication service at PDAC
|
Major milestones of this effort:
# Upgrade the Qserv installation in PDAC with the latest version of the Docker containers based on MariaDB 10.2.14. The new containers are also needed to support the extended management protocol required for cooperation between Qserv workers and the Replication system's Controllers when changes to replica disposition are made.
# Docker-based install and preliminary tests of the Replication system's tools on the cluster. At this stage a proper configuration of the Replication system will be devised and tested.
# Scalability tests of the main replication operations, including operations with persistent state (Replication _jobs_ and _requests_), making improvements to the implementation of the system as needed.
# Implement the _Health Monitoring Algorithm_ for workers of both kinds (Qserv and the Replication system) and integrate it into the above-mentioned _fixed logic Controller_. The initial version of the algorithm will send _probe_ requests to both kinds of workers and measure their responses (which must arrive within a reasonable period of time). This may also require some adjustments to the Replication system's Messaging Network to respect the priorities of these requests: the current implementation has a plain queue, and the new one should use a priority queue.
# Finalize and test the fixed-logic replication Controller.
# When confident in the functionality, performance, and robustness of the Replication system, integrate it with the existing Kubernetes infrastructure.
| 40 |
2,823 |
DM-14299
|
05/03/2018 10:29:17
|
Remove Python2-isms from developer guide
|
Now that Python 2 is not supported, we can remove all the items in the Python style guide talking about Python 2 and {{__future__}}/{{futurize}}.
| 1 |
2,824 |
DM-14302
|
05/03/2018 11:03:28
|
verify fails on master, possibly with unexpected Quantity repr
|
In building {{verify}} package on my Mac with numpy 1.14.2 and astropy3 I get the following failure: {code} ============================= test session starts ============================== platform darwin -- Python 3.6.3, pytest-3.2.0, py-1.4.34, pluggy-0.4.0 rootdir: /Users/timj/work/lsstsw3/build/verify, inifile: setup.cfg plugins: session2file-0.1.9, forked-0.2, xdist-1.20.1, flake8-0.9.1, remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2 collected 539 items run-last-failure: rerun previous 2 failures tests/test_threshold_specification.py FF generated xml file: /Users/timj/work/lsstsw3/build/verify/tests/.tests/pytest-verify.xml =================================== FAILURES =================================== ___________________ ThresholdSpecificationTestCase.test_spec ___________________ self = <test_threshold_specification.ThresholdSpecificationTestCase testMethod=test_spec> def test_spec(self): """Test creating and accessing a specification from a quantity.""" s = ThresholdSpecification('design', 5 * u.mag, '<') self.assertEqual(s.name, Name(spec='design')) self.assertEqual(s.type, 'threshold') self.assertEqual(s.threshold.value, 5.) self.assertEqual(s.threshold.unit, u.mag) self.assertEqual(s.operator_str, '<') self.assertEqual( repr(s), > "ThresholdSpecification(" "Name(spec='design'), <Quantity 5.0 mag>, '<')") E AssertionError: "ThresholdSpecification(Name(spec='design'), <Quantity 5. mag>, '<')" != "ThresholdSpecification(Name(spec='design'), <Quantity 5.0 mag>, '<')" E - ThresholdSpecification(Name(spec='design'), <Quantity 5. mag>, '<') E + ThresholdSpecification(Name(spec='design'), <Quantity 5.0 mag>, '<') E ? + tests/test_threshold_specification.py:118: AssertionError ___________________ ThresholdSpecificationTestCase.test_spec ___________________ self = <test_threshold_specification.ThresholdSpecificationTestCase testMethod=test_spec> def test_spec(self): """Test creating and accessing a specification from a quantity.""" s = ThresholdSpecification('design', 5 * u.mag, '<') self.assertEqual(s.name, Name(spec='design')) self.assertEqual(s.type, 'threshold') self.assertEqual(s.threshold.value, 5.) self.assertEqual(s.threshold.unit, u.mag) self.assertEqual(s.operator_str, '<') self.assertEqual( repr(s), > "ThresholdSpecification(" "Name(spec='design'), <Quantity 5.0 mag>, '<')") E AssertionError: "ThresholdSpecification(Name(spec='design'), <Quantity 5. mag>, '<')" != "ThresholdSpecification(Name(spec='design'), <Quantity 5.0 mag>, '<')" E - ThresholdSpecification(Name(spec='design'), <Quantity 5. mag>, '<') E + ThresholdSpecification(Name(spec='design'), <Quantity 5.0 mag>, '<') E ? + tests/test_threshold_specification.py:118: AssertionError ============================= 537 tests deselected ============================= =================== 2 failed, 537 deselected in 3.40 seconds =================== Global pytest run: failed scons: Nothing to be done for `examples'. scons: Nothing to be done for `doc'. Failed test output: Global pytest output is in /Users/timj/work/lsstsw3/build/verify/tests/.tests/pytest-verify.xml.failed The following tests failed: /Users/timj/work/lsstsw3/build/verify/tests/.tests/pytest-verify.xml.failed 1 tests failed scons: *** [checkTestStatus] Error 1 scons: building terminated because of errors. {code}
| 0.5 |
2,825 |
DM-14305
|
05/03/2018 12:05:20
|
Upgrade Eigen to 3.3.4
|
Implementation of RFC-479. For now, this is exploratory work, to see just how bad the changes to ndarray are going to be.
| 2 |
2,826 |
DM-14307
|
05/03/2018 12:26:36
|
Review cModel logging setup
|
I read the developer guide's logging docs and reviewed and tested cModel's logging. In short, the optimizer has detailed debug statements, but there is very little output from cModel itself describing the control flow. For example, one galaxy was running the double shapelet PSF fitting tasks but skipped the cModel routine, as modelfit_CModel_flag_region_maxBadPixelFraction was set. I also spent some time (re)discovering that some ipython output gets sent to jupyter's stdout for reasons I don't understand (possibly this bug: https://github.com/gotcha/ipdb/issues/52).
| 2 |
2,827 |
DM-14311
|
05/03/2018 16:08:41
|
Add subtractAlgorithmRegistry to __all__ in imagePsfMatch.py
|
When DM-14134 added {{\_\_all\_\_}} to {{imagePsfMatch.py}}, {{subtractAlgorithmRegistry}} was not included. This breaks {{imageDifference.py}} in {{pipe_tasks}} which expects to be able to import it.
| 0.5 |
2,828 |
DM-14313
|
05/03/2018 17:11:25
|
Add pyarrow to the jellybean install.
|
Some of the QA tools now need pyarrow. Please install pyarrow with the other third party modules.
| 0.5 |
2,829 |
DM-14314
|
05/03/2018 17:58:42
|
Metaserv should return the metadata for WISE Tables
|
Per [~tatianag], dax_metaserv should be populated for everything we are serving through PDAC. http://lsst-qserv-dax01:5000/meta/v1/db/ should have one logical database entry for WISE with multiple schemas corresponding to each of the PDAC databases described here: https://confluence.lsstcorp.org/display/DM/PDAC+v2+data+list (similarly to how we have sdss_stripe82_01 in the schemas section returned by lsst-qserv-dax01:5000/meta/v1/db/W13_sdss_v2/). We should only populate metaserv with the data readily available from dbschema ("name", "datatype", "nullable"). All other fields like "ucd", etc. can be empty. This would make it possible for SUIT to refer to WISE tables using a logical database name rather than an internal name like `"//lsst-qserv-master01:4040".wise_00.allwise_p3as_mep` and would make sure that WISE data can be discovered through TAP-like services. 5/10/2018 tgoldina: Since there is no optimized way to get table column metadata via dbserv, this issue is a blocker for moving PDAC to dbserv v1 (albuquery). Please consider it when deciding on the priority of this ticket.
| 20 |
2,830 |
DM-14320
|
05/04/2018 10:25:21
|
dbserv (or albuquery) should include description column in the metadata result set
|
Per [~tatianag], The new version of dbserv, integrated with metaserv returns metadata for all columns in the result set. Currently, the "columns" metadata fields are "name", "datatype", "ucd", "unit", and "tableName". Ideally, column metadata should contain all column metadata fields available via metaserv, including "nullable" and "description". (See http://dm.lsst.org/dax_metaserv/api.html#get--meta-v1-db-(string-db_id)-tables-(table_id)- for column metadata fields stored in metaserv.) For WISE datasets, the 'description', 'unit', etc. can be empty (returned as empty strings) in metaserv. Expected columns, empty or not, should be propagated to the UI.
| 1 |
2,831 |
DM-14325
|
05/04/2018 15:06:34
|
deepDiff datasets not supported by HSC
|
Trying to run {{imageDifference.py}} on an HSC dataset crashes with an error saying that the {{deepDiff_diaSrc}} dataset type does not have a template. Manual inspection of {{obs_subaru/policy/HscMapper.yaml}} confirms that there are no {{diaSrc}} or {{deepDiff}} datasets mentioned. Please add the appropriate datasets so that we can do image differencing on HSC data.
| 1 |
2,832 |
DM-14333
|
05/04/2018 18:20:39
|
Support Oracle dialect in AP prototype script
|
While Oracle RAC at NCSA is getting ready for testing, I could spend some time trying to run my prototype against Oracle in some other non-production setup. ap_proto uses sqlalchemy, but it also has dialect-specific SQL handling for optimization purposes. I need to extend that backend-specific code to support Oracle as well. I can think of a few potential things that need a different implementation for Oracle:
* Multi-row INSERT: Oracle does not support the syntax supported by other backends ({{INSERT INTO TABLE (columns) VALUES (data), (data), (data), ...}}), so I'll have to find some other mechanism for bulk row insert.
* "UPSERT" functionality should also be implemented differently in Oracle.
* Case-sensitivity issues (probably minor).
* The usual DATETIME handling is not very portable.
* Anything else...
| 5 |
2,833 |
DM-14334
|
05/04/2018 18:26:03
|
Start implementing QuantumGraph building
|
Logic is described by Jim in [https://dmtn-056.lsst.io/operations.html#supertask-pre-flight-and-execution.] There are many complications of course, e.g. what is Registry responsibility vs. Pre-flight responsibility. First implementation will likely be sub-optimal and ugly.
| 8 |
2,834 |
DM-14336
|
05/07/2018 01:12:38
|
Unify k8s proxy manifest for worker and master pods
|
Unify the k8s definition of the proxy container for worker and master pods.
| 5 |
2,835 |
DM-14345
|
05/07/2018 13:26:27
|
Add LSP requirements to model and submit to CCB
|
In DM-14335 the final list of requirements was submitted. They need to be transferred to the model and converted to a docgen for review by DM CCB.
| 2 |
2,836 |
DM-14347
|
05/07/2018 14:43:05
|
Extend antlr4 integration test query parity
|
Start to go through the FIXME integration test queries and determine if they are FIXME because they fail to parse, or if they fail to run. Take notes on which is which. Make some effort to figure out why they don't run (with the acknowledgment that it may take 20 story points to chase down even 1 non-running query for a qserv novice). The output of this will be compared with [query pages in TRAC|https://dev.lsstcorp.org/trac/wiki/db/queries] (being ported to Confluence by [~fritzm]), and stories to fix these queries will be created as appropriate.
| 20 |
2,837 |
DM-14356
|
05/08/2018 09:39:11
|
Implement putting of matplotlib figures
|
Make matplotlib figures {{butler.put()}}able. [~jbosch] says on Slack: > Basically, that involves grepping {{daf_persistence}} for {{FitsCatalogStorage}}, copying what you see and calling it {{MatplotlibStorage}}, and then adjusting it appropriately (i.e. call {{savefig}} instead of {{writeFits}}, raise an exception when trying to read). [~price] notes that there are some gotchas involving the fact that the backend has to be set immediately after import, and this could prove tricky, especially if people want to combine this with pop-up style debug plots. Some relevant info on this might be found in DM-14159.
| 1 |
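A minimal sketch of the write path for the DM-14356 row above, illustrating the backend-ordering gotcha; this is not the actual {{daf_persistence}} storage interface, just the {{savefig}} idea under assumed names:
{code:python}
import matplotlib
# The backend must be chosen before pyplot is imported; a non-interactive
# backend such as Agg is the safe choice for a headless write path, and
# forcing it here is exactly what can clash with pop-up debug plots.
matplotlib.use("Agg")
import matplotlib.pyplot as plt  # noqa: E402  (ordering is the point)


def write_figure(figure, path):
    """Persist a matplotlib figure to ``path`` (the savefig analogue of writeFits)."""
    figure.savefig(path)


def read_figure(path):
    """Reading a saved figure back is not supported, mirroring the description."""
    raise NotImplementedError("MatplotlibStorage-style datasets are write-only")
{code}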
2,838 |
DM-14359
|
05/08/2018 13:08:18
|
Fix data ID handling in ap_*
|
During investigation of DM-12672, I discovered several bugs in how {{ap_pipe}} and {{ap_verify}} handled data IDs:
* Most command-line tasks implicitly assume their {{run}} or {{runDataRef}} methods take a fully expanded data reference. This is provided by {{pipe.base.ArgumentParser}}, but {{ap_pipe}}'s task runner inadvertently bypassed the data ID expansion. In practice, this meant {{ImageDifferenceTask}} didn't have all the information it expected.
* The same bypass would have prevented {{ap_pipe}} from processing multiple datasets specified as {{visit=#}}, though {{visit=# ccd=1..62}} would have still worked.
* {{ap_verify}} always provided partial data references, because it assumed the run method (or, more precisely, its butler calls) would expand any unambiguous reference.
* The tests in {{test_ingestion.py}} were based on a naive view of how data IDs work, and most of the outstanding issues with the tests can now be fixed.
| 2 |
2,839 |
DM-14360
|
05/08/2018 14:20:06
|
Pin pybtex/sphinxcontrib-bibtex dependencies in documenteer 0.2.x & dev guide update
|
The Dev Guide suggests we use the following: {code} .. bibliography:: local.bib lsstbib/books.bib lsstbib/lsst.bib lsstbib/lsst-dm.bib lsstbib/refs.bib lsstbib/refs_ads.bib :encoding: latex+latin :style: lsst_aa {code} However, when I tried this with a recent sphinxcontrib-bibtex, I get: {code} /Users/jds/Projects/LSST/docs/dmtn/031/index.rst:508: ERROR: Error in "bibliography" directive: invalid option value: (option: "encoding"; value: 'latex+latin') unknown encoding: "latex+latin". .. bibliography:: lsst-texmf/texmf/bibtex/bib/refs_ads.bib :style: lsst_aa :encoding: latex+latin {code} I think this is due to changes upstream (I didn't track it down fully, but I note that both sphinxcontrib-bibtex and pybtex have been making changes to the way they handle encodings in recent releases). Simply removing the {{:encoding:}} line works fine for me.
| 0.5 |
2,840 |
DM-14363
|
05/08/2018 14:50:31
|
Make afw::cameraGeom::Detector table-persistable
|
In Gen3, we're planning to just persist Detectors with Exposures to make each Exposure more self-contained and avoid complex modify-on-load code (which would have had to get more complex than what we have now to handle camera versioning). This means we need a way to save a Detector inside an Exposure, and at least at present, that means making it inherit from {{afw::table::io::Persistable}}.
| 8 |
2,841 |
DM-14366
|
05/08/2018 18:22:55
|
Make pipe_base and pipe_tasks pep8 compliant
|
Fix pep8 warnings and errors in pipe_base and pipe_tasks and enable automatic flake8 checking
| 1 |
2,842 |
DM-14367
|
05/09/2018 07:16:07
|
Remove cfitsio headers and lib from git repo and system-install these items.
|
SSIA. This is for the Tucson turnkey system, but it will be the same for the overall system.
| 1 |
2,843 |
DM-14369
|
05/09/2018 07:26:48
|
nightly-release d_2018_05_09 failed
|
The nightly release failed due to fallout from DM-14138 merged yesterday. This is a simple id10t error in that I did not rebuild the {{lsstsqre/codekit}} image after making a new release of {{sqre-codekit}} to pypi, but updated the docker tag string in jenkins. https://ci.lsst.codes/blue/organizations/jenkins/release%2Fnightly-release/detail/nightly-release/285/pipeline {code:java} ++ id -un ++ id -u ++ id -gn ++ id -g + docker build -t lsstsqre/codekit:5.0.4-local --build-arg USER=jenkins-slave --build-arg UID=996 --build-arg GROUP=jenkins-slave --build-arg GID=991 --build-arg HOME=/home/jenkins-slave . Sending build context to Docker daemon 2.048kB Step 1/12 : FROM lsstsqre/codekit:5.0.4 manifest for lsstsqre/codekit:5.0.4 not found script returned exit code 1 {code}
| 1 |
2,844 |
DM-14370
|
05/09/2018 07:29:54
|
Modify SAL message emulator for Tucson turnkey system
|
Before shipping the turnkey system to Tucson, a burn-in test must be run for 48 hours. This will require modification of the simple SAL message emulator used for in-house testing. The new emulator should include occasional start-ups and shut-downs of the ATS DM system, as well as randomly drawing image_ids/catalog names from a set of 100 pre-triggered in the DAQ (triggering the images is not necessary for this test, and is only realistically done when Tony's CCS system is running as well). In addition to proper logging, the system should issue a 'result set' log report every time it is shut down.
| 2 |
2,845 |
DM-14378
|
05/09/2018 11:23:21
|
Add Gen3 conversion scripting and tests to ci_hsc
|
Add gen3 conversion to the ci_hsc SCons build and test that we can use a Gen3 Butler to {{get}} Datasets {{put}} with a Gen2 Butler.
| 2 |
2,846 |
DM-14385
|
05/09/2018 19:08:48
|
typo in metaserv return key
|
The metaserv return object for {{http://lsst-qserv-dax01:5000/meta/v1/db/W13_sdss_v2/tables/RunDeepForcedSource/}} has a key {{result:}} instead of {{result}}. This appears to be a typo.
| 2 |
2,847 |
DM-14392
|
05/11/2018 01:21:56
|
Add user-friendliness to the provision script
|
Test will be run on NCSA openstack using 25 nodes and more...
| 8 |
2,848 |
DM-14393
|
05/11/2018 09:05:23
|
column datatypes returned by metaserv and dbserv should be consistent
|
Column datatype returned in the metadata section of the result by dbserv in absence of metaserv data should be consistent with the column datatype in metaserv. Currently, the dbserv returns "string" datatype, while metaserv returns "text". Also for 'BIT', dbserv returns "binary", while metaserv returns "boolean", for TINYINT, dbserv returns "int", metaserv returns "short". To facilitate debugging, please consider returning both dbserv and metaserv datatypes in the metadata section of the dbserv return. Per [~kennylo], column datatypes returned by metaserv come from the conversion table in lsst/dax/metaserv/schema_utils.py: {code:python} MYSQL_TYPE_MAP = { 'VARCHAR': "text", 'TIMESTAMP': "timestamp", 'BINARY': "binary", 'TINYINT': "short", 'BIGINT': "long", 'BIT': "boolean", 'FLOAT': "float", 'INTEGER': "int", 'DOUBLE': "double", 'CHAR': "text" } {code} column datatypes returned by dbserv are defined by the following conversion table from /lsst/dax/albuquery/Results.kt: {code:kotlin} fun jdbcToLsstType(jdbcType: JDBCType): String { return when (jdbcType) { JDBCType.INTEGER -> "int" JDBCType.SMALLINT -> "int" JDBCType.TINYINT -> "int" JDBCType.BIGINT -> "long" JDBCType.FLOAT -> "float" JDBCType.DOUBLE -> "double" JDBCType.DECIMAL -> "double" // FIXME JDBCType.NUMERIC -> "double" // FIXME JDBCType.ARRAY -> "binary" JDBCType.BINARY -> "binary" JDBCType.BIT -> "binary" JDBCType.BLOB -> "binary" JDBCType.CHAR -> "string" JDBCType.VARCHAR -> "string" JDBCType.NVARCHAR -> "string" JDBCType.CLOB -> "string" JDBCType.BOOLEAN -> "boolean" JDBCType.DATE -> "timestamp" JDBCType.TIMESTAMP -> "timestamp" JDBCType.TIMESTAMP_WITH_TIMEZONE -> "timestamp" JDBCType.TIME -> "time" else -> "UNKNOWN" } } {code}
| 8 |
2,849 |
DM-14433
|
05/14/2018 17:07:29
|
Perform forced photometry on visit images within the AP pipeline
|
Perform forced photometry on the corresponding PVI for every DIASource produced by the Alert Generation pipeline. Ensure this design is captured in LDM-151.
| 40 |
2,850 |
DM-14435
|
05/14/2018 17:18:36
|
Implement MOC overlay
|
Implement the visual display of MOC data in Firefly. Initially, display the HEALPixels actually identified in the MOC (which is just a list of HEALPixel IDs in NUNIQ format) as polygons overlaid on a sky display in Firefly. Include this in the usual Firefly layer control behavior.
* We would like multiple MOCs to be able to be displayed at once, with selectable colors.
* It may be useful to enable control of polygon boundary colors, transparency, etc. _as well as polygon fill_ with adjustable transparency.
* Be able to switch between outline (wireframe) and transparent color fill.
* A MOC shown as a HiPS layer with its checkbox off should lazy-load with any HiPS.
Implementation technical notes:
* We should read the table using our normal FITS table reading and put all the rows (only one column) into the store.
* Each number (NUNIQ) can be translated to a HEALPix level and number and then to 4 WorldPt corners using functions in HipsUtil and HealpixIndex.
* A new DrawObject will need to be created that can draw a filled polygon, filled with a transparent color for overlaying. Possibly we could modify FootprintObj or MarkerFootprintObj.
* If the points reduce to a single point (based on zoom level) it should draw a single point.
* The drawing layer should be able to change the color in our standard way.
| 20 |
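A minimal Python sketch of the NUNIQ translation described in the DM-14435 row above (Firefly itself would do this in JavaScript via HipsUtil/HealpixIndex); the use of healpy here is an assumption for illustration only:
{code:python}
import math

import healpy as hp  # assumption: healpy stands in for HipsUtil/HealpixIndex


def nuniq_to_order_ipix(nuniq):
    """Split a NUNIQ-encoded MOC cell into (HEALPix order, nested pixel index).

    NUNIQ packs both values as nuniq = 4 * 4**order + ipix.
    """
    order = int(math.log2(nuniq) // 2) - 1
    ipix = nuniq - 4 * 4 ** order
    return order, ipix


def nuniq_to_corners(nuniq):
    """Return the 4 (ra, dec) corners of a MOC cell, for drawing as a polygon."""
    order, ipix = nuniq_to_order_ipix(nuniq)
    nside = 2 ** order
    vecs = hp.boundaries(nside, ipix, step=1, nest=True)  # shape (3, 4)
    ra, dec = hp.vec2ang(vecs.T, lonlat=True)
    return list(zip(ra, dec))
{code}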
2,851 |
DM-14438
|
05/14/2018 17:31:38
|
Load MOCs from other sources
|
We would like multiple MOCs to be able to be displayed at once.
* Load more MOCs with the upload panel.
* -Load MOCs with a search panel that can search the CDS server.- _Moved to DM-15570_
For immediate implementation, we will put a test button at the top.
| 8 |
2,852 |
DM-14440
|
05/14/2018 17:55:45
|
dbserv v1: ParseException when spatial constraint is followed by other constraints
|
In dbserv v1 (albuquery), if I add another constraint to the spatial constraint, the query fails. For example, {noformat} > curl -L http://lsst-qserv-dax01:8080/sync/ -H "Accept: application/json" -d 'query=SELECT * FROM W13_sdss_v2.sdss_stripe82_01.RunDeepSource WHERE qserv_areaspec_circle(9.5,-1.1,0.002777777777777778) AND deblend_nchild < 2' {"message":"(conn=2417) Query processing error: QI=?: Failed to instantiate query: ParseException:Parse error(ANTLR):unexpected token: (:","type":"SQLException","code":null,"cause":null} {noformat} Without AND part it works fine. The query also works in dbserv v0. [SLAC discussion|https://lsstc.slack.com/archives/C2B709EQK/p1526333780000583]
| 2 |
2,853 |
DM-14441
|
05/14/2018 18:45:22
|
{{detect_isPrimary}} is not consistently set
|
Running to the end of the "Getting Started with the LSST piplines" tutorials https://pipelines.lsst.io/getting-started/multiband-analysis.html I think I have encountered a bug in how {{detect_isPrimary}} gets set. The documentation says that it is the union of {{deblend_nChild==0}} with {{detect_isPatchInner}} and {{detect_isTractInner}}, however: {code} >>> by_hand = (refTable['deblend_nChild']==0) & refTable['detect_isPatchInner'] & refTable['detect_isTractInner'] >>> np.where(by_hand != refTable['detect_isPrimary']) (array([4708, 4709, 4711, 4712, 4713, 4714, 4715, 4716, 4717, 4718, 4720, 4721, 4722, 4723, 4724, 4725, 4726, 4727, 4728, 4730, 4731, 4732, 4733, 4734, 4735, 4736, 4737, 4738, 4740, 4741, 4743, 4744, 4745, 4746, 4747, 4748, 4749, 4750, 4751, 4752, 4753, 4754, 4755, 4756, 4757, 4758, 4759, 4760, 4761, 4762, 4763, 4764, 4766, 4768, 4769, 4770, 4771, 4772, 4773, 4774, 4775, 4776, 4777, 4778, 4779, 4780, 4781, 4782, 4783, 4784, 4785, 4786, 4787, 4788, 4789, 4790, 4791, 4792, 4793, 4794, 4795, 4796, 4797, 4798, 4799, 4800, 4801, 4802, 4803, 4804, 4805, 4806, 4807, 4918]),) {\code} where {{refTable}} is loaded as specified in the tutorial outlined above. As always: let me know if you need more information to recreate this behavior. It is not impossible that this is just user error.
| 2 |
2,854 |
DM-14442
|
05/14/2018 19:46:57
|
F18 Qserv Release Engineering
|
This epic holds F18 effort budget for ongoing chores such as updating upstream packages in eups, Jenkins and Travis configuration maintenance, and compiler and platform compatibility fixes.
| 3 |
2,855 |
DM-14455
|
05/15/2018 02:02:59
|
F18 Butler Gen2 Critical Support
|
This epic holds the F18 effort budget for critical maintenance to Butler Gen 2.
| 20 |
2,856 |
DM-14458
|
05/15/2018 05:47:45
|
Add Redis scoreboard for incrementing session_id, job_num, and ack_ids
|
Since the principal components were built for level one image ingest and distribution, they have used ad hoc means for generating unique numbers and keeping track of the last number between restarts. With the development of the ATS turnkey system, a redis conf file was produced that sets the default number of redis instances to 18 instead of 16, so there is room for an increment scoreboard. This is a much cleaner way of addressing the problem: with db dumps before shutting down, and by not flushing the increment scoreboard at creation, this need is finally handled properly.
| 2 |
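A minimal sketch of the increment scoreboard idea from the DM-14458 row above, built on Redis INCR; the instance/database number and key names are assumptions for illustration:
{code:python}
import redis


class IncrScoreboard:
    """Hand out monotonically increasing IDs that survive component restarts."""

    def __init__(self, host="localhost", db=17):
        # db=17 assumes the 18-database redis.conf described in the ticket,
        # dedicating one database to counters; both values are illustrative.
        self._conn = redis.Redis(host=host, db=db)

    def next_session_id(self):
        return self._conn.incr("session_id")

    def next_job_num(self):
        return self._conn.incr("job_num")

    def next_ack_id(self):
        return self._conn.incr("ack_id")
{code}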
2,857 |
DM-14459
|
05/15/2018 08:13:14
|
Add check to (Posix)Datastore that prevents silent overwrite
|
A {{Datastore.put}} should raise when the file already exists (perhaps adding an option to force it).
| 1 |
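A minimal sketch of the check requested in the DM-14459 row above; the `formatter.write` call and the `clobber` flag are assumptions, not the actual PosixDatastore code:
{code:python}
import os


def put(path, dataset, formatter, clobber=False):
    """Write ``dataset`` to ``path``, refusing to silently overwrite an existing file."""
    if os.path.exists(path) and not clobber:
        raise FileExistsError(
            f"Datastore already contains a file at {path}; pass clobber=True to overwrite")
    formatter.write(dataset, path)
{code}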
2,858 |
DM-14471
|
05/16/2018 04:48:36
|
HiPS display test
|
DM-14149 made it possible to display HiPS images at level 20 by using JavaScript 56-bit integers. It created new methods to do bit-shifting. This ticket is to test the result from a science point of view.
| 2 |
2,859 |
DM-14472
|
05/16/2018 11:20:40
|
Integrate user expression parser into command line tool
|
The user expression parser is almost complete; it can now be integrated into the supertask CLI.
| 2 |
2,860 |
DM-14474
|
05/16/2018 15:28:22
|
Add environment variables to Jupyterlab deployments for Firefly
|
As part of the ease-of-use improvements in DM-14391, we would like to ask that two environment variables be provided in Jupyterlab deployments. FIREFLY_URL = path to the default Firefly server e.g. [https://lsst-lspdev.ncsa.illinois.edu/firefly] for the [https://lsst-lspdev.ncsa.illinois.edu/nb] environment. A trailing slash will work. FIREFLY_HTML = default landing page for Firefly, which is appended to the URL. We'd like this set to 'slate.html' as we intend to build around this view.
| 0.5 |
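A sketch of how a client might consume the two variables requested in the DM-14474 row above; the fallback values are assumptions for a local setup:
{code:python}
import os

# FIREFLY_URL and FIREFLY_HTML are the variables requested for the JupyterLab
# deployments; the defaults below are only illustrative local fallbacks.
firefly_url = os.environ.get("FIREFLY_URL", "http://localhost:8080/firefly/")
firefly_html = os.environ.get("FIREFLY_HTML", "slate.html")

# A trailing slash on FIREFLY_URL is tolerated, per the description.
landing_page = firefly_url.rstrip("/") + "/" + firefly_html
{code}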
2,861 |
DM-14490
|
05/17/2018 02:24:36
|
Fix k8S virtual network for Qserv at CC-IN2P3
|
Test will be run on NCSA openstack using 25 nodes and more...
| 8 |
2,862 |
DM-14491
|
05/17/2018 09:24:10
|
FireflyClient display_url does not make weblink in Jupyterlabdemo
|
The {{display_url}} method added to {{FireflyClient}} in DM-14391 should make a clickable weblink. This works in my classic notebook tests, but in the Jupyterlabdemo environment the URL is only being printed out. Some restructuring of the try/except block in this function will make it work.
| 1 |
2,863 |
DM-14494
|
05/17/2018 11:12:43
|
selectImages.py depends on defunct lsst.geom
|
{{selectImages.py}} depends on {{lsst.geom.convexHull}}, which was removed in DM-13790. It should be rewritten to use {{lsst.sphgeom.ConvexPolygon.convexHull}} instead. This issue is blocker priority because {{selectImages}} is included in all imports of {{lsst.ap.pipe}}.
| 1 |
2,864 |
DM-14496
|
05/17/2018 13:36:50
|
test_association broken
|
{{ap_verify}}'s {{test_association}} does not run with the latest version of {{ap_association}}, complaining about {{ap.association.DIAObject}} not existing. Please update the {{ap_verify}} tests so that they pass. I have tried to update the tests to use source catalogs instead of {{DIAObjectCollection}} myself, but wasn't able to make a valid DIA Object catalog.
| 1 |
2,865 |
DM-14497
|
05/17/2018 17:40:44
|
ap_pipe doesn't know filters for AssociationTask
|
{{AssociationDBSqliteConfig}} now requires an observatory-specific filter list as part of its config. This issue adds the DECam value to {{ApPipeConfig.setDefaults}} as a quick fix; presumably the settings can be moved to an obs-specific config override file in DM-12315. [Apologies for the edit wars; I had thought this could be fixed at the level of {{AssociationDBSqliteConfig}} itself]
| 1 |
2,866 |
DM-14505
|
05/18/2018 10:07:38
|
Fixed coverage and cache bugs
|
[~tatianag] found several issues while working on DM-8215. Fix the following:
* FITS caching is not working for URL-type WebPlotRequest.
* In certain cases coverage drawing is not showing overlays for multiple catalog tables.
Extra work done:
* Removed FileHolder.java, an interface that was only implemented by one class.
* Removed the useHiPSForCoverage option; this is now always on.
* Fixed an exception when loading WISE images from metaConvert.
* Removed CoverageChooser since it is no longer used.
_To test the cache bug:_
* Load an image using the URL option, such as [http://web.ipac.caltech.edu/staff/roby/demo/wise-m51-band2.fits]
* Then load the same image again. It should come from the cache.
_To test the coverage bug:_
* Use WISE to load two different results.
* They should both be overlaid.
| 3 |
2,867 |
DM-14508
|
05/18/2018 12:58:33
|
Implement RFC-474 to allow use of scikit-learn
|
Make a stub package for scikit-learn
| 2 |
2,868 |
DM-14509
|
05/18/2018 13:35:05
|
Option to turn sparse matrices into dense ones to explore eigenvalues
|
I need to explore the properties of the matrix and gradient in jointcal's {{ConstrainedPhotometryModel}}, but the sparse matrix code in Eigen doesn't include many options for doing so. To better facilitate this, [~astier] suggested outputting the matrix elements and creating a dense matrix from it. This ticket is just for the output and checking that it's sensible: detailed exploration will be in a different ticket. A quick search suggests that one can pass the sparse matrix to one of Eigen's constructors to directly get the dense matrix. It might be easier to explore the matrix properties with numpy.
| 2 |
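A sketch of the numpy-side exploration suggested at the end of the DM-14509 row above, assuming the matrix elements were dumped as "row col value" triplets (the file name and format are assumptions):
{code:python}
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical text dump of the jointcal Hessian as "row col value" triplets.
rows, cols, vals = np.loadtxt("hessian_triplets.txt", unpack=True)
n = int(max(rows.max(), cols.max())) + 1
dense = coo_matrix((vals, (rows.astype(int), cols.astype(int))), shape=(n, n)).toarray()

# The matrix is symmetric, so eigvalsh is appropriate; the smallest
# eigenvalues reveal near-degenerate directions in the fit.
eigenvalues = np.linalg.eigvalsh(dense)
print("eigenvalue range:", eigenvalues.min(), eigenvalues.max())
{code}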
2,869 |
DM-14510
|
05/18/2018 13:49:47
|
Implement line search
|
[~astier] suggests that we should implement a [line search|https://en.wikipedia.org/wiki/Line_search] in jointcal, to deal with the non-linearities in the photometric model (and because we may eventually have those in astrometry, as the astrometry model gets more complex). He suggested using [Brent's method|https://en.wikipedia.org/wiki/Brent%27s_method], which is [available in GSL|https://www.gnu.org/software/gsl/doc/html/roots.html#c.gsl_root_fsolver_brent]. It might be easier to just take the code directly from GSL, instead of writing an interface layer, given the nature of the jointcal models. At the top of the outlier rejection loop in {{FitterBase.minimize()}}, this would look something like: {code} Eigen::VectorXd delta = chol.solve(grad); scale = lineSearch(delta); offsetParams(delta*scale); {code}
| 8 |
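A Python sketch of the scale-factor search for the DM-14510 row above, with scipy's bounded Brent-style minimizer standing in for the GSL routine; `chi2_with_offset` and the fitter object are assumptions mirroring the C++ snippet, not jointcal's API:
{code:python}
from scipy.optimize import minimize_scalar


def line_search(fitter, delta, max_scale=2.0):
    """Find the step scale along ``delta`` that minimizes chi2.

    ``fitter.chi2_with_offset(step)`` is a hypothetical helper that evaluates
    chi2 at params + step without permanently offsetting the parameters.
    """
    result = minimize_scalar(lambda s: fitter.chi2_with_offset(s * delta),
                             bounds=(0.0, max_scale), method="bounded")
    return result.x


# Usage, mirroring the C++ snippet in the description:
#   delta = chol.solve(grad)
#   scale = line_search(fitter, delta)
#   fitter.offsetParams(delta * scale)
{code}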
2,870 |
DM-14512
|
05/18/2018 14:32:44
|
Remember filename of uploaded DS9 Region file
|
I notice that (successful) region file uploads produce an entry in the layer dialog of the form "REGION_PLOT_TYPE-(serial number)" (e.g., "REGION_PLOT_TYPE-7"). It would be helpful to the user if the filename of the uploaded file were retained and displayed in the layer dialog. Since regions can be created by API, not just by file upload, the file name convention can't apply in all cases, so the "serial number" approach may still be appropriate for API uploads. However, even in that case, I think a more user-friendly string than "REGION_PLOT_TYPE" would be an improvement. Does the API allow regions to be given names that could then be shown in the layer dialog? TODO in this ticket:
* Use the file name as the label in the layer dialog. If the file name is too long, treat it the same way as an uploaded image: show only as many characters as fit in the available space, and display the full name as a tooltip on mouse-over.
* Make sure it behaves the same in the API, providing a label field for the caller to set the display label. If no label is supplied, use the string "ds9 region overlay-\{N}", where N starts at 1 and increases in sequence as it does now.
Also implemented:
* The file upload pane is updated to show a long uploaded file name in the same shortened style as the layer dialog.
| 3 |
2,871 |
DM-14514
|
05/18/2018 14:55:49
|
Provide UI for uploading "instrument footprints" to Firefly (relocatable regions)
|
Inside Firefly, there is a UI option to place "footprints" (e.g., of astronomical instruments) over displayed images. Footprints are created from region files in the Firefly repo (https://github.com/Caltech-IPAC/firefly/tree/dev/src/firefly_data/footprint) and appear to be treated by Firefly as standard DS9 region files where the ra,dec coordinate system is treated as having a relocatable origin and orientation (i.e., as field angles rather than true ra and dec). It would be very nice to make this capability available (1) in the UI, for region data uploaded from a file by a user, and (2) in the API, for region data sent from the JavaScript and/or Python APIs. For (1), one possibility would be that the region file upload dialog might get a check-box added to it saying "make origin movable" or "treat as footprint". Alternatively, an "upload footprint" element might get added to the existing footprint menu. Please see also tickets I've filed in the IRSA system, including IRSA-1612 and IRSA-1613, regarding the treatment of footprints and their coordinates in the layer dialog.
| 5 |
2,872 |
DM-14515
|
05/18/2018 15:00:08
|
Improvements to Firefly footprint menu
|
Two suggestions (only!) for the footprint menu in Firefly:
# Make it hierarchical, so that, e.g., all the HST instrument footprints appear in a submenu. (I would like to have at least three footprints for LSST, for instance.)
# Make its content part of the application configuration, e.g., controlled via the {{suit}} package for LSST or the {{ife}} package for IRSA. The Firefly repo could still contain a standard set of footprint files, but more could be added in the application, and even some of the standard ones suppressed, perhaps?
| 1 |
2,873 |
DM-14516
|
05/18/2018 15:56:58
|
bug fix: make region file parsing case insensitive
|
When uploading a DS9 region file to be overlaid on an image, the keywords are currently required to be lower-case. Let's make the parsing case-insensitive. To test:
* Do an image search in Firefly on target (0,0, J2000) and upload a footprint file from the 'Load DS9 Region File' popup.
* Go to firefly/demo/ffapi-footprint-test.html, and add a footprint layer from the footprint tool page.
| 1 |
2,874 |
DM-14517
|
05/18/2018 16:25:54
|
Fix bug in schema alias mapping in ap_association.
|
This work was originally on DM-14507; however, it was found that the failure to use alias mapping was due to a bug in the original alias mapping work. As such, this ticket will fix the bug and create a new unit test covering the case of a mismatched schema in DIASource storage.
| 1 |
2,875 |
DM-14520
|
05/18/2018 17:40:45
|
Re-enable Dataset->DataUnit foreign keys
|
Dataset's foreign keys to DataUnit have been commented-out in the schema file. Uncomment them and get things working (actually, we'll need to update their representation in the schema in order to make them compound keys). I've already started this in trying to get the schema on DM-12620 synced with master, and I think I'm pretty close, but it makes sense to split this off from that ticket.
| 1 |
2,876 |
DM-14521
|
05/18/2018 17:54:44
|
Update qa_explorer for new coord types
|
There are mentions of {{IcrsCoord}} in {{qa_explorer}}. These need to be updated to the new coord objects.
| 1 |
2,877 |
DM-14533
|
05/21/2018 07:42:21
|
Remove geom from packages expected by ci_hsc
|
Geom was removed from the dependency tree in a previous ticket, but ci_hsc's test of which setup packages are recorded still expects it, causing a failure.
| 1 |
2,878 |
DM-14534
|
05/21/2018 10:36:51
|
Fix measurementInvestigationLib.makeRerunCatalog parent keys
|
lsst.meas.base.measurementInvestigationLib has convenience functions for setting up catalogs to rerun measurement tasks. However, if you call makeRerunCatalog with an id list including children but not their parents, these children will be skipped by runPlugins (e.g. from lsst.meas.base.SingleFrameMeasurementTask). makeRerunCatalog should offer options to either reset child objects' parent keys to zero if their parents are not in the list (the default) or add the parents to the id list.
| 2 |
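A sketch of the two proposed behaviours for the DM-14534 row above, expressed on a generic (id, parent) view of the catalog; the record access pattern is an assumption, not the actual afw table or meas_base API:
{code:python}
def fix_parents(parent_of, id_list, add_missing_parents=False):
    """Make an id list safe for rerunning measurement.

    ``parent_of`` maps a source id to its parent id (0 means no parent).
    Either add missing parents to the list, or note which orphaned children
    should have their parent key reset to 0 so runPlugins will not skip them.
    """
    ids = set(id_list)
    reset_to_zero = set()
    for src_id in list(ids):
        parent = parent_of.get(src_id, 0)
        if parent and parent not in ids:
            if add_missing_parents:
                ids.add(parent)
            else:
                reset_to_zero.add(src_id)  # treat the child as a parent object
    return sorted(ids), reset_to_zero
{code}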
2,879 |
DM-14536
|
05/21/2018 11:59:39
|
Utilities to include in Firefly docker container
|
Please add the following utilities to the Firefly docker container:
- procps (http://procps.sourceforge.net/)
- wget
- emacs
These utilities will be helpful when debugging a dockerized Firefly deployment using kubectl.
| 1 |
2,880 |
DM-14542
|
05/22/2018 02:42:09
|
Separate cluster provisioning and k8s setup
|
Separate the cluster provisioning (terraform) from k8s setup
| 3 |
2,881 |
DM-14543
|
05/22/2018 07:12:15
|
Fix DatasetType registration
|
* Automatically register components, and
* Allow for duplicate insertion only if identical.
| 0.5 |
2,882 |
DM-14545
|
05/22/2018 07:16:19
|
Add test for composite calexp to Butler
|
Add test for composite calexp to Butler
| 0.5 |
2,883 |
DM-14548
|
05/22/2018 10:08:31
|
Many refraction functions are documented to return float but return Quantity
|
The new refraction code in afw has many functions that claim to return float, but actually return astropy Quantity. This has two issues:
- The code in refraction is hard to understand because atmosTerm1 and 2 have units of {{Pa / mbar}} but they are supposed to be dimensionless. It's hard to see how one gets radians from that. For clarity I think some values should be converted to floats instead of being left as Quantities.
- The documented return value is wrong for many of the functions.
I stumbled across this while implementing DM-14429 and had some trouble figuring out what was going on.
| 2 |
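A minimal sketch of the proposed cleanup, keeping Quantities internally but converting to plain float before returning so the documented return type is honest; the function and constant below are illustrative, not the actual afw refraction code.
{code}
# Illustrative only: not the real afw refraction functions.
import astropy.units as u

def pressureTermExample(pressure):
    """Return a dimensionless pressure term as a plain float,
    matching the documented return type.
    """
    term = pressure / (1013.25 * u.hPa)             # Quantity
    return term.to(u.dimensionless_unscaled).value  # plain float

print(pressureTermExample(101325.0 * u.Pa))  # ~1.0, a float, not a Quantity
{code}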
2,884 |
DM-14554
|
05/23/2018 11:02:17
|
Produce GraphViz plots from existing or example QuantumGraph
|
`stac` can make GraphViz files, but `stack` is currently not working: the `run` command depends on a Butler and needs to be updated to work with the new repos. It would be useful to be able to generate GraphViz plots from an existing QuantumGraph without executing anything; I will see how this can be done with the existing options.
| 2 |
2,885 |
DM-14557
|
05/23/2018 11:35:42
|
Add package docs to datasets
|
The {{ap_verify}} dataset framework is currently documented in the Sphinx documentation for {{ap_verify}}. This documentation assumes that individual datasets will at least have documentation of their GitHub repository, so that the pages can be linked from lists of known datasets or from examples. However, no such documentation has been written yet. This issue adds a documentation placeholder to {{ap_verify_dataset_template}}, and standardized package documentation to {{ap_verify_hits2015}}, {{ap_verify_testdata}}, and any other datasets existing at the time of work. It will *not* move the documentation of the dataset framework from its current location in {{ap_verify}}.
| 3 |
2,886 |
DM-14558
|
05/23/2018 12:48:19
|
Load all DIASources at once in AssociationTask.update_dia_objects
|
Currently, the update_dia_objects method within AssociationTask performs a separate load for each DIAObject it updates. This ticket will change this to one database access for all DIASources that are associated with the updated DIAObjects (a sketch follows this record).
| 2 |
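A sketch of the intended change from one database round trip per DIAObject to a single bulk query; the method and column names below are hypothetical.
{code}
# Hypothetical db interface, shown only to contrast the two access patterns.
def loadSourcesPerObject(db, updatedObjIds):
    # Current pattern: one query per updated DIAObject.
    return {objId: db.load_dia_sources([objId]) for objId in updatedObjIds}

def loadSourcesBulk(db, updatedObjIds):
    # Proposed pattern: one query for all ids, grouped in memory afterwards.
    rows = db.load_dia_sources(updatedObjIds)
    grouped = {}
    for row in rows:
        grouped.setdefault(row["diaObjectId"], []).append(row)
    return grouped
{code}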
2,887 |
DM-14560
|
05/23/2018 14:10:30
|
Assist with KPM30 tests.
|
Debug Qserv shared scan scheduling and related issues for KPM30.
| 13 |
2,888 |
DM-14572
|
05/24/2018 13:52:30
|
Modernize documenteer package deployment
|
Modernize Documenteer's PyPI deployment by: * switching to setuptools_scm (a common standard), and * using conditionals for PyPI deployment so only one matrix job deploys to PyPI (a sketch follows this record).
| 0.5 |
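A minimal setup.py sketch of the setuptools_scm switch; the package metadata is a placeholder, not Documenteer's actual configuration.
{code}
# Placeholder package metadata; only the setuptools_scm wiring is the point.
from setuptools import setup, find_packages

setup(
    name="example-package",
    use_scm_version=True,                # derive the version from git tags
    setup_requires=["setuptools_scm"],
    packages=find_packages(),
)
{code}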
2,889 |
DM-14580
|
05/25/2018 11:14:02
|
Create tests for BestSeeingWcsSelectImagesTask.
|
Due to travel it was preferable to review DM-11953 without a unit test. This ticket is to create the missing tests, possibly with advice from [~rowen].
| 2 |
2,890 |
DM-14592
|
05/29/2018 15:55:58
|
Add default .user_setups file to Jupyter environment
|
It would be very helpful if the jupyter environment had a {{.user_setups}} file by default, with just a comment saying what the purpose of the file is.
| 0.5 |
2,891 |
DM-14593
|
05/29/2018 17:46:23
|
Clarify use of repo subdirectory in ap_verify_hits2015 dataset
|
The README of ap_verify_hits2015 currently says the following about the {{repo}} subdirectory: "Butler repo into which raw data can be ingested. This should be copied to an appropriate location before ingestion. Currently contains the appropriate DECam {{_mapper}} file." While this text does suggest that a user should copy {{repo}} to a new location and then somehow make ingestion happen, it would benefit from clarification. Specifically: 1) The README should make it clear that the {{repo}} directory should not be altered in-place, as edits to a local copy of ap_verify_hits2015 would prevent users from git-pulling any changes, and 2) Documentation about ingestion for this specific dataset should be added/linked appropriately (presumably via ap_verify's ingest_dataset.py script).
| 2 |
2,892 |
DM-14594
|
05/30/2018 01:57:54
|
Refactor Terraform resource creation
|
Refactor the Terraform (TF) setup for ease of use and to handle edge cases.
| 1 |
2,893 |
DM-14597
|
05/30/2018 08:51:21
|
Multiband driver uses wrong method signature in runDetection
|
The multiband driver can run detection on a coadd, but this code path is rarely exercised. The method signature for detecting coadd sources in pipe_tasks has changed, but the corresponding call in pipe_drivers was not updated. This ticket should fix the call to runDetection in detectCoaddSources.
| 3 |
2,894 |
DM-14612
|
05/30/2018 14:53:50
|
Fix race condition in new jointcal matrix dump test
|
[~rowen] discovered a race condition in {{test_jointcal_cfht_minimal.test_jointcalTask_2_visits_photometry()}}, because the deletion of the files being tested (added in DM-14509) occurs in {{tearDown}}. Easiest fix is to delete the files after they've been checked for.
| 0.5 |
2,895 |
DM-14620
|
05/31/2018 11:54:41
|
Move ap_association DIA schemas to their own module
|
Incorporating l1dbproto into ap_association requires definitions of, and mappings between, afw schemas and db schemas. This ticket will expand the make_minimal_dia_*_schema methods to more closely approximate those of l1dbproto and the DPDD. It will also create mappings between the ip_diffim schemas input into association and the DIA schemas.
| 2 |
2,896 |
DM-14625
|
05/31/2018 17:16:18
|
Fix ndarray compiler warnings
|
Fix ndarray compiler warnings in the few remaining packages that have not been updated. This consists of removing {code} if (_import_array() < 0) { PyErr_SetString(PyExc_ImportError, "numpy.core.multiarray failed to import"); return nullptr; } {code} and the import of the numpy headers from the pybind11 wrappers.
| 1 |
2,897 |
DM-14627
|
05/31/2018 18:13:33
|
Firefly performance test and analysis through NB
|
This captures the work effort of testing the performance of Firefly image display in notebooks deployed at the Kubernetes commons at NCSA. The work and suggestions follow. The performance study was prompted by demos of the qa_explorer notebook at the DMLT, with the ginga and the Firefly backends. Using the lsst-lspdev notebook environment and Firefly server, I checked the performance of Firefly panning at zoom level=1 on a 4k by 4k image. Panning to a new position typically took 5-10 seconds, but occasionally as long as 20. There is a strong caching effect when panning to a position previously visited: the pan then typically took between 0.2 and 1 second. With assistance from Simon, I was able to install the qa_explorer notebook and operate its interactive feature of selecting a star or galaxy from a scatter plot, and zooming and panning to its location on the closest coadd image. The ginga backend typically took a few to 5 seconds to display the desired location. The Firefly backend typically took 15-20 seconds for the display, zoom and pan. The Firefly display backend always shows the entire image fit to a frame before zooming and panning to the desired location. Performance could be improved by batching together the initial display, zoom and pan. Alternatively, if DAX imageserv were available it would be more efficient to make a cutout at the desired location and then display it, a solution we used in our forced photometry demo on the PDAC. The show_fits method of FireflyClient will take an initial zoom level as an argument. For the qa_explorer application, it would be better to set the initial zoom level to 1, to skip the time-consuming display of the entire image. Currently the afw.display abstraction does not support this; it was developed for ds9, where zooming is easy. The mtv method of afw.display could be modified to take optional arguments such as zoom level and a position on which to center the display.
| 2 |
2,898 |
DM-14628
|
05/31/2018 18:13:38
|
meas_astrom pytest setup is missing E266
|
The setup.cfg file is correct for flake8, but not for the flake8 extension of pytest. This apparently causes building meas_astrom to fail on my Mac, but not on lsst-dev. More generally, it would be good if there were a way to insist that the flake8 exclusions be identical to the pytest flake8 exclusions, to prevent this kind of problem. That is outside the scope of this quick-fix ticket, however.
| 0.5 |